Search Results

Search found 8523 results on 341 pages for 'bobby tables'.

Page 99/341

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username / password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and they are managed by different subsystems. They do, however, interact with the same tables / data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data.

    My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inheriting off that table (either in the single table, or as two separate tables with a 1-to-1 PK with AuditableIdentity). All operations would use the common AuditableIdentity PK for the CreatedBy, ModifiedBy etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or the system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that?

    Is there a best practice for this scenario? Is this an appropriate way of approaching the problem - or would you not bother with the common table and just rely on joins (to the two separate, unrelated user / token tables) later to determine which user type matches which audit records?
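    A minimal sketch of the shared-identity idea described above (SQL Server-flavoured syntax; every table and column name here is an illustrative assumption, not taken from the actual system):

        CREATE TABLE AuditableIdentity (
            AuditableIdentityId INT IDENTITY(1,1) PRIMARY KEY,
            IdentityType        CHAR(1) NOT NULL  -- 'H' = human user, 'T' = OAuth token account
        );

        CREATE TABLE HumanUser (
            AuditableIdentityId INT PRIMARY KEY REFERENCES AuditableIdentity (AuditableIdentityId),
            Username            NVARCHAR(100) NOT NULL,
            PasswordHash        VARBINARY(64) NOT NULL
        );

        CREATE TABLE TokenAccount (
            AuditableIdentityId INT PRIMARY KEY REFERENCES AuditableIdentity (AuditableIdentityId),
            TokenIdentifier     NVARCHAR(200) NOT NULL
        );

    Audit columns such as CreatedBy / ModifiedBy would then store AuditableIdentityId values, and a single join to AuditableIdentity is enough to tell a human change from a token-based one.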

    Read the article

  • How do I pass a lot of parameters to views in Django?

    - by Mark
    I'm very new to Django and I'm trying to build an application to present my data in tables and charts. Until now my learning process has gone very smoothly, but now I'm a bit stuck. My pageview retrieves large amounts of data from a database and puts it in the context. The template then generates different HTML tables. So far so good. Now I want to add different charts to the template. I manage to do this by defining <img src="..." /> tags. The Matplotlib chart is generated in my chartview and returned via:

        response = HttpResponse(content_type='image/png')
        canvas.print_png(response)
        return response

    Now I have different questions:

    1. The data is retrieved twice from the database: once in the pageview to render the tables, and again in the chartview for making the charts. What is the best way to pass the data, already in the context of the page, to the chartview?
    2. I need a lot of charts, each with different datasets. I could make a chartview for each chart, but probably there is a better way. How do I pass the different dataset names to the chartview? Some charts have 20 datasets, so I don't think that passing these dataset parameters via the URL (like <img src="chart/dataset1/dataset2/.../dataset20/chart.png" />) is the right way.

    Any advice?

    Read the article

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore no foreign keys), no indexes (clustered or otherwise), and no constraints on this warehouse. In other words, it is 100% efficiency free.

    We are going to put indexes in based on usage - by analyzing new queries and current query performance. So, instead of doing it our old-fashioned sweat-and-grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package and ran a "Tuning" trace, saved it to a table and analyzed the output in the Tuning Advisor. Most of the lookups are done as:

        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3
        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4

    and when analyzed, these statements come back with the reason "Event does not reference any tables". Huh? Does it not see the FROM [dbo].[Company]?! What is going on here? So, I have multiple questions:

    1. How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch?
    2. Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?
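    While waiting for the Tuning Advisor to cooperate, one manual fallback for the lookup pattern shown above is a covering index; this is only a sketch of that idea (the index name and the choice of key/included columns are my guess, not output from the Advisor):

        CREATE NONCLUSTERED INDEX IX_Company_CompanyID_Current
            ON [dbo].[Company] ([CompanyID], [StartDateTime], [EndDateTime])
            INCLUDE ([Active], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID]);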

    Read the article

  • Combining multiple rows into one row, Oracle

    - by Torbjørn
    Hi. I'm working with a database which is created in Oracle and used in GIS software through SDE. One of my colleagues is going to make some statistics out of this database and I'm not able to find a reasonable SQL query for getting the data. I have two tables, one with registrations and one with registration details. It's a one-to-many relationship, so a registration can have one or more details connected to it (no maximum number).

    Table 1: Registration

        RegistrationID  Date        TotLenght
        1               01.01.2010  5
        2               01.02.2010  15
        3               05.02.2009  10

    Table 2: RegistrationDetail

        DetailID  RegistrationID  Owner  Type  Distance
        1         1               TD     UB    1,5
        2         1               AB     US    2
        3         1               TD     UQ    4
        4         2               AB     UQ    13
        5         2               AB     UR    13,1
        6         3               TD     US    5

    I want the resulting selection to be something like this:

        RegistrationID  Date        TotLenght  DetailID  RegistrationID  Owner  Type  Distance  DetailID  RegistrationID  Owner  Type  Distance  DetailID  RegistrationID  Owner  Type  Distance
        1               01.01.2010  5          1         1               TD     UB    1,5       2         1               AB     US    2         3         1               TD     UQ    4
        2               01.02.2010  15         4         2               AB     UQ    13        5         2               AB     UR    13,1
        3               05.02.2009  10         6         3               TD     US    5

    With a normal join I get one row per registration and detail. Can anyone help me with this? I don't have administrator rights for the database, so I can't create any tables or variables. If it's possible, I could copy the tables into Access.
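    One hedged sketch of how this could be pivoted in Oracle with ROW_NUMBER() and conditional aggregation, assuming a known maximum number of details per registration (repeat the CASE block for rn = 3 and beyond as needed). The Date column is quoted because DATE is a reserved word; adjust the quoting to match how the column was actually created:

        SELECT r.RegistrationID, r."Date", r.TotLenght,
               MAX(CASE WHEN d.rn = 1 THEN d.DetailID END) AS DetailID1,
               MAX(CASE WHEN d.rn = 1 THEN d.Owner    END) AS Owner1,
               MAX(CASE WHEN d.rn = 1 THEN d.Type     END) AS Type1,
               MAX(CASE WHEN d.rn = 1 THEN d.Distance END) AS Distance1,
               MAX(CASE WHEN d.rn = 2 THEN d.DetailID END) AS DetailID2,
               MAX(CASE WHEN d.rn = 2 THEN d.Owner    END) AS Owner2,
               MAX(CASE WHEN d.rn = 2 THEN d.Type     END) AS Type2,
               MAX(CASE WHEN d.rn = 2 THEN d.Distance END) AS Distance2
               -- ... repeat the MAX(CASE ...) block for rn = 3 and beyond ...
        FROM Registration r
        JOIN (SELECT rd.DetailID, rd.RegistrationID, rd.Owner, rd.Type, rd.Distance,
                     ROW_NUMBER() OVER (PARTITION BY rd.RegistrationID ORDER BY rd.DetailID) AS rn
              FROM RegistrationDetail rd) d
          ON d.RegistrationID = r.RegistrationID
        GROUP BY r.RegistrationID, r."Date", r.TotLenght;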

    Read the article

  • Vertical Image Border Changes Alignment when Screen is not maximized

    - by Randy
    I have some pretty straightforward HTML code with a few tables to organize various items of text or images. All works fine except that I need to place a vertical border on both the left and right sides of the screen. I am able to do this with a 2x2 pixel image that I stretch out. When the user has their screen maximized, everything looks great. But when the user hits "Restore Down", the borders stay in place, but the tables get shoved down so that they start below where the borders end, which is off screen. In other words, the relational alignment between the borders and the tables gets all screwed up.

    Does anybody know how to make this alignment stay consistent on a restore down? I'm pretty much a newbie with HTML and ASP, so speak slowly. If there is a better method to accomplish this, I'm all ears. Thanks. Here is the relevant section of code:

        <%@ Master Language="C#" AutoEventWireup="true" CodeFile="MasterPage.master.cs" Inherits="MasterPage" %>
        <form id="form1" runat="server">
        <asp:Image ID="LeftBorder" src="../Images/Border_Blue.jpg" runat="server" WIDTH="15" HEIGHT="1000" BORDER="0" alt="Image Missing" align="left"/>
        <asp:Image ID="RightBorder" src="../Images/Border_Blue.jpg" runat="server" WIDTH="15" HEIGHT="1000" BORDER="0" alt="Image Missing" align="right" />
        <table id="BannerTable" style="height: 100px">
          <tr>
            <td width="934px">
              <img src="../Images/Header.jpg" alt="Image Missing" id="ImgBanner" align="left"/></td>
          </tr>
        </table>

    Read the article

  • FluentNHibernate, getting 1 column from another table

    - by puffpio
    We're using FluentNHibernate and we have run into a problem where our object model requires data from two tables, like so:

        public class MyModel
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual int FooId { get; set; }
            public virtual string FooName { get; set; }
        }

    There is a MyModel table that has Id, Name, and FooId as a foreign key into the Foo table. The Foo table contains Id and FooName. This problem is very similar to another post here: http://stackoverflow.com/questions/1896645/nhibernate-join-tables-and-get-single-column-from-other-table but I am trying to figure out how to do it with FluentNHibernate. I can map the Id, Name, and FooId very easily, but I am having trouble mapping FooName. This is my class map:

        public class MyModelClassMap : ClassMap<MyModel>
        {
            public MyModelClassMap()
            {
                this.Id(a => a.Id).Column("AccountId").GeneratedBy.Identity();
                this.Map(a => a.Name);
                this.Map(a => a.FooId);

                // my attempt to map FooName but it doesn't work
                this.Join("Foo", join => join.KeyColumn("FooId").Map(a => a.FooName));
            }
        }

    With that mapping I get this error:

        The element 'class' in namespace 'urn:nhibernate-mapping-2.2' has invalid child element 'join' in namespace 'urn:nhibernate-mapping-2.2'. List of possible elements expected: 'joined-subclass, loader, sql-insert, sql-update, sql-delete, filter, resultset, query, sql-query' in namespace 'urn:nhibernate-mapping-2.2'.

    Any ideas?

    Read the article

  • Common one-to-many table for multiple entities

    - by Ben V
    Suppose I have two tables, Customer and Vendor. I want to have a common address table for customer and vendor addresses. Customers and vendors can both have one to many addresses.

    Option 1: Add columns for the AddressID to the Customer and Vendor tables. This just doesn't seem like a clean solution to me.

        Customer       Vendor        Address
        --------       ---------     ---------
        CustomerID     VendorID      AddressID
        AddressID1     AddressID1    Street
        AddressID2     AddressID2    City...

    Option 2: Move the foreign key to the Address table. For a Customer, Address.CustomerID will be populated. For a Vendor, Address.VendorID will be populated. I don't like this either - I shouldn't need to modify the address table every time I want to use it for another entity.

        Customer       Vendor        Address
        --------       ---------     ---------
        CustomerID     VendorID      AddressID
                                     CustomerID
                                     VendorID

    Option 3: I've also seen this - only one foreign key column on the Address table, with another column to identify which foreign key table the address belongs to. I don't like this one because it requires all the foreign key tables to have the same type of ID. It also seems messy once you start coding against it.

        Customer       Vendor        Address
        --------       ---------     ---------
        CustomerID     VendorID      AddressID
                                     FKTable
                                     FKID

    So, am I just too picky, or is there something I haven't thought of?
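    For comparison only - this is not one of the three options above, and the syntax and names are purely illustrative - a fourth pattern keeps Address generic and adds one small junction table per owning entity:

        CREATE TABLE Address (
            AddressID INT IDENTITY(1,1) PRIMARY KEY,
            Street    NVARCHAR(200),
            City      NVARCHAR(100)
        );

        CREATE TABLE CustomerAddress (
            CustomerID INT NOT NULL REFERENCES Customer (CustomerID),
            AddressID  INT NOT NULL REFERENCES Address (AddressID),
            PRIMARY KEY (CustomerID, AddressID)
        );

        CREATE TABLE VendorAddress (
            VendorID  INT NOT NULL REFERENCES Vendor (VendorID),
            AddressID INT NOT NULL REFERENCES Address (AddressID),
            PRIMARY KEY (VendorID, AddressID)
        );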

    Read the article

  • atk4 advanced crud?

    - by thindery
    I have the following tables:

        -- -----------------------------------------------------
        -- Table `product`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `product` (
          `id` INT NOT NULL AUTO_INCREMENT ,
          `productName` VARCHAR(255) NULL ,
          `s7location` VARCHAR(255) NULL ,
          PRIMARY KEY (`id`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `pages`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `pages` (
          `id` INT NOT NULL AUTO_INCREMENT ,
          `productID` INT NULL ,
          `pageName` VARCHAR(255) NOT NULL ,
          `isBlank` TINYINT(1) NULL ,
          `pageOrder` INT(11) NULL ,
          `s7page` INT(11) NULL ,
          PRIMARY KEY (`id`) ,
          INDEX `productID` (`productID` ASC) ,
          CONSTRAINT `productID`
            FOREIGN KEY (`productID` )
            REFERENCES `product` (`id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION)
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `field`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `field` (
          `id` INT NOT NULL AUTO_INCREMENT ,
          `pagesID` INT NULL ,
          `fieldName` VARCHAR(255) NOT NULL ,
          `fieldType` VARCHAR(255) NOT NULL ,
          `fieldDefaultValue` VARCHAR(255) NULL ,
          PRIMARY KEY (`id`) ,
          INDEX `id` (`pagesID` ASC) ,
          CONSTRAINT `pagesID`
            FOREIGN KEY (`pagesID` )
            REFERENCES `pages` (`id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION)
        ENGINE = InnoDB;

    I have gotten CRUD to work on the 'product' table:

        // addproduct.php
        class page_addproduct extends Page {
            function init(){
                parent::init();
                $crud = $this->add('CRUD')->setModel('Product');
            }
        }

    This works, but I need to get it so that when a new product is created it basically allows me to add new rows into the pages and field tables. For example, the products in the tables are a print product (like a greeting card) that has multiple pages to render. Page 1 may have 2 text fields that can be customized, page 2 may have 3 text fields, a slider to define text size, and a drop-down list to pick a color, and page 3 may have five text fields that can all be customized. All three pages (and all form elements, 12 in this example) are associated with 1 product. So when I create the product, could I add a button to create a page for that product, then within the page add a button to add a new form element field? I'm still somewhat new to this, so my db structure may not be ideal. I'd appreciate any suggestions and feedback! Could someone point me toward some information, tutorials, documentation, ideas, or suggestions on how I can implement this?

    Read the article

  • Same query has nested loops when used with INSERT, but Hash Match without.

    - by AaronLS
    I have two tables: one has about 1500 records and the other has about 300000 child records - roughly a 1:200 ratio. I stage the parent table to a staging table, SomeParentTable_Staging, and then I stage all of its child records, but I only want the ones that are related to the records I staged in the parent table. So I use the query below to perform this staging by joining with the parent table's staged data.

        --Stage child records
        INSERT INTO [dbo].[SomeChildTable_Staging]
            ([SomeChildTableId]
            ,[SomeParentTableId]
            ,SomeData1
            ,SomeData2
            ,SomeData3
            ,SomeData4)
        SELECT [SomeChildTableId]
            ,D.[SomeParentTableId]
            ,SomeData1
            ,SomeData2
            ,SomeData3
            ,SomeData4
        FROM [dbo].[SomeChildTable] D
        INNER JOIN dbo.SomeParentTable_Staging I ON D.SomeParentTableID = I.SomeParentTableID;

    The execution plan indicates that the tables are being joined with a Nested Loop. When I run just the select portion of the query without the insert, the join is performed with a Hash Match. So the select statement is the same, but in the context of an insert it uses the slower nested loop. I have added a non-clustered index on D.SomeParentTableID so that there is an index on both sides of the join. I.SomeParentTableID is a primary key with a clustered index. Why does it use a nested loop for inserts that use a join? Is there a way to improve the performance of the join for the insert?
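    If the goal is simply to test whether the hash join really is faster for the insert, one low-risk experiment is a join hint on the same statement (sketch only - hints are normally a last resort, and the optimizer may have picked the nested loop for a good reason):

        INSERT INTO [dbo].[SomeChildTable_Staging]
            ([SomeChildTableId], [SomeParentTableId], SomeData1, SomeData2, SomeData3, SomeData4)
        SELECT [SomeChildTableId], D.[SomeParentTableId], SomeData1, SomeData2, SomeData3, SomeData4
        FROM [dbo].[SomeChildTable] D
        INNER JOIN dbo.SomeParentTable_Staging I ON D.SomeParentTableID = I.SomeParentTableID
        OPTION (HASH JOIN);  -- forces a hash match for the joins in this statement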

    Read the article

  • SQL Inner Join : DB stuck

    - by SurfingCat
    I posted this question a few days ago but I didn't explain exactly what I want, so I am asking it again, better formulated and with some new information added to clarify my problem.

    I have a MySQL DB with MyISAM tables. The two relevant tables are:

        orders_products: orders_products_id, orders_id, product_id, product_name, product_price, product_name, product_model, final_price, ...
        products: products_id, manufacturers_id, ...

    (for full information about the tables see the products screenshot and the orders_products screenshot)

    Now what I want is this: get all orders that ordered products with manufacturers_id = 1, and the product name of the product of this order (with manufacturers_id = 1), grouped by orders. What I did so far is this:

        SELECT op.orders_id, p.products_id, op.products_name, op.products_price, op.products_quantity
        FROM orders_products op , products p
        INNER JOIN products ON op.products_id = p.products_id
        WHERE p.manufacturers_id = 1
        AND p.orders_id > 10000

    The condition p.orders_id > 10000 is only there for testing, to get just a few order_id's. But this query takes a lot of time to execute, if it even works - two times the SQL server got stuck. Where is the mistake?
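    As a hedged guess at what was intended (the duplicate reference to products creates an accidental cross join, and orders_id appears to live on orders_products rather than products), the query may have been meant as:

        SELECT op.orders_id, p.products_id, op.products_name, op.products_price, op.products_quantity
        FROM orders_products op
        INNER JOIN products p ON op.products_id = p.products_id
        WHERE p.manufacturers_id = 1
          AND op.orders_id > 10000   -- test filter; drop once the query is verified
        ORDER BY op.orders_id;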

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware for an old-school looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics.

    In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks, it could write portions of this new information to these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward.

    So, in old games how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - what sprites to copy over into the PPU tables, and where to write them at? Another example would be hitting pause in MM2. BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them?

    If I was lazy, I could just make one huge static bitmap and just grab values that way. But I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had.

    EDIT: The only solution I can think of would be to hold separate tables that state what tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.

    Read the article

  • Is it bad to use "display: table;" to organise a layout into 2 columns?

    - by Colen
    Hello, I am trying to make a 2 column layout, apparently the bane of CSS. I know you shouldn't use tables for layout, but I've settled on this CSS. Note the use of display: table etc.

        div.container {
            width: 600px;
            height: 300px;
            margin: auto;
            display: table;
            table-layout: fixed;
        }

        ul {
            white-space: nowrap;
            overflow: hidden;
            display: table-cell;
            width: 40%;
        }

        div.inner {
            display: table-cell;
            width: auto;
        }

    With this layout:

        <div class="container">
          <ul>
            <li>First</li>
            <li>Second</li>
            <li>Third</li>
          </ul>
          <div class="inner">
            <p>Hello world</p>
          </div>
        </div>

    This seems to work admirably. However, I can't help wondering - am I obeying the letter of the "don't use tables" rule, but not the spirit? I think it's ok, since there's no positioning markup in the HTML code, but I'm just not sure about the "right" way to do it. I can't use CSS float, because I want the columns to expand and contract with the available space. Please, Stack Overflow, help me resolve my existential sense of dread at these pseudo-tables.

    Read the article

  • Using Parameter Values In SQL Statement

    - by Dangerous
    I am trying to write a database script (SQL Server 2008) which will copy information from database tables on one server to corresponding tables in another database on a different server. I have read that the correct way to do this is to use a SQL statement in a format similar to the following:

        INSERT INTO <linked_server>.<database>.<owner>.<table_name>
        SELECT * FROM <linked_server>.<database>.<owner>.<table_name>

    As there will be several tables being copied, I would like to declare variables at the top of the script to allow the user to specify the names of each server and database that are to be used. These could then be used throughout the script. However, I am not sure how to use the variable values in the actual SQL statements. What I want to achieve is something like the following:

        DECLARE @SERVER_FROM AS NVARCHAR(50) = 'ServerFrom'
        DECLARE @DATABASE_FROM AS NVARCHAR(50) = 'DatabaseTo'
        DECLARE @SERVER_TO AS NVARCHAR(50) = 'ServerTo'
        DECLARE @DATABASE_TO AS NVARCHAR(50) = 'DatabaseTo'

        INSERT INTO @SERVER_TO.@DATABASE_TO.dbo.TableName
        SELECT * FROM @SERVER_FROM.@DATABASE_FROM.dbo.TableName
        ...

    How should I use the @ variables in this code in order for it to work correctly? Additionally, do you think my method above is correct for what I am trying to achieve, and should I be using NVARCHAR(50) as my variable type or something else? Thanks
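    Server, database and table names cannot be supplied as T-SQL variables directly; the usual workaround is to build the statement as a string and execute it dynamically. A sketch using the variable names from the question (QUOTENAME only brackets the names - it is not full validation of user input):

        DECLARE @sql NVARCHAR(MAX);

        SET @sql = N'INSERT INTO ' + QUOTENAME(@SERVER_TO) + N'.' + QUOTENAME(@DATABASE_TO) + N'.dbo.TableName '
                 + N'SELECT * FROM ' + QUOTENAME(@SERVER_FROM) + N'.' + QUOTENAME(@DATABASE_FROM) + N'.dbo.TableName;';

        EXEC sp_executesql @sql;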

    Read the article

  • Problems with display of UTF-8 encoded content from a DB

    - by LookUp Webmaster
    Dear members of the Stack Overflow community,

    We are developing a web application using the Zend Framework, and we are facing some encoding issues that we hope you might help us solve. The situation goes something like this: there are certain tables in a MySQL database that need to be displayed as HTML. Because the site is designed using the Spanish language, the database contains some characters like "á" or "ñ". Our internal policy is to set all the encodings as UTF-8, including all the databases and the tables. The problem is that when we retrieve the content from the DB, some characters are displayed as question marks. We are out of ideas. These are all the things that we have already tried and double-checked:

        1. The SQL file from which we load all the data is properly UTF-8 encoded.
        2. The SQL is loaded through phpMyAdmin (which is configured as UTF-8), and the resulting tables are displayed properly.
        3. The NetBeans environment used for coding is also set as UTF-8.

    The weird thing is that all the content that is hard-coded either as PHP or HTML is displayed properly. Only the values that are extracted from the database have issues. Any ideas? Thank you very much.
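    One thing worth ruling out, offered as a guess because the connection setup isn't shown: the MySQL client connection itself may still default to latin1, in which case UTF-8 rows arrive mangled even though the files, tables and editors are all UTF-8. Issuing the standard charset statement right after connecting (or setting the adapter's charset option to utf8, if the Zend_Db adapter in use supports one) usually clears up the question marks:

        -- run once per connection, before any SELECT used to render the pages
        SET NAMES 'utf8';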

    Read the article

  • generation of random numbers in java

    - by S.PRATHIBA
    Hi all, I want to create 30 tables which consist of the following fields. For example:

        Service_ID  Service_Type  consumer_feedback
        75          Computing     1
        35          Printer       0
        33          Printer       -1
        3 rows in set (0.00 sec)

        mysql> select * from consumer2;
        Service_ID  Service_Type  consumer_feedback
        42          data          0
        75          computing     0

        mysql> select * from consumer3;
        Service_ID  Service_Type  consumer_feedback
        43          data          -1
        41          data          1
        72          computing     -1

    As you can infer from the above tables, I am getting the feedback values. I have generated these consumer_feedback values, Service_ID and Service_Type using the concept of random numbers. I have used the following:

        int min1 = 31; // printer
        int max1 = 35; // these values are generated if the Service_Type is printer
        int provider1 = (int) (Math.random() * (max1 - min1 + 1)) + min1;

        int min2 = 41; // data
        int max2 = 45;
        int provider2 = (int) (Math.random() * (max2 - min2 + 1)) + min2;

        int min3 = 71; // computing
        int max3 = 75;
        int provider3 = (int) (Math.random() * (max3 - min3 + 1)) + min3;

        int min5 = -1; // feedback values
        int max5 = 1;
        int feedback = (int) (Math.random() * (max5 - min5 + 1)) + min5;

    I need the Service_Types to be distributed uniformly in all the 30 tables. Similarly, I need the feedback value of 1 to be generated many more times than 0 and -1. Please help me.

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project that extends an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is not an exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in another way).

    What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff and necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there).

    I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how to even start, as our experience with unit-testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).
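    For orientation only: the crudest PL/SQL unit test is an anonymous block that exercises the code and raises when the result differs from the expectation - frameworks such as utPLSQL essentially wrap this pattern. The table name and expected count below are invented for illustration:

        DECLARE
          v_expected NUMBER := 3;   -- hypothetical expected row count
          v_actual   NUMBER;
        BEGIN
          -- call or query the code under test; this table name is made up
          SELECT COUNT(*) INTO v_actual FROM order_summary WHERE customer_id = 42;

          IF v_actual <> v_expected THEN
            RAISE_APPLICATION_ERROR(-20001,
              'order_summary test failed: expected ' || v_expected || ', got ' || v_actual);
          END IF;
        END;
        /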

    Read the article

  • Calculate the retrieved rows in database Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and would like to know how to do calculations on data retrieved from a database. Using the above GUI, when "Calculate" is clicked, the program will display the number of students in textBox1, and the average GPA of all students in textBox2. Here is my database table "Students". I was able to display the number of students, but I'm still confused about how to calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();
            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");
            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();

            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }
            connect.Close();
        }
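    Since the data is already in the database, one option is to let the database do the arithmetic instead of looping over the DataSet; a sketch, assuming the GPA column in Students is actually named GPA (the real column name isn't visible here), which returns both numbers in a single row:

        SELECT COUNT(*) AS StudentCount, AVG(GPA) AS AverageGpa
        FROM Students;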

    Read the article

  • How to set up relationships in a contact database

    - by Walt
    I am currently embarking on a new venture to learn PHP and MySQL. I have done some simple databases in the past using Access, but this one is to be a web-centric database for tracking a myriad of data, including contacts and project information. I will need to link the various tables in various relationships, and I am not sure of the best way to do that. Since I am just starting out with PHP/MySQL, I am researching online sources to learn as much as possible. If anyone has recommendations on books or websites, I would appreciate it.

    In setting up my tables, one major area that I am concerned with is contacts. I will have a variety of contacts that include employees, clients, vendors, subcontractors, etc., and a single contact can be multiple types, with each type having various additional fields that pertain to it. My thought was to have one contacts table that links to other tables for the various contact types. I'm not sure which field types or table setup options are best... Thoughts? This scenario will likely play out in other areas of the database as well, for projects and products. Any pointers/direction would be appreciated. WES
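    A rough sketch of the one-contacts-table idea described above (MySQL syntax; all names are placeholders I made up): a base contact row plus a detail table per contact type for the type-specific fields. Membership in a type is simply having a row in the matching detail table, so one contact can hold several types at once.

        CREATE TABLE contact (
            contact_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL,
            email      VARCHAR(100)
        ) ENGINE=InnoDB;

        CREATE TABLE employee_detail (
            contact_id INT UNSIGNED NOT NULL PRIMARY KEY,
            hire_date  DATE,
            FOREIGN KEY (contact_id) REFERENCES contact (contact_id)
        ) ENGINE=InnoDB;

        CREATE TABLE client_detail (
            contact_id   INT UNSIGNED NOT NULL PRIMARY KEY,
            billing_code VARCHAR(20),
            FOREIGN KEY (contact_id) REFERENCES contact (contact_id)
        ) ENGINE=InnoDB;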

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures.

    However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1). i.e. the body of the trigger code looks something like:

        IF @@NESTLEVEL = 1 -- implies the call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table
        END

    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or in any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
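    One pattern sometimes used on SQL Server 2008 for exactly this - offered as a sketch, not a guarantee of complete reliability, since it depends on every procedure setting and clearing the marker and connection reuse can leave stale values: have the stored procedures mark the session with CONTEXT_INFO and have the trigger audit only when the marker is absent.

        -- at the top of each stored procedure
        SET CONTEXT_INFO 0x01;

        -- ... procedure body, including its OUTPUT-clause auditing ...

        -- just before returning, clear the marker
        SET CONTEXT_INFO 0x00;

        -- inside the trigger
        IF CONTEXT_INFO() IS NULL OR SUBSTRING(CONTEXT_INFO(), 1, 1) <> 0x01
        BEGIN
            -- DML did not arrive via a marked stored procedure, so audit it here
            -- (insert into the shadow audit table from the inserted/deleted pseudo-tables)
            PRINT 'direct DML detected';  -- placeholder for the actual audit insert
        END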

    Read the article

  • Validating Login / Changing User settings / Php Mysql

    - by Marcelo
    Hi everyone, my questions are about login and about changing already saved data.

    (Q1) Until now I have only saved input into the tables of the database (registration steps); now I need to check whether the input (login steps) matches what is in my tables. In fact I have 3 types of users, so I'll have to check 3 tables. If the input data matches one of those 3 tables, I will redirect the user to his specific area. I'm thinking about taking the submitted data $login=$_REQUEST['login']; and $password=$_REQUEST['password']; and comparing it with the login column in the database. Then, if the login matches, I'll compare the submitted password with the one in that row, not in the column. But I don't know how to do this search and comparison, nor what to use. If both match I'll redirect the user, else I'll send a login error message (this I know how to do).

    (Q2) What if I need to change an already saved user? For example, to change an email address. My page for changing a user's data is exactly the same as the registration page. Can I load the already saved options and values from registration (the user table, for example)? Then the user would change whatever he thinks is necessary, and when he submits the new information it would not create a new row in my table, but just overwrite the old information. How can I do this?

    Sorry for any mistakes in English, and thanks for the attention.
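    At the SQL level the two questions usually boil down to one SELECT and one UPDATE; a sketch with invented table and column names (the ? placeholders stand for values bound from PHP, ideally through prepared statements, and passwords are best stored and compared as hashes rather than plain text):

        -- Q1: look up the row for the submitted login, then compare the stored (hashed) password in PHP
        SELECT user_id, password_hash
        FROM customer_users
        WHERE login = ?;

        -- Q2: overwrite the existing row instead of inserting a new one
        UPDATE customer_users
        SET email = ?
        WHERE user_id = ?;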

    Read the article

  • Weird MySQL behavior, seems like a SQL bug

    - by Daniel Magliola
    I'm getting a very strange behavior in MySQL, which looks like some kind of weird bug. I know it's common to blame the tried and tested tool for one's own mistakes, but I've been going around this for a while.

    I have 2 tables, I, with 2797 records, and C, with 1429. C references I. I want to delete all records in I that are not used by C, so I'm doing:

        select * from i where id not in (select id_i from c);

    That returns 0 records, which, given the record counts in each table, is physically impossible. I'm also pretty sure that the query is right, since it's the same type of query I've been using for the last 2 hours to clean up other tables with orphaned records. To make things even weirder...

        select * from i where id in (select id_i from c);

    DOES work, and brings me the 1297 records that I do NOT want to delete. So, IN works, but NOT IN doesn't. Even worse:

        select * from i where id not in ( select i.id from i inner join c ON i.id = c.id_i );

    That DOES work, although it should be equivalent to the first query (I'm just trying mad stuff at this point). Alas, I can't use this query to delete, because I'm using the same table I'm deleting from in the subquery.

    I'm assuming something in my database is corrupt at this point. In case it matters, these are all MyISAM tables without any foreign keys whatsoever, and I've run the same queries on my dev machine and on the production server with the same result, so whatever corruption there might be survived a mysqldump / source cycle, which sounds awfully strange.

    Any ideas on what could be going wrong, or, even more importantly, how I can fix/work around this? Thanks! Daniel
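    One likely culprit, though it can't be confirmed from the question alone: if c.id_i contains even one NULL, "id NOT IN (subquery)" evaluates to unknown for every row and returns nothing, while IN still behaves as expected. Two standard workarounds:

        -- filter the NULLs out of the subquery
        SELECT * FROM i WHERE id NOT IN (SELECT id_i FROM c WHERE id_i IS NOT NULL);

        -- or use NOT EXISTS, which is not affected by NULLs
        SELECT * FROM i WHERE NOT EXISTS (SELECT 1 FROM c WHERE c.id_i = i.id);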

    Read the article

  • C# DynamicPDF Merging causing "Index out of bounds" error

    - by Dining Philanderer
    Greetings. We use DynamicPDF to merge multiple PDF documents stored in an MSSQL database. The vast majority of the time it works wonderfully, but occasionally one of these documents will fail to merge, generating the exception message "Index was outside the bounds of the array." I think I have isolated the problem to PDF files that are larger than 8.5 x 11.0. Does anyone know if this is a known issue with DynamicPDF? The merging code is posted here. What would be ideal is if there is a way to resize the PDF files to the correct size so this is not a concern at all...

        for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++)
        {
            byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"];
            int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]);
            MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize);
            ceTe.DynamicPDF.Merger.PdfDocument pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms);
            ceTe.DynamicPDF.Merger.MergeDocument mergedoc = new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc);
            docCombinedPDF.Append(mergedoc);
        }

    Thanks....

    Read the article

  • MySQL Partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and on normal tables separately, but we couldn't find any performance improvement with partitioning, even though the queries are pruned. Using MySQL 5.1.47 on RHEL 4.

    Table details:

        UserUsage   - has entries for user mobile number and data usage for each date. Mobile number and Date as PRI KEY.
        UserProfile - queries the previous table and stores a summary for each mobile number. Mobile number as PRI KEY.

        CREATE TABLE `UserUsage` (
          `Msisdn` decimal(20,0) NOT NULL,
          `Date` date NOT NULL,
          .
          .
          PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

        CREATE TABLE `UserProfile` (
          `Msisdn` decimal(20,0) NOT NULL,
          .
          .
          PRIMARY KEY (`Msisdn`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

    The second table is updated by a Perl program that selects from the first table ordered by date:

        select * from UserUsage where Msisdn=number order by Date desc limit 7
        [Process data in perl]
        update UserProfile values(....) where Msisdn=number

    EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, given that partitioning is taking almost the same or more time compared to normal tables?

    Read the article

  • MySQL JOIN WHERE issues - need some kind of IF condition

    - by Breezer
    Hi. Well, this will be hard to explain but I'll do my best.

    The thing is, I have 4 tables, all with a specific column to relate them to each other: one table with users (agent_users), one with working hours (agent_pers), one with sold items (agent_stat), and one with projects (agent_pro). The user and project tables are irrelevant to the issue at hand, but to give you a better understanding of why certain tables are included in my query I decided to still mention them =)

    The thing is that I use 2 pages to insert data into the working-hours table and the items-sold-during-that-time table, then I have a third page to summarize everything for the current month. The query for that is the following:

        SELECT *, SUM(sv_p_kom), SUM(sv_p_gick), SUM(sv_p_lunch)
        FROM (( agent_users
            LEFT JOIN agent_pers ON agent_users.sv_aid = agent_pers.sv_p_uid)
            LEFT JOIN agent_stat ON agent_pers.sv_p_uid = agent_stat.sv_s_uid)
            LEFT JOIN agent_pro ON agent_pers.sv_p_pid = agent_pro.p_id
        WHERE MONTH(agent_pers.sv_p_datum) = 7
        GROUP BY sv_aname

    So the problem now is that I don't want sold items from previous months to be included in the data received. I know I could solve that by simply adding MONTH(agent_stat.sv_s_datum) = 7 to the WHERE part, but then if no items have been sold that month, no data at all will show up - not the time or anything. Any aid on how I could solve this is greatly appreciated. If there's something that's not so clear, don't hesitate to ask and I'll try my best to answer. After all, my English isn't the best out there :P Regards, breezer
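    One common fix, sketched against the column names above so treat it as a guess: move the month test for the sold items into the LEFT JOIN's ON clause instead of the WHERE clause, so agents and their hours still show up even when nothing was sold that month:

        SELECT *, SUM(sv_p_kom), SUM(sv_p_gick), SUM(sv_p_lunch)
        FROM (( agent_users
            LEFT JOIN agent_pers ON agent_users.sv_aid = agent_pers.sv_p_uid)
            LEFT JOIN agent_stat ON agent_pers.sv_p_uid = agent_stat.sv_s_uid
                                AND MONTH(agent_stat.sv_s_datum) = 7)
            LEFT JOIN agent_pro ON agent_pers.sv_p_pid = agent_pro.p_id
        WHERE MONTH(agent_pers.sv_p_datum) = 7
        GROUP BY sv_aname;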

    Read the article

  • problem with phpMyAdmin advanced features

    - by typoknig
    Hi all, I am having trouble putting the final touches on my MySQL/Apache/phpMyAdmin install on a Windows XP system. I am trying to get rid of all the error messages in phpMyAdmin, and I have gotten rid of all of them except the ones related to "advanced features." The exact error message I have is:

        The additional features for working with linked tables have been deactivated. To find out why click here.

    I have read up on the cause of the errors, but I must be missing something because I still cannot get the warning to go away. Here is what I have done:

        1. Created a linked-tables infrastructure (default name "phpmyadmin") per the phpMyAdmin instructions and enabled "pmadb" in my "config.inc.php" file.
        2. Specified (enabled) the table names in my "config.inc.php" file (there are 9 tables total).
        3. Created a "controluser" and granted only Select privileges per the phpMyAdmin instructions.
        4. Adjusted "controluser" pma and "controlpass" pmapass in the "config.inc.php" file.

    From what I can see these are all the instructions phpMyAdmin gives on this subject, and I am unable to locate any tutorials on the specifics of "advanced features" in phpMyAdmin. Any help would be appreciated, and be gentle, this is my first go with MySQL/phpMyAdmin.
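    One detail worth double-checking, offered as a guess at the cause: the configuration-storage (pmadb) tables need more than read access - the controluser normally needs INSERT, UPDATE and DELETE on the phpmyadmin database as well, so granting only Select can leave the advanced features deactivated. A sketch, assuming the controluser is 'pma'@'localhost':

        GRANT SELECT, INSERT, UPDATE, DELETE ON `phpmyadmin`.* TO 'pma'@'localhost';
        FLUSH PRIVILEGES;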

    Read the article
