Search Results

Search found 33124 results on 1325 pages for 'table valued functions'.

Page 31/1325 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Passing functions into other functions as parameters, bad practice?

    - by BlueHat
    We've been changing how our AS3 application talks to our back end and are in the process of implementing a REST system to replace our old one. Sadly the developer who started the work is now on long-term sick leave and it's been handed over to me. I've been working with it for the past week or so and I understand the system, but there's one thing that's been worrying me: there seems to be a lot of passing of functions into functions. For example, our class that makes the call to our servers takes in a function that it will then call, passing it an object, once the process is complete and errors have been handled. It gives me that "bad feeling" that this is poor practice, and I can think of some reasons why, but I want some confirmation before I propose a rework of the system. Does anyone have experience with this pattern, and is it actually a problem?

    Read the article

  • PHP user created function return values problem

    - by faX
    Okay, I'm new here and I have been trying to figure this out all day. I have two functions, one calling the other, but my function only returns the last value (for example 29) when it should return multiple values. How can I fix this so that my functions return all the values? Here is my PHP code: function parent_comments(){ if(articles_parent_comments_info($_GET['article_id']) !== false){ foreach(articles_parent_comments_info($_GET['article_id']) as $comment_info){ $comment_id = filternum($comment_info['comment_id']); reply_comment_count($comment_id); } } } function reply_comment_count($parent_id){ if(articles_reply_comments_info($_GET['article_id']) !== false){ foreach(articles_reply_comments_info($_GET['article_id']) as $reply_info){ $comment_id = filternum($reply_info['comment_id']); $reply_id = filternum($reply_info['parent_id']); if($parent_id === $reply_id){ reply_comment_count($comment_id); } } } return $comment_id; }

    Read the article

  • jQuery Reference First Column in HTML Table

    - by Vic
    I have a table where all of the cells are INPUT tags. I have a function which looks for the first input cell and replaces it with its value. So this: <tr id="row_0" class="datarow"> <td><input class="tabcell" value="Injuries"></td> <td><input class="tabcell" value="01"></td> becomes this: <tr id="row_0" class="datarow"> <td>Injuries</td> <td><input class="tabcell" value="01"></td> Here is the first part of the function: function setRowLabels() { var row = []; $('.dataRow').each(function(i) { row.push($('td input:eq(0)', this).val() + ' - '); $('td input:eq(0)', this).replaceWith($('td input:eq(0)', this).val()); $('td input:gt(0)', this).each(function(e) { etcetera But when the page reloads, the first column is not an input type, so it changes the second column to text too! Can I tell it to only change the first column, no matter what the type is? I tried $('td:eq(0)', this).replaceWith($('td:eq(0)', this).val()); but it does not work. Any suggestions appreciated! Thanks

    Read the article

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table (Metadata) is as follows: ResultID (PK, int, not null), MappedFieldname (char(50), not null), Fieldname (PK, char(50), not null), Fieldvalue (text, null). There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each when data is being "processed". This results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%. In the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As for my background, I'm a college student working part-time at this company. My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this and similar ad-hoc DBA tasks?
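
    A possible maintenance sketch, assuming the vendor's table is reachable as dbo.Metadata (the real object name may differ): measure fragmentation with sys.dm_db_index_physical_stats, then REORGANIZE or REBUILD depending on how bad it is.

        -- Check fragmentation of all indexes on the table (object name assumed)
        SELECT i.name AS index_name,
               ps.avg_fragmentation_in_percent,
               ps.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Metadata'), NULL, NULL, 'LIMITED') AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

        -- Lighter-weight defragmentation, safe to schedule regularly
        ALTER INDEX ALL ON dbo.Metadata REORGANIZE;

        -- Full rebuild for heavily fragmented indexes (the usual rule of thumb is > 30%)
        ALTER INDEX ALL ON dbo.Metadata REBUILD;

    Whether the clustered key itself should change is a separate design question; the insert pattern described above (many non-sequential inserts from parallel workers) is what drives the fragmentation.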

    Read the article

  • The [2] table entry '[3]' has no associated entry in the Media table. (error 2602)

    - by derekf
    Coworker started getting the above message in the event log and as a dialog during install. Argument [2] was File and argument [3] was a specific file. The error dialog read: Product: (app name) -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2602. The package was a vendor-provided MSI that had been installed administratively, and then a patch (.msp) applied to the administrative install point. With some digging we found that the MSI still had the entries in the Media table pointing at the CAB files, and that there were several files at the end of the sequence that did not have corresponding entries in the Media table (last sequence 990 in the Media table, last entry in the File table had sequence 994). Attributes on files in the File table all had the msidbFileAttributesCompressed (&16384) attribute set, so they were all expecting to be within the CAB files, but since this was an admin install there were no CAB files. Resolved by clearing the Media table (replacing it with a single entry: Disk ID 1, LastSequence 994) and going through the File table and subtracting 8192 from each entry to mark the files as not compressed. Tested and worked.

    Read the article

  • validate uniqueness amongst multiple subclasses with Single Table Inheritance

    - by irkenInvader
    I have a Card model that has many Sets and a Set model that has many Cards through a Membership model: class Card < ActiveRecord::Base has_many :memberships has_many :sets, :through => :memberships end class Membership < ActiveRecord::Base belongs_to :card belongs_to :set validates_uniqueness_of :card_id, :scope => :set_id end class Set < ActiveRecord::Base has_many :memberships has_many :cards, :through => :memberships validates_presence_of :cards end I also have some sub-classes of the above using Single Table Inheritance: class FooCard < Card end class BarCard < Card end and class Expansion < Set end class GameSet < Set validates_size_of :cards, :is => 10 end All of the above is working as I intend. What I'm trying to figure out is how to validate that a Card can only belong to a single Expansion. I want the following to be invalid: some_cards = FooCard.all( :limit => 25 ) first_expansion = Expansion.new second_expansion = Expansion.new first_expansion.cards = some_cards second_expansion.cards = some_cards first_expansion.save # Valid second_expansion.save # **Should be invalid** However, GameSets should allow this behavior: other_cards = FooCard.all( :limit => 10 ) first_set = GameSet.new second_set = GameSet.new first_set.cards = other_cards # Valid second_set.cards = other_cards # Also valid I'm guessing that a validates_uniqueness_of call is needed somewhere, but I'm not sure where to put it. Any suggestions? UPDATE 1 I modified the Expansion class as suggested: class Expansion < Set validate :validates_uniqueness_of_cards def validates_uniqueness_of_cards membership = Membership.find( :first, :include => :set, :conditions => [ "card_id IN (?) AND sets.type = ?", self.cards.map(&:id), "Expansion" ] ) errors.add_to_base("a Card can only belong to a single Expansion") unless membership.nil? end end This works when creating initial expansions to validate that no current expansions contain the cards. However, this (falsely) invalidates future updates to the expansion with new cards. In other words: old_exp = Expansion.find(1) old_exp.card_ids # returns [1,2,3,4,5] new_exp = Expansion.new new_exp.card_ids = [6,7,8,9,10] new_exp.save # returns true new_exp.card_ids << [11,12] # no other Expansion contains these cards new_exp.valid? # returns false ... SHOULD be true

    Read the article

  • Convert VARCHAR() columns to NVARCHAR()

    - by ChrisD
    We recently underwent an upgrade that required us to change our database columns from VARCHAR to NVARCHAR to support Unicode characters. Digging through the internet, I found a base script which I modified to handle reserved-word table names and maintain the NULL/NOT NULL constraint of the columns. I ran this script: use NWOperationalContent -- your catalog name here GO SELECT 'ALTER TABLE ' + isnull(schema_name(syo.id), 'dbo') + '.[' + syo.name +'] ' + ' ALTER COLUMN [' + syc.name + '] NVARCHAR(' + case syc.length when -1 then 'MAX' ELSE convert(nvarchar(10),syc.length) end + ') '+ case syc.isnullable when 1 then ' NULL' ELSE ' NOT NULL' END +';' FROM sysobjects syo JOIN syscolumns syc ON syc.id = syo.id JOIN systypes syt ON syt.xtype = syc.xtype WHERE syt.name = 'varchar' and syo.xtype='U' which produced a series of ALTER statements which I could then execute against the tables. In some cases I had to drop indexes, alter the tables, and re-create the indexes. There might have been a better way to do that, but manually dropping them got the job done. use NWMerchandisingContent GO ALTER TABLE Locale Drop Constraint PK_Locale ALTER TABLE Country DROP CONSTRAINT PK_Country GO ALTER TABLE dbo.[Campaign] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL; ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [UnitOfmeasure] NVARCHAR(200) NULL; ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL; ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Imperative] NVARCHAR(MAX) NULL; ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Instructions] NVARCHAR(MAX) NULL; ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[BundleComponent] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[Bundle] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[Banner] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[Video] ALTER COLUMN [Link] NVARCHAR(512) NOT NULL; ALTER TABLE dbo.[Video] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [VideoLink] NVARCHAR(512) NOT NULL; ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[Thumbnail] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL; ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL; ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [UnitOfMeasure] NVARCHAR(150) NOT NULL; ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [SwatchColor] NVARCHAR(50) NOT NULL; etc.. GO ALTER TABLE Locale ADD CONSTRAINT PK_Locale PRIMARY KEY (LocaleId) ALTER TABLE Country ADD CONSTRAINT PK_Country PRIMARY KEY (CountryId) Note that this alter is non-destructive to the data. Hope this helps.

    Read the article

  • MySQL syntax: can't create table

    - by peng
    mysql> create table balance_sheet( -> Cash_and_cash_equivalents VARCHAR(20), -> Trading_financial_assets VARCHAR(20), -> Note_receivable VARCHAR(20), -> Account_receivable VARCHAR(20), -> Advance_money VARCHAR(20), -> Interest_receivable VARCHAR(20), -> Dividend_receivable VARCHAR(20), -> Other_notes_receivable VARCHAR(20), -> Due_from_related_parties VARCHAR(20), -> Inventory VARCHAR(20), -> Consumptive_biological_assets VARCHAR(20), -> Non_current_asset(expire_in_a_year) VARCHAR(20), -> Other_current_assets VARCHAR(20), -> Total_current_assets VARCHAR(20), -> Available_for_sale_financial_assets VARCHAR(20), -> Held_to_maturity_investment VARCHAR(20), -> Long_term_account_receivable VARCHAR(20), -> Long_term_equity_investment VARCHAR(20), -> Investment_real_eastate VARCHAR(20), -> Fixed_assets VARCHAR(20), -> Construction_in_progress VARCHAR(20), -> Project_material VARCHAR(20), -> Liquidation_of_fixed_assets VARCHAR(20), -> Capitalized_biological_assets VARCHAR(20), -> Oil_and_gas_assets VARCHAR(20), -> Intangible_assets VARCHAR(20), -> R&d_expense VARCHAR(20), -> Goodwill VARCHAR(20), -> Deferred_assets VARCHAR(20), -> Deferred_income_tax_assets VARCHAR(20), -> Other_non_current_assets VARCHAR(20), -> Total_non_current_assets VARCHAR(20), -> Total_assets VARCHAR(20), -> Short_term_borrowing VARCHAR(20), -> Transaction_financial_liabilities VARCHAR(20), -> Notes_payable VARCHAR(20), -> Account_payable VARCHAR(20), -> Item_received_in_advance VARCHAR(20), -> Employee_pay_payable VARCHAR(20), -> Tax_payable VARCHAR(20), -> Interest_payable VARCHAR(20), -> Dividend_payable VARCHAR(20), -> Other_account_payable VARCHAR(20), -> Due_to_related_parties VARCHAR(20), -> Non_current_liabilities_due_within_one_year VARCHAR(20), -> Other_current_liabilities VARCHAR(20), -> Total_current_liabilities VARCHAR(20), -> Long_term_loan VARCHAR(20), -> Bonds_payable VARCHAR(20), -> Long_term_payable VARCHAR(20), -> Specific_payable VARCHAR(20), -> Estimated_liabilities VARCHAR(20), -> Deferred_income_tax_liabilities VARCHAR(20), -> Other_non_current_liabilities VARCHAR(20), -> Total_non_current_liabilities VARCHAR(20), -> Total_liabilities VARCHAR(20), -> Paid_in_capital VARCHAR(20), -> Contributed_surplus VARCHAR(20), -> Treasury_stock VARCHAR(20), -> Earned_surplus VARCHAR(20), -> Retained_earnings VARCHAR(20), -> Translation_reserve VARCHAR(20), -> Nonrecurring_items VARCHAR(20), -> Total_equity(non) VARCHAR(20), -> Minority_interests VARCHAR(20), -> Total_equity VARCHAR(20), -> Total_liabilities_&_shareholder's_equity VARCHAR(20)); '> '> When I press Enter, all I get is the '> continuation prompt and no other reaction. What's wrong?
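
    The CREATE TABLE fails because several column names contain characters MySQL does not allow in unquoted identifiers: the parentheses in Non_current_asset(expire_in_a_year) and Total_equity(non), the & in R&d_expense, and the apostrophe in Total_liabilities_&_shareholder's_equity (which also leaves the string quote unbalanced, hence the '> continuation prompt). A minimal sketch of the two usual fixes, shown with only a few of the columns and with illustrative renames:

        -- Option 1: rename the awkward columns to plain identifiers (names below are suggestions only)
        CREATE TABLE balance_sheet (
            Cash_and_cash_equivalents VARCHAR(20),
            Non_current_asset_due_in_a_year VARCHAR(20),
            R_and_d_expense VARCHAR(20),
            Total_equity_excluding_minority VARCHAR(20),
            Total_liabilities_and_shareholders_equity VARCHAR(20)
        );

        -- Option 2: keep the original names by quoting them with backticks
        CREATE TABLE balance_sheet (
            `Non_current_asset(expire_in_a_year)` VARCHAR(20),
            `R&d_expense` VARCHAR(20),
            `Total_liabilities_&_shareholder's_equity` VARCHAR(20)
        );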

    Read the article

  • SQL Cartesian product joining table to itself and inserting into existing table

    - by Emma
    I am working in phpMyAdmin using SQL. I want to take the primary key (EntryID) from TableA and create a Cartesian product (if I am using the term correctly) in TableB (an empty table already created) for all entries which share the same value for FieldB in TableA, except for the pairing of a row with itself (where the two EntryID values are equal). So, for example, if the values in TableA were:
    TableA.EntryID  TableA.FieldB
    1               23
    2               23
    3               23
    4               25
    5               25
    6               25
    The result in TableB would be:
    Primary key  EntryID1  EntryID2  FieldD (Default or manually entered)
    1            1         2         Default value
    2            1         3         Default value
    3            2         1         Default value
    4            2         3         Default value
    5            3         1         Default value
    6            3         2         Default value
    7            4         5         Default value
    8            4         6         Default value
    9            5         4         Default value
    10           5         6         Default value
    11           6         4         Default value
    12           6         5         Default value
    I am used to working in Access and this is the first query I have attempted in SQL. I started trying to work out the query and got this far. I know it's not right yet, as I'm still trying to get used to the syntax and pieced this together from various articles I found online. In particular, I wasn't sure where the INSERT INTO text went (to create what would be an Append Query in Access). SELECT EntryID FROM TableA.EntryID TableA.EntryID WHERE TableA.FieldB=TableA.FieldB TableA.EntryID<>TableA.EntryID INSERT INTO TableB.EntryID1 TableB.EntryID2 After I've got that query right, I need to do a TRIGGER query (I think), so that if an entry changes its value in TableA.FieldB (changing its membership of that grouping to another grouping), the Cartesian product will be re-run on THAT entry, unless TableB.FieldD = valueA or valueB (manually entered values). I have been using the Designer tab. Does there have to be a relationship link between TableA and TableB? If so, would it be two links from the EntryID primary key in TableA, one to each EntryID in TableB? I assume this would not work because they are numbered EntryID1 and EntryID2 and the name needs to be the same to set up a relationship? If you can offer any suggestions, I would be very grateful. Research: http://www.fluffycat.com/SQL/Cartesian-Joins/ Cartesian Join example two Q: You said you can have a Cartesian join by joining a table to itself. Show that! Select * From Film_Table T1, Film_Table T2;
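
    A sketch of the append query being described, using the table and column names from the question (FieldD is simply left to its default, and TableB's primary key is assumed to be auto-generated):

        INSERT INTO TableB (EntryID1, EntryID2)
        SELECT a1.EntryID, a2.EntryID
        FROM TableA AS a1
        INNER JOIN TableA AS a2
                ON a1.FieldB = a2.FieldB
               AND a1.EntryID <> a2.EntryID;

    No relationship needs to be defined between TableA and TableB for this INSERT ... SELECT to work; the self-join happens entirely within TableA.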

    Read the article

  • Updating a sql server table with data from another table

    - by David G
    I have two basic SQL Server tables: Customer (ID [pk], AddressLine1, AddressLine2, AddressCity, AddressDistrict, AddressPostalCode) CustomerAddress(ID [pk], CustomerID [fk], Line1, Line2, City, District, PostalCode) CustomerAddress contains multiple addresses for the Customer record. For each Customer record I want to merge the most recent CustomerAddress record where most recent is determined by the highest CustomerAddress ID value. I've currently got the following: UPDATE Customer SET AddressLine1 = CustomerAddress.Line1, AddressPostalCode = CustomerAddress.PostalCode FROM Customer, CustomerAddress WHERE Customer.ID = CustomerAddress.CustomerID which works but how can I ensure that the most recent (highest ID) CustomerAddress record is selected to update the Customer table?
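
    One way to sketch this, keeping the question's schema: restrict the join to the CustomerAddress row with the highest ID per customer via a correlated subquery.

        UPDATE c
           SET c.AddressLine1      = ca.Line1,
               c.AddressLine2      = ca.Line2,
               c.AddressCity       = ca.City,
               c.AddressDistrict   = ca.District,
               c.AddressPostalCode = ca.PostalCode
        FROM Customer AS c
        INNER JOIN CustomerAddress AS ca
                ON ca.CustomerID = c.ID
        WHERE ca.ID = (SELECT MAX(ca2.ID)
                       FROM CustomerAddress AS ca2
                       WHERE ca2.CustomerID = c.ID);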

    Read the article

  • Scalar-Valued Function in NHibernate

    - by Byron Sommardahl
    I have the following scalar function in MS SQL 2005: CREATE FUNCTION [dbo].[Distance] ( @lat1 float, @long1 float,@lat2 float, @long2 float ) RETURNS float AS BEGIN RETURN (3958*3.1415926*sqrt((@lat2-@lat1)*(@lat2-@lat1) + cos(@lat2/57.29578)*cos(@lat1/57.29578)*(@long2-@long1)*(@long2-@long1))/180); END I need to be able to call this function from my NHibernate queries. I read over this article, but I got bogged down in some details that I didn't understand right away. If you've used scalar functions with NHibernate, could you possibly give me an example of how your HBM file would look for a function like this?
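
    For reference, whatever the mapping file ends up looking like, the SQL that NHibernate ultimately has to generate is just an ordinary scalar UDF call; the coordinates below are made-up sample values.

        SELECT dbo.Distance(36.12, -86.67, 33.94, -118.40) AS DistanceInMiles;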

    Read the article

  • What advantages do we have when creating a separate mapping table for two relational tables

    - by Pankaj Upadhyay
    In various open-source CMSes, I have noticed that there is a separate table for mapping two related tables. For example, for categories and products there is a separate product_category_mapping table. This table just has a primary key and two foreign keys, one to the categories table and one to the products table. My question is: what are the benefits of this database design over linking the tables directly by defining a foreign key in either table? Is it just a matter of convenience?
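
    A sketch of the junction-table layout being described, assuming simple products(id) and categories(id) tables. The main benefit is that it models a many-to-many relationship: a product can appear in many categories and a category can hold many products, which a single foreign key in either table cannot express. The composite unique key also prevents duplicate pairings.

        CREATE TABLE product_category_mapping (
            id          INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            product_id  INT NOT NULL,
            category_id INT NOT NULL,
            UNIQUE KEY uq_product_category (product_id, category_id),
            FOREIGN KEY (product_id)  REFERENCES products (id),
            FOREIGN KEY (category_id) REFERENCES categories (id)
        );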

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table: CREATE TABLE [dbo].[lq_ActivityLog]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [PlacementID] [int] NOT NULL, [CreativeID] [int] NOT NULL, [PublisherID] [int] NOT NULL, [CountryCode] [nvarchar](10) NOT NULL, [RequestedZoneID] [int] NOT NULL, [AboveFold] [int] NOT NULL, [Period] [datetime] NOT NULL, [Clicks] [int] NOT NULL, [Impressions] [int] NOT NULL, CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED ( [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC, [RequestedZoneID] ASC, [AboveFold] ASC, [CountryCode] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY] And now some assumptions and additional information: The table has 200,000,000 rows currently. PlacementID ranges from 1 to 5000 and should support at least 50,000. CreativeID ranges from 1 to 5000 and should support at least 50,000. PublisherID ranges from 1 to 500 and should support at least 50,000. CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already; there are < 300 rows. RequestedZoneID ranges from 1 to 100 and should support at least 50,000. AboveFold has values of -1, 0, or 1 only. Period is a date (no time). Clicks range from 0 to 5000. Impressions range from 0 to 5000000. The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size: Design Goals This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large and also there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations. Refactor There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
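
    Not the author's promised follow-up, just one plausible direction as a sketch: replace the wide CountryCode column with a narrow key into the existing country table, shrink AboveFold, and store Period as a date (SQL Server 2008 or later), which slims both the row and the seven-column clustered key. Column types below are assumptions based on the ranges quoted above, and dropping the unused [ID] identity column is an assumption about the rest of the system.

        CREATE TABLE [dbo].[lq_ActivityLog_refactored](
            [Period]          [date]     NOT NULL,   -- only a date is ever stored
            [PlacementID]     [int]      NOT NULL,
            [CreativeID]      [int]      NOT NULL,
            [PublisherID]     [int]      NOT NULL,
            [CountryID]       [smallint] NOT NULL,   -- key into the existing country table (< 300 rows)
            [RequestedZoneID] [int]      NOT NULL,
            [AboveFold]       [smallint] NOT NULL,   -- only -1, 0, 1
            [Clicks]          [int]      NOT NULL,
            [Impressions]     [int]      NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog_refactored] PRIMARY KEY CLUSTERED
                ([Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
                 [RequestedZoneID] ASC, [AboveFold] ASC, [CountryID] ASC)
        ) ON [PRIMARY];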

    Read the article

  • How to access a row from af:table out of context

    - by Vijay Mohan
    Scenario: let's say you have an ADF table in a .jsff page fragment and it is included as an af:region inside another page (the parent page). Your requirement is to access some specific rows from the table and do some operations on them. Since you are accessing the table outside the context in which it is present, you first have to set up that context, and then you can use the visitCallback mechanism to perform the operations on the table. Here is the sample code: ================= final RichTable table = this.getRichTable(); FacesContext facesContext = FacesContext.getCurrentInstance(); VisitContext visitContext = RequestContext.getCurrentInstance().createVisitContext(facesContext, null, EnumSet.of(VisitHint.SKIP_TRANSIENT, VisitHint.SKIP_UNRENDERED), null); //Anonymous call UIXComponent.visitTree(visitContext, facesContext.getViewRoot(), new VisitCallback(){ public VisitResult visit(VisitContext context, UIComponent target) { if (table != target) { return VisitResult.ACCEPT; } else if(table == target) { //Here goes the actual logic Iterator selection = table.getSelectedRowKeys().iterator(); while (selection.hasNext()) { Object key = selection.next(); //store the original key Object origKey = table.getRowKey(); try { table.setRowKey(key); Object o = table.getRowData(); JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding)o; Row row = rowData.getRow(); System.out.println(row.getAttribute(0)); } catch(Exception ex){ ex.printStackTrace(); } finally { //restore original key table.setRowKey(origKey); } } } return VisitResult.COMPLETE; } });

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which was part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid the issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:
    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • System Variables, Stored Procedures or Functions for Meta Data

    - by BuckWoody
    Whenever you want to know something about SQL Server's configuration, whether that's the Instance itself or a database, you have a few options. If you want to know "dynamic" data, such as how much memory or CPU is consumed or what a particular query is doing, you should be using the Dynamic Management Views (DMVs) that you can read about here: http://msdn.microsoft.com/en-us/library/ms188754.aspx  But if you're looking for how much memory is installed on the server, the version of the Instance, the drive letters of the backups and so on, you have other choices. The first of these are system variables. You access these with a SELECT statement, and they are useful when you need a discrete value for use, say in another query or to put into a table. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms173823.aspx You also have a few stored procedures you can use. These often bring back a lot more data, pre-formatted for the screen. You access these with the EXECUTE syntax. It is a bit more difficult to take the data they return and get a single value or place the results in another table, but it is possible. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms187961.aspx Yet another option is to use a system function, which you access with a SELECT statement, which also brings back a discrete value that you can use in a test or to place in another table. You can read about those here: http://msdn.microsoft.com/en-us/library/ms187812.aspx  By the way, many of these constructs simply query from tables in the master or msdb databases for the Instance or the system tables in a user database. You can get much of the information there as well, and there are even system views in each database to show you the meta-data dealing with structure – more on that here: http://msdn.microsoft.com/en-us/library/ms186778.aspx  Some of these choices are the only way to get at a certain piece of data. But others overlap – you can use one or the other, they both come back with the same data. So, like many Microsoft products, you have multiple ways to do the same thing. And that's OK – just research what each is used for and how it's intended to be used, and you'll be able to select (pun intended) the right choice.
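
    A few concrete examples of the options described above; the specific properties and procedures chosen here are just illustrations, all of them documented SQL Server metadata calls.

        -- System variables: discrete values, easy to use inside another query
        SELECT @@VERSION AS InstanceVersion, @@SERVERNAME AS ServerName;

        -- System stored procedures: richer, pre-formatted result sets
        EXECUTE sp_helpdb;

        -- System functions: discrete values you can test or store
        SELECT SERVERPROPERTY('ProductLevel') AS ServicePackLevel,
               SERVERPROPERTY('Collation')    AS InstanceCollation;

        -- Catalog/system views: the underlying metadata
        SELECT name, create_date FROM sys.databases;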

    Read the article

  • Function like C# properties?

    - by alan2here
    I was directed here from SO as a better Stack Exchange site for this question. I've been thinking about the neatness and expressiveness of C# properties over functions, although they currently only work where no parameters are used, and wondered: is it possible, and if not why not, to have a stand-alone, function-like C# property? For example: public class test { private byte n = 4; public test() { func = 2; byte n2 = func; func; } private byte func { get { return n; } set { n = value; } func { n++; } } } edit: Sorry for the vagueness first time round. I'm going to add some info and motivation. The 'n++' here is just a simple example, a placeholder; it's not intended to be representative of the actual code that would be used. I'm also looking at the property construct as it is, not in the context of using it for 'get_xyz' and 'set_xyz' member functions (which is certainly useful), but instead comparing it more abstractly to functions and other programmatic elements. A 'get' property can be used instead of a function that takes no parameters, and syntactically the difference is perhaps only aesthetic, but as I see it noticeably nicer. However, properties also add the potential for an extra layer of polymorphism, one that relates to the context in which they are used: setting ('func = 4;'), getting ('int n = func;'), or a function-like call ('func;'), as well as the more common parameter-based polymorphism. This could potentially allow for a lot of expression and contextual information regarding how others would use your functions. As in many places uses and definitions would remain the same, it shouldn't break existing code. private byte func { get { } get bool { } set { } func { } func(bool) { } func(byte, myType) { } // etc... } So a read-only function would look like this: private byte func { get { } } A normal function like this: private void func { func { } } A function with parameter polymorphism like this: private byte func { func(bool) { } func(byte, myType) { } } And a function that could return a value, or just compute, depending on the context in which it is used, and that also has more conventional parameter polymorphism, like so: private byte func { get { } func(bool) { } func(byte, myType) { } }

    Read the article

  • ProgrammingError: (1146, "Table 'test_<DB>.<TABLE>' doesn't exist") when running unit test for Djang

    - by abigblackman
    I'm running a unit test using the Django framework and get this error. Running the actual code does not have this problem; running the unit tests creates a test database on the fly, so I suspect the issue lies there. The code that throws the error looks like this: member = Member.objects.get(email=email_address) and the model looks like: class Member(models.Model): member_id = models.IntegerField(primary_key=True) created_on = models.DateTimeField(editable=False, default=datetime.datetime.utcnow()) flags = models.IntegerField(default=0) email = models.CharField(max_length=150, blank=True) phone = models.CharField(max_length=150, blank=True) country_iso = models.CharField(max_length=6, blank=True) location_id = models.IntegerField(null=True, blank=True) facebook_uid = models.IntegerField(null=True, blank=True) utc_offset = models.IntegerField(null=True, blank=True) tokens = models.CharField(max_length=3000, blank=True) class Meta: db_table = u'member' There's nothing too odd there that I can see. The user running the tests has the same permissions on the database server as the user that runs the website. Where else can I look to see what's going wrong? Why is this table not being created?

    Read the article

  • How to rotate table-headline in Latex table

    - by pagid
    Hi, is there a way to rotate the "Demo1", "Demo2" and "Demo3" headlines 90° in the following LaTeX table? \documentclass[a4paper,twoside,10pt]{report} \begin{document} \begin{tabular}{|l|l|l|l|} \hline & Demo1 & Demo2 & Demo3 \\ \hline Person 1 & x & & \\ \hline Person 2 & x & & x \\ \hline Person 3 & x & x & \\ \hline Person 4 & & x & x \\ \hline \end{tabular} \end{document} Thanks

    Read the article

  • Calling Web Service Functions Asynchronously from a Web Page

    - by SGWellens
    Over on the Asp.Net forums where I moderate, a user had a problem calling a Web Service from a web page asynchronously. I tried his code on my machine and was able to reproduce the problem. I was able to solve his problem, but only after taking the long scenic route through some of the more perplexing nuances of Web Services and Proxies. Here is the fascinating story of that journey. Start with a simple Web Service     public class Service1 : System.Web.Services.WebService    {        [WebMethod]        public string HelloWorld()        {            // sleep 10 seconds            System.Threading.Thread.Sleep(10 * 1000);            return "Hello World";        }    } The 10 second delay is added to make calling an asynchronous function more apparent. If you don't call the function asynchronously, it takes about 10 seconds for the page to be rendered back to the client. If the call is made from a Windows Forms application, the application freezes for about 10 seconds. Add the web service to a web site. Right-click the project and select "Add Web Reference…" Next, create a web page to call the Web Service. Note: An asp.net web page that calls an 'Async' method must have the Async property set to true in the page's header: <%@ Page Language="C#"          AutoEventWireup="true"          CodeFile="Default.aspx.cs"          Inherits="_Default"           Async='true'  %> Here is the code to create the Web Service proxy and connect the event handler. Shrewdly, we make the proxy object a member of the Page class so it remains instantiated between the various events. public partial class _Default : System.Web.UI.Page {    localhost.Service1 MyService;  // web service proxy     // ---- Page_Load ---------------------------------     protected void Page_Load(object sender, EventArgs e)    {        MyService = new localhost.Service1();        MyService.HelloWorldCompleted += EventHandler;          } Here is the code to invoke the web service and handle the event:     // ---- Async and EventHandler (delayed render) --------------------------     protected void ButtonHelloWorldAsync_Click(object sender, EventArgs e)    {        // blocks        ODS("Pre HelloWorldAsync...");        MyService.HelloWorldAsync();        ODS("Post HelloWorldAsync");    }    public void EventHandler(object sender, localhost.HelloWorldCompletedEventArgs e)    {        ODS("EventHandler");        ODS("    " + e.Result);    }     // ---- ODS ------------------------------------------------    //    // Helper function: Output Debug String     public static void ODS(string Msg)    {        String Out = String.Format("{0}  {1}", DateTime.Now.ToString("hh:mm:ss.ff"), Msg);        System.Diagnostics.Debug.WriteLine(Out);    } I added a utility function I use a lot: ODS (Output Debug String). Rather than include the library it is part of, I included it in the source file to keep this example simple. Fire up the project, open up a debug output window, press the button and we get this in the debug output window: 11:29:37.94 Pre HelloWorldAsync... 11:29:37.94 Post HelloWorldAsync 11:29:48.94 EventHandler 11:29:48.94 Hello World   Sweet. The asynchronous call was made and returned immediately. About 10 seconds later, the event handler fires and we get the result. Perfect….right? Not so fast cowboy. Watch the browser during the call: What the heck? The page is waiting for 10 seconds. Even though the asynchronous call returned immediately, Asp.Net is waiting for the event to fire before it renders the page. This is NOT what we wanted. 
    I experimented with several techniques to work around this issue. Some may erroneously describe my behavior as 'hacking' but, since no ingesting of Twinkies was involved, I do not believe hacking is the appropriate term. If you examine the proxy that was automatically created, you will find a synchronous call to HelloWorld along with an additional set of methods to make asynchronous calls. I tried the other asynchronous method supplied in the proxy: // ---- Begin and CallBack ---------------------------------- protected void ButtonBeginHelloWorld_Click(object sender, EventArgs e) { ODS("Pre BeginHelloWorld..."); MyService.BeginHelloWorld(AsyncCallback, null); ODS("Post BeginHelloWorld"); } public void AsyncCallback(IAsyncResult ar) { String Result = MyService.EndHelloWorld(ar); ODS("AsyncCallback"); ODS("    " + Result); } The BeginHelloWorld function in the proxy requires a callback function as a parameter. I tested it and the debug output window looked like this: 04:40:58.57 Pre BeginHelloWorld... 04:40:58.57 Post BeginHelloWorld 04:41:08.58 AsyncCallback 04:41:08.58 Hello World It works the same as before except for one critical difference: the page rendered immediately after the function call. I was worried the page object would be disposed after rendering the page but the system was smart enough to keep the page object in memory to handle the callback. Both techniques have a use: Delayed Render: Say you want to verify a credit card, look up shipping costs and confirm if an item is in stock. You could have three web service calls running in parallel and not render the page until all were finished. Nice. You can send information back to the client as part of the rendered page when all the services are finished. Immediate Render: Say you just want to start a service running and return to the client. You can do that too. However, the page gets sent to the client before the service has finished running, so you will not be able to update parts of the page when the service finishes running. Summary: YourFunctionAsync() and an EventHandler will not render the page until the handler fires. BeginYourFunction() and a CallBack function will render the page as soon as possible. I found all this to be quite interesting and did a lot of searching and researching for documentation on this subject… but there isn't a lot out there. The biggest clues are the parameters that can be sent to the WSDL.exe program: http://msdn.microsoft.com/en-us/library/7h3ystb6(VS.100).aspx Two parameters are oldAsync and newAsync. OldAsync will create the Begin/End functions; newAsync will create the Async/Event functions. Caveat: I haven't tried this but it was stated in this article. I'll leave confirming this as an exercise for the student. Included Code: I'm including the complete test project I created to verify the findings. The project was created with VS 2008 SP1. There is a solution file with 3 projects: Web Service, Asp.Net Application, Windows Forms Application. To decide which program runs, you right-click a project and select "Set as Startup Project". I created and played with the Windows Forms application to see if it would reveal any secrets. I found that in the Windows Forms application, the generated proxy did NOT include the Begin/Callback functions. Those functions are only generated for Asp.Net pages. Probably for the reasons discussed earlier. Maybe those Microsoft boys and girls know what they are doing.
I hope someone finds this useful. Steve Wellens

    Read the article
