Search Results

Search found 26438 results on 1058 pages for 'calendar table'.

Page 37/1058 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Sharepoint 2010 Calendar Event Receiver

    - by user521682
    I have been tasked with creating a calendar in a root site that gets updated from child site calendar events. I am able to access the parent site's Calendar list from a child site in order to add an event. However, I'm having trouble finding a way to create a unique identifier for the event so that when the child calendar's event is updated or deleted, the parent site calendar gets updated as well. The ListItem ID field appears to be unique only within a site. I did find a UniqueItemId field, but apparently it's read-only on the SPListItem object. Can someone please give me some guidance here? Many thanks!

    Read the article

  • jQuery Reference First Column in HTML Table

    - by Vic
    I have a table where all of the cells are INPUT tags. I have a function which looks for the first input cell and replaces it with its value. So this:

    <tr id="row_0" class="datarow"> <td><input class="tabcell" value="Injuries"></td> <td><input class="tabcell" value="01"></td>

    becomes this:

    <tr id="row_0" class="datarow"> <td>Injuries</td> <td><input class="tabcell" value="01"></td>

    Here is the first part of the function:

    function setRowLabels() {
        var row = [];
        $('.dataRow').each(function(i) {
            row.push($('td input:eq(0)', this).val() + ' - ');
            $('td input:eq(0)', this).replaceWith($('td input:eq(0)', this).val());
            $('td input:gt(0)', this).each(function(e) {
            etcetera

    But when the page reloads, the first column is not an input type, so it changes the second column to text too! Can I tell it to only change the first column, no matter what the type is? I tried $('td:eq(0)', this).replaceWith($('td:eq(0)', this).val()); but it does not work. Any suggestions appreciated! Thanks

    Read the article

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows:

    Metadata
    ResultID (PK, int, not null)
    MappedFieldname (char(50), not null)
    Fieldname (PK, char(50), not null)
    Fieldvalue (text, null)

    There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each while data is being "processed", which results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%; in the case of the largest table, it is at 90%. We do not have a DBA, and I am aware we desperately need a DB maintenance strategy. As for my background, I'm a college student working part time at this company. My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this type of task and similar ad hoc DBA tasks?
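    For the fragmentation side specifically, here is a minimal sketch of a periodic maintenance step (the database name is a placeholder; the table name is taken from the schema above):

    -- Check fragmentation of the indexes on the Metadata table (database name is a placeholder)
    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(DB_ID('VendorDb'), OBJECT_ID('dbo.Metadata'), NULL, NULL, 'LIMITED');

    -- Rebuild during a maintenance window; REORGANIZE is the lighter-weight alternative for moderate fragmentation
    ALTER INDEX ALL ON dbo.Metadata REBUILD;

    This does not by itself answer whether the clustered index choice is right, but it is the usual first step against 50-90% fragmentation.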

    Read the article

  • The [2] table entry '[3]' has no associated entry in the Media table. (error 2602)

    - by derekf
    A coworker started getting the above message in the event log and as a dialog during installation. Argument [2] was File and argument [3] was a specific file. The error dialog read:

    Product: (app name) -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2602.

    The package was a vendor-provided MSI that had been installed administratively, with a patch (.msp) then applied to the administrative install point. With some digging we found that the MSI still had the entries in the Media table pointing at the CAB files, and that there were several files at the end of the sequence that did not have corresponding entries in the Media table (last sequence 990 in the Media table, while the last entry in the File table had sequence 994). Attributes on files in the File table all had the msidbFileAttributesCompressed (&16384) attribute set, so they were all expected to be within the CAB files, but since this was an admin install there were no CAB files.

    Resolved by clearing the Media table (replacing it with a single entry: Disk ID 1, LastSequence 994) and going through the File table, subtracting 8192 from each entry to mark the files as not compressed. Tested and it worked.

    Read the article

  • validate uniqueness amongst multiple subclasses with Single Table Inheritance

    - by irkenInvader
    I have a Card model that has many Sets and a Set model that has many Cards through a Membership model:

    class Card < ActiveRecord::Base
      has_many :memberships
      has_many :sets, :through => :memberships
    end

    class Membership < ActiveRecord::Base
      belongs_to :card
      belongs_to :set
      validates_uniqueness_of :card_id, :scope => :set_id
    end

    class Set < ActiveRecord::Base
      has_many :memberships
      has_many :cards, :through => :memberships
      validates_presence_of :cards
    end

    I also have some subclasses of the above using Single Table Inheritance:

    class FooCard < Card
    end

    class BarCard < Card
    end

    and

    class Expansion < Set
    end

    class GameSet < Set
      validates_size_of :cards, :is => 10
    end

    All of the above is working as I intend. What I'm trying to figure out is how to validate that a Card can only belong to a single Expansion. I want the following to be invalid:

    some_cards = FooCard.all( :limit => 25 )
    first_expansion = Expansion.new
    second_expansion = Expansion.new
    first_expansion.cards = some_cards
    second_expansion.cards = some_cards
    first_expansion.save   # Valid
    second_expansion.save  # **Should be invalid**

    However, GameSets should allow this behavior:

    other_cards = FooCard.all( :limit => 10 )
    first_set = GameSet.new
    second_set = GameSet.new
    first_set.cards = other_cards   # Valid
    second_set.cards = other_cards  # Also valid

    I'm guessing that a validates_uniqueness_of call is needed somewhere, but I'm not sure where to put it. Any suggestions?

    UPDATE 1

    I modified the Expansion class as suggested:

    class Expansion < Set
      validate :validates_uniqueness_of_cards

      def validates_uniqueness_of_cards
        membership = Membership.find( :first, :include => :set,
          :conditions => [ "card_id IN (?) AND sets.type = ?", self.cards.map(&:id), "Expansion" ] )
        errors.add_to_base("a Card can only belong to a single Expansion") unless membership.nil?
      end
    end

    This works when creating initial expansions, validating that no current expansion contains the cards. However, it (falsely) invalidates future updates to the same expansion with new cards. In other words:

    old_exp = Expansion.find(1)
    old_exp.card_ids              # returns [1,2,3,4,5]
    new_exp = Expansion.new
    new_exp.card_ids = [6,7,8,9,10]
    new_exp.save                  # returns true
    new_exp.card_ids << [11,12]   # no other Expansion contains these cards
    new_exp.valid?                # returns false ... SHOULD be true

    Read the article

  • Convert VARCHAR() columns to NVARCHAR()

    - by ChrisD
    We recently underwent an upgrade that required us to change our database columns from VARCHAR to NVARCHAR to support Unicode characters. Digging through the internet, I found a base script which I modified to handle reserved-word table names and to maintain the NULL/NOT NULL constraint of each column. I ran this script:

    use NWOperationalContent -- your catalog name here
    GO
    SELECT 'ALTER TABLE ' + isnull(schema_name(syo.id), 'dbo') + '.[' + syo.name + '] '
        + ' ALTER COLUMN [' + syc.name + '] NVARCHAR('
        + case syc.length when -1 then 'MAX' ELSE convert(nvarchar(10), syc.length) end + ') '
        + case syc.isnullable when 1 then ' NULL' ELSE ' NOT NULL' END + ';'
    FROM sysobjects syo
    JOIN syscolumns syc ON syc.id = syo.id
    JOIN systypes syt ON syt.xtype = syc.xtype
    WHERE syt.name = 'varchar'
      and syo.xtype = 'U'

    which produced a series of ALTER statements that I could then execute against the tables. In some cases I had to drop indexes, alter the tables, and re-create the indexes. There might have been a better way to do that, but manually dropping them got the job done.

    use NWMerchandisingContent
    GO
    ALTER TABLE Locale DROP CONSTRAINT PK_Locale
    ALTER TABLE Country DROP CONSTRAINT PK_Country
    GO
    ALTER TABLE dbo.[Campaign] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [UnitOfmeasure] NVARCHAR(200) NULL;
    ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Imperative] NVARCHAR(MAX) NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Instructions] NVARCHAR(MAX) NULL;
    ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[BundleComponent] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Bundle] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Banner] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Video] ALTER COLUMN [Link] NVARCHAR(512) NOT NULL;
    ALTER TABLE dbo.[Video] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [VideoLink] NVARCHAR(512) NOT NULL;
    ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[Thumbnail] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [UnitOfMeasure] NVARCHAR(150) NOT NULL;
    ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [SwatchColor] NVARCHAR(50) NOT NULL;
    etc..
    GO
    ALTER TABLE Locale ADD CONSTRAINT PK_Locale PRIMARY KEY (LocaleId)
    ALTER TABLE Country ADD CONSTRAINT PK_Country PRIMARY KEY (CountryId)

    Note that this ALTER is non-destructive to the data. Hope this helps.

    Read the article

  • Setting user calendar permissions on Exchange 2007

    - by blizz
    We have Exchange 2007 with about 100 users. I would like to change everyone's free/busy permissions to grant Reviewer status to a specific AD group. I have tried the PFDAVAdmin tool, but when I commit any changes they do not affect the users. If I grant myself Reviewer permissions to another user's calendar using the tool, I still cannot view that user's free/busy details, and I also don't show up on the list of people with permissions in that user's Outlook calendar options. PFDAVAdmin simply appears to do something but doesn't actually change anything. Is there any other way for me to accomplish what I need to do? Or is there something I may not be doing right with PFDAVAdmin? FYI, I have followed the directions from this link: http://exchangeshare.wordpress.com/2008/05/27/faq-give-calendar-read-permission-on-all-mailboxes-pfdavadmin/

    Read the article

  • Edit Enterprise Calendar in Project Server 2010

    - by Chris W
    Has anyone been able to successfully edit the Standard calendar in 2010? I'm trying to change the working times, as none of our admin accounts seems able to do it. We're running Project Server 2010 RTM on SharePoint 2010 RTM with Project Pro 2010. When I click the Edit Calendar button in PWA, it triggers the Project client to open, but it just opens an empty project and I've had no success editing the Standard calendar using any of the published steps. It would be great to hear if someone has managed to do this, so I can work out whether it's a general glitch in this build or just a problem with our setup.

    Read the article

  • How do I open multiple windows when Outlook 2010 starts?

    - by Eric
    OS: Windows 7 64-bit App: Outlook 2010 32-bit Server: Exchange 2010 I'd like to modify Outlook's default startup behavior so that it shows both my Inbox and Calendar when I click my shortcut. I use both of them all day, and know how to just right-click the calendar and select "Open in New Window." I run my inbox on one screen and my calendar on another. I also configured my calendar to be the folder that opens by default when I start Outlook so I don't miss early appointments, but if I could somehow have BOTH open in two separate windows, that would be awesome. Is there a command-line interface or something that can accomplish this? Thanks in advance.

    Read the article

  • MySQL syntax: can't create table

    - by peng
    mysql> create table balance_sheet(
        -> Cash_and_cash_equivalents VARCHAR(20),
        -> Trading_financial_assets VARCHAR(20),
        -> Note_receivable VARCHAR(20),
        -> Account_receivable VARCHAR(20),
        -> Advance_money VARCHAR(20),
        -> Interest_receivable VARCHAR(20),
        -> Dividend_receivable VARCHAR(20),
        -> Other_notes_receivable VARCHAR(20),
        -> Due_from_related_parties VARCHAR(20),
        -> Inventory VARCHAR(20),
        -> Consumptive_biological_assets VARCHAR(20),
        -> Non_current_asset(expire_in_a_year) VARCHAR(20),
        -> Other_current_assets VARCHAR(20),
        -> Total_current_assets VARCHAR(20),
        -> Available_for_sale_financial_assets VARCHAR(20),
        -> Held_to_maturity_investment VARCHAR(20),
        -> Long_term_account_receivable VARCHAR(20),
        -> Long_term_equity_investment VARCHAR(20),
        -> Investment_real_eastate VARCHAR(20),
        -> Fixed_assets VARCHAR(20),
        -> Construction_in_progress VARCHAR(20),
        -> Project_material VARCHAR(20),
        -> Liquidation_of_fixed_assets VARCHAR(20),
        -> Capitalized_biological_assets VARCHAR(20),
        -> Oil_and_gas_assets VARCHAR(20),
        -> Intangible_assets VARCHAR(20),
        -> R&d_expense VARCHAR(20),
        -> Goodwill VARCHAR(20),
        -> Deferred_assets VARCHAR(20),
        -> Deferred_income_tax_assets VARCHAR(20),
        -> Other_non_current_assets VARCHAR(20),
        -> Total_non_current_assets VARCHAR(20),
        -> Total_assets VARCHAR(20),
        -> Short_term_borrowing VARCHAR(20),
        -> Transaction_financial_liabilities VARCHAR(20),
        -> Notes_payable VARCHAR(20),
        -> Account_payable VARCHAR(20),
        -> Item_received_in_advance VARCHAR(20),
        -> Employee_pay_payable VARCHAR(20),
        -> Tax_payable VARCHAR(20),
        -> Interest_payable VARCHAR(20),
        -> Dividend_payable VARCHAR(20),
        -> Other_account_payable VARCHAR(20),
        -> Due_to_related_parties VARCHAR(20),
        -> Non_current_liabilities_due_within_one_year VARCHAR(20),
        -> Other_current_liabilities VARCHAR(20),
        -> Total_current_liabilities VARCHAR(20),
        -> Long_term_loan VARCHAR(20),
        -> Bonds_payable VARCHAR(20),
        -> Long_term_payable VARCHAR(20),
        -> Specific_payable VARCHAR(20),
        -> Estimated_liabilities VARCHAR(20),
        -> Deferred_income_tax_liabilities VARCHAR(20),
        -> Other_non_current_liabilities VARCHAR(20),
        -> Total_non_current_liabilities VARCHAR(20),
        -> Total_liabilities VARCHAR(20),
        -> Paid_in_capital VARCHAR(20),
        -> Contributed_surplus VARCHAR(20),
        -> Treasury_stock VARCHAR(20),
        -> Earned_surplus VARCHAR(20),
        -> Retained_earnings VARCHAR(20),
        -> Translation_reserve VARCHAR(20),
        -> Nonrecurring_items VARCHAR(20),
        -> Total_equity(non) VARCHAR(20),
        -> Minority_interests VARCHAR(20),
        -> Total_equity VARCHAR(20),
        -> Total_liabilities_&_shareholder's_equity VARCHAR(20));
        '>
        '>

    When I press Enter, the client just shows another '> continuation prompt and nothing else happens. What's wrong?
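    Incidentally, the trailing '> prompts are the clue: the client thinks a quote is still open, because the apostrophe in Total_liabilities_&_shareholder's_equity starts a string literal and swallows the closing ");". Even once the statement is sent, names containing parentheses or & are not valid unquoted identifiers. A minimal sketch of the two usual fixes (the renamed columns are purely illustrative):

    CREATE TABLE balance_sheet (
      Cash_and_cash_equivalents VARCHAR(20),
      Non_current_asset_due_within_a_year VARCHAR(20),        -- renamed to avoid the parentheses
      `R&d_expense` VARCHAR(20),                               -- special characters kept, but quoted with backticks
      Total_liabilities_and_shareholders_equity VARCHAR(20)    -- & and apostrophe dropped entirely
    );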

    Read the article

  • SQL Cartesian product joining table to itself and inserting into existing table

    - by Emma
    I am working in phpMyAdmin using SQL. I want to take the primary key (EntryID) from TableA and create a cartesian product (if I am using the term correctly) in TableB (an empty table that has already been created) for all entries which share the same value for FieldB in TableA, except where a row would be paired with itself (i.e. the two EntryID values are equal). So, for example, if the values in TableA were:

    TableA.EntryID   TableA.FieldB
    1                23
    2                23
    3                23
    4                25
    5                25
    6                25

    The result in TableB would be:

    Primary key   EntryID1   EntryID2   FieldD (default or manually entered)
    1             1          2          Default value
    2             1          3          Default value
    3             2          1          Default value
    4             2          3          Default value
    5             3          1          Default value
    6             3          2          Default value
    7             4          5          Default value
    8             4          6          Default value
    9             5          4          Default value
    10            5          6          Default value
    11            6          4          Default value
    12            6          5          Default value

    I am used to working in Access, and this is the first query I have attempted in SQL. I started trying to work out the query and got this far. I know it's not right yet, as I'm still trying to get used to the syntax and pieced this together from various articles I found online. In particular, I wasn't sure where the INSERT INTO text went (to create what would be an Append Query in Access):

    SELECT EntryID
    FROM TableA.EntryID TableA.EntryID
    WHERE TableA.FieldB=TableA.FieldB TableA.EntryID<>TableA.EntryID
    INSERT INTO TableB.EntryID1 TableB.EntryID2

    After I've got that query right, I need to do a TRIGGER query (I think), so that if an entry changes its value in TableA.FieldB (changing its membership of that grouping to another grouping), the cartesian product will be re-run on THAT entry, unless TableB.FieldD = valueA or valueB (manually entered values). I have been using the Designer tab. Does there have to be a relationship link between TableA and TableB? If so, would it be two links from the EntryID primary key in TableA, one to each EntryID in TableB? I assume this would not work because they are numbered EntryID1 and EntryID2 and the name needs to be the same to set up a relationship? If you can offer any suggestions, I would be very grateful.

    Research: http://www.fluffycat.com/SQL/Cartesian-Joins/
    Cartesian Join example two Q: You said you can have a Cartesian join by joining a table to itself. Show that!
    Select * From Film_Table T1, Film_Table T2;
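    For what it's worth, a minimal sketch of the insert described above (based only on the column names in the example; it assumes the primary key and the FieldD default are handled by TableB's definition) is a self-join on FieldB:

    INSERT INTO TableB (EntryID1, EntryID2)
    SELECT a.EntryID, b.EntryID
    FROM TableA AS a
    JOIN TableA AS b
      ON a.FieldB = b.FieldB        -- same FieldB grouping
     AND a.EntryID <> b.EntryID;    -- but never pair a row with itself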

    Read the article

  • Updating a sql server table with data from another table

    - by David G
    I have two basic SQL Server tables:

    Customer (ID [pk], AddressLine1, AddressLine2, AddressCity, AddressDistrict, AddressPostalCode)
    CustomerAddress (ID [pk], CustomerID [fk], Line1, Line2, City, District, PostalCode)

    CustomerAddress contains multiple addresses for each Customer record. For each Customer record I want to merge in the most recent CustomerAddress record, where "most recent" is determined by the highest CustomerAddress ID value. I currently have the following:

    UPDATE Customer
    SET AddressLine1 = CustomerAddress.Line1,
        AddressPostalCode = CustomerAddress.PostalCode
    FROM Customer, CustomerAddress
    WHERE Customer.ID = CustomerAddress.CustomerID

    which works, but how can I ensure that the most recent (highest ID) CustomerAddress record is the one used to update the Customer table?
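    One possible direction (a sketch only, using the column names above and not tested against the real schema) is to join through a derived table that keeps just the highest address ID per customer:

    UPDATE c
    SET c.AddressLine1 = ca.Line1,
        c.AddressPostalCode = ca.PostalCode
    FROM Customer AS c
    JOIN CustomerAddress AS ca
      ON ca.CustomerID = c.ID
    JOIN (SELECT CustomerID, MAX(ID) AS MaxID   -- latest address per customer
          FROM CustomerAddress
          GROUP BY CustomerID) AS latest
      ON latest.CustomerID = ca.CustomerID AND latest.MaxID = ca.ID;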

    Read the article

  • What advantages do we have when creating a separate mapping table for two relational tables

    - by Pankaj Upadhyay
    In various open source CMSes, I have noticed that there is a separate table for mapping two related tables. For example, for categories and products there is a separate product_category_mapping table. This table just has a primary key and two foreign keys, one from the categories table and one from the products table. My question is: what are the benefits of this database design over linking the tables directly by defining a foreign key in either table? Is it just a matter of convenience?
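    The short answer is that a foreign key placed directly in either table can only model a one-to-many relationship (each product gets exactly one category, or each category exactly one product); the separate mapping table is what allows a genuine many-to-many relationship, and it also gives you a place to enforce that each pairing is stored only once. A minimal sketch, with table and column names assumed purely for illustration:

    CREATE TABLE product_category_mapping (
      id          INT PRIMARY KEY,
      product_id  INT NOT NULL,
      category_id INT NOT NULL,
      UNIQUE (product_id, category_id),                   -- each product/category pairing stored once
      FOREIGN KEY (product_id)  REFERENCES products(id),
      FOREIGN KEY (category_id) REFERENCES categories(id)
    );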

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:

    CREATE TABLE [dbo].[lq_ActivityLog](
        [ID] [bigint] IDENTITY(1,1) NOT NULL,
        [PlacementID] [int] NOT NULL,
        [CreativeID] [int] NOT NULL,
        [PublisherID] [int] NOT NULL,
        [CountryCode] [nvarchar](10) NOT NULL,
        [RequestedZoneID] [int] NOT NULL,
        [AboveFold] [int] NOT NULL,
        [Period] [datetime] NOT NULL,
        [Clicks] [int] NOT NULL,
        [Impressions] [int] NOT NULL,
        CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
            [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
            [RequestedZoneID] ASC, [AboveFold] ASC, [CountryCode] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    And now some assumptions and additional information:
    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO code (e.g. US) and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:

    Design Goals
    This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor
    There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
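    As one illustration of the kind of change being hinted at (a sketch only, emphatically not the author's follow-up answer): several columns are much wider than their stated ranges require, so a narrowed version of the column list might look like this, assuming SQL Server 2008+ for the date type:

    -- Hypothetical narrowed column types; names and sizes are illustrative only
    CREATE TABLE [dbo].[lq_ActivityLog_narrow](
        [PlacementID]     [int]      NOT NULL,
        [CreativeID]      [int]      NOT NULL,
        [PublisherID]     [int]      NOT NULL,
        [CountryID]       [smallint] NOT NULL,   -- integer FK to the existing country table instead of nvarchar(10)
        [RequestedZoneID] [int]      NOT NULL,
        [AboveFold]       [smallint] NOT NULL,   -- only needs -1 / 0 / 1
        [Period]          [date]     NOT NULL,   -- no time component is stored
        [Clicks]          [smallint] NOT NULL,   -- 0..5000
        [Impressions]     [int]      NOT NULL    -- 0..5,000,000
    )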

    Read the article

  • How to access a row from af:table out of context

    - by Vijay Mohan
    Scenario: let's say you have an ADF table in a .jsff page fragment, and the fragment is included as an af:region inside another page (the parent page). Your requirement is to access some specific rows from the table and do some operations on them. Since you are accessing the table outside the context in which it is present, you first have to set up that context, and then you can use the visitCallback mechanism to do the operations on the table. Here is the sample code:

    final RichTable table = this.getRichTable();
    FacesContext facesContext = FacesContext.getCurrentInstance();
    VisitContext visitContext = RequestContext.getCurrentInstance().createVisitContext(facesContext, null,
        EnumSet.of(VisitHint.SKIP_TRANSIENT, VisitHint.SKIP_UNRENDERED), null);
    // Anonymous call
    UIXComponent.visitTree(visitContext, facesContext.getViewRoot(), new VisitCallback() {
        public VisitResult visit(VisitContext context, UIComponent target) {
            if (table != target) {
                return VisitResult.ACCEPT;
            } else if (table == target) {
                // Here goes the actual logic
                Iterator selection = table.getSelectedRowKeys().iterator();
                while (selection.hasNext()) {
                    Object key = selection.next();
                    // store the original key
                    Object origKey = table.getRowKey();
                    try {
                        table.setRowKey(key);
                        Object o = table.getRowData();
                        JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding) o;
                        Row row = rowData.getRow();
                        System.out.println(row.getAttribute(0));
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    } finally {
                        // restore original key
                        table.setRowKey(origKey);
                    }
                }
            }
            return VisitResult.COMPLETE;
        }
    });

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the content of temporary tables.

    I was optimizing SQL Server performance at one of my customers, who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an ALTER TABLE statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile, you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • ProgrammingError: (1146, "Table 'test_<DB>.<TABLE>' doesn't exist") when running unit tests for Django

    - by abigblackman
    I'm running a unit test using the Django framework and get this error. Running the actual code does not have this problem; running the unit tests creates a test database on the fly, so I suspect the issue lies there. The code that throws the error looks like this:

    member = Member.objects.get(email=email_address)

    and the model looks like:

    class Member(models.Model):
        member_id = models.IntegerField(primary_key=True)
        created_on = models.DateTimeField(editable=False, default=datetime.datetime.utcnow())
        flags = models.IntegerField(default=0)
        email = models.CharField(max_length=150, blank=True)
        phone = models.CharField(max_length=150, blank=True)
        country_iso = models.CharField(max_length=6, blank=True)
        location_id = models.IntegerField(null=True, blank=True)
        facebook_uid = models.IntegerField(null=True, blank=True)
        utc_offset = models.IntegerField(null=True, blank=True)
        tokens = models.CharField(max_length=3000, blank=True)

        class Meta:
            db_table = u'member'

    There's nothing too odd there that I can see. The user running the tests has the same permissions on the database server as the user that runs the website. Where else can I look to see what's going wrong? Why is this table not being created?

    Read the article

  • How to rotate a table headline in a LaTeX table

    - by pagid
    Hi, is there a way to rotate the "Demo1", "Demo2" and "Demo3" headlines 90° in the following LaTeX table?

    \documentclass[a4paper,twoside,10pt]{report}
    \begin{document}
    \begin{tabular}{|l|l|l|l|}
    \hline
     & Demo1 & Demo2 & Demo3 \\ \hline
    Person 1 & x & & \\ \hline
    Person 2 & x & & x \\ \hline
    Person 3 & x & x & \\ \hline
    Person 4 & & x & x \\ \hline
    \end{tabular}
    \end{document}

    Thanks

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality, we have been getting more questions about how incremental statistics are expected to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they ran DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. That means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table.

    It might be easier to demonstrate this using an example. Let's take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions now shows 13-Mar-12, and we have the following column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table.

    So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update.

    So, incremental statistics maintenance will gather statistics on any partition whose data has changed, where that change will impact the global-level statistics.
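    For reference, enabling incremental statistics and then gathering them is just two DBMS_STATS calls; the SH schema name below is an assumption made for the example:

    -- Enable incremental statistics maintenance for the partitioned table (schema name assumed)
    exec dbms_stats.set_table_prefs('SH', 'ORDERS2', 'INCREMENTAL', 'TRUE');

    -- Gather statistics; partition-level synopses are aggregated into the global statistics
    exec dbms_stats.gather_table_stats('SH', 'ORDERS2');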

    Read the article

  • Table Alias in SubSonic

    - by rockacola
    How can I assign aliases to tables with SubSonic 2.1? I am trying to reproduce the following query:

    SELECT *
    FROM posts P
    RIGHT OUTER JOIN post_meta X ON P.post_id = X.post_id
    RIGHT OUTER JOIN post_meta Y ON P.post_id = Y.post_id
    WHERE X.meta_key = "category" AND X.meta_value = "technology"
    AND Y.meta_key = "keyword" AND Y.meta_value = "cloud"

    I am using SubSonic 2.1, and upgrading to 2.2 isn't an option (yet). Thanks.

    Read the article

  • How to JOIN a COUNT from a table, and then affect that COUNT with another JOIN

    - by jakenoble
    Hi, I have three tables:

    Post
    ID   Name
    1    'Something'
    2    'Something else'
    3    'One more'

    Comment
    ID   PostId   ProfileID   Comment
    1    1        1           'Hi my name is'
    2    2        2           'I like cakes'
    3    3        3           'I hate cakes'

    Profile
    ID   Approved
    1    1
    2    0
    3    1

    I want to count the comments for each post where the profile that made the comment is approved. I can select the data from Post and then join a count from Comment fine, but the count should depend on whether the Profile is approved or not. The results I am expecting are:

    PostId   CommentCount
    1        1
    2        0
    3        1

    Thanks for any help.
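    One way to express this (a sketch against the sample rows above, with the approval test folded into the outer join so unapproved profiles simply contribute nothing to the count):

    SELECT p.ID AS PostId,
           COUNT(pr.ID) AS CommentCount        -- counts only comments whose profile matched as approved
    FROM Post p
    LEFT JOIN Comment c ON c.PostId = p.ID
    LEFT JOIN Profile pr ON pr.ID = c.ProfileID AND pr.Approved = 1
    GROUP BY p.ID;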

    Read the article
