Search Results

Search found 10101 results on 405 pages for 'temporary tables'.


  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see the value in it. Well, today's blog post is also about something I have rarely seen practiced in code. We get so comfortable with one way of doing things that we do not feel like switching how we query the data. I was going over the forums and noticed that in one place a user had used the following code to get the schema name from an object ID.
    USE AdventureWorks2012 GO
    SELECT s.name AS SchemaName, t.name AS TableName, s.schema_id, t.OBJECT_ID FROM sys.Tables t INNER JOIN sys.schemas s ON s.schema_id = t.schema_id WHERE t.name = OBJECT_NAME(46623209) GO
    Before I continue, let me say I do not see anything wrong with this script. It is just fine and one valid way to get the schema name from an object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from scratch, I would have written it as follows.
    SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName, t.name AS TableName, t.schema_id, t.OBJECT_ID FROM sys.tables t WHERE t.name = OBJECT_NAME(46623209) GO
    Both of the above queries give you exactly the same result. If you remove the WHERE condition, they return this information for all the tables in the database. Now, the question is which one is better - honestly, it is not about one being better than the other. Use the one you prefer. I prefer the second one as it requires less typing. Let me ask you the same question - which method do you use to get the schema name, and why? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology
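
    If you do not happen to know the numeric object id (46623209 above is specific to Pinal's own database), a small hedged variation resolves it from the two-part name first - here using the AdventureWorks2012 Person.Address table purely as an example:

    USE AdventureWorks2012
    GO
    -- Resolve the object id from the table name, then derive schema and table name from it
    SELECT OBJECT_SCHEMA_NAME(OBJECT_ID(N'Person.Address')) AS SchemaName,
           OBJECT_NAME(OBJECT_ID(N'Person.Address'))        AS TableName;
    GO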

    Read the article

  • sp_help

    - by David-Betteridge
    One of the nice things about SQL Server Management Studio (SSMS) is that you can highlight a table name in a script and press Alt + F1 to perform sp_help on it. Unfortunately I've never been able to use that feature, as the majority of the tables in our product belong to a schema other than dbo. On a long train journey back to York I wondered if I could solve this problem by writing my own replacement for sp_help (which I've called sp_help_table_schemas). My version works by first checking the system tables to find out which schema the table belongs to:
    SELECT s.Name   --Find the schema
    FROM sys.schemas s JOIN sys.tables t ON t.schema_id = s.schema_id WHERE t.name = 'Orders'
    It then dynamically calls the standard sp_help procedure, but this time supplying the table owner as well:
    SET @cmd = 'EXEC sp_help ''' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ObjectName) + ''' ;' ; EXEC ( @cmd )
    Once I had proved the basics worked, I wrapped it up into a stored procedure and deployed it to the master database on my laptop. It was then just a question of going into Tools > Options within SSMS and defining the keyboard shortcut. A couple of notes: you can't amend the existing Alt+F1 entry, so I went with Ctrl+F1, and you need to open a new query window for the change to be picked up. So I can now highlight a table name and press Ctrl+F1. The completed script is attached. Thanks go to Martin Bell, who reviewed my stored procedure and gave some valuable advice.
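
    The completed script is only available as the attachment, so here is a minimal sketch of how the described pieces might fit together. The procedure body, parameter name and TOP (1) tie-break are assumptions based on the description above, not the author's actual script:

    CREATE PROCEDURE dbo.sp_help_table_schemas
        @ObjectName sysname
    AS
    BEGIN
        DECLARE @SchemaName sysname, @cmd nvarchar(max);

        -- Find the schema that owns the table (first match wins if the name is ambiguous)
        SELECT TOP (1) @SchemaName = s.name
        FROM sys.schemas s
        JOIN sys.tables t ON t.schema_id = s.schema_id
        WHERE t.name = @ObjectName;

        -- Call the standard sp_help, this time with the owner supplied
        SET @cmd = 'EXEC sp_help ''' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ObjectName) + ''' ;';
        EXEC (@cmd);
    END

    As in the original, the procedure would then be deployed to master and bound to a shortcut such as Ctrl+F1 under Tools > Options > Keyboard.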

    Read the article

  • Can't get lines around table borders/cells [migrated]

    - by Ira Baxter
    I have several web pages containing tables, for which I'd like to have line borders around the tables and the cells. In fact, some of these pages have existed for several years already and rendered acceptably in IE6 and IE7. We switched about 6 months ago to a completely different set of style sheets to change our site look and feel. We also switched to "modern" browsers such as IE8 (and, because I couldn't stop Vista, IE9). Now the borders don't render at all. I spent a day fighting with this about a month ago and failed to fix it. It seemed that I could reduce the page down to just the barest table and IE8 would still not render the border. I think I decided IE8 was just buggy, but I'm not an HTML expert so it is more likely that I'm buggy. (I'm just getting back to this; I'll go see if I can find that reduced page.) Here is one such page: http://www.semdesigns.com/products/DMS/DMSComparison.html The tables should be obvious; you can tell them by their absence of lines :-{ The URI validates as HTML 4.01 Transitional using the W3C service. Any suggestions?

    Read the article

  • Getting the relational table data into XML recursively

    - by Tom
    I have levels of tables (Level1, Level2, Level3, ...). For simplicity, we'll say I have 3 levels. The rows in the higher-level tables are parents of lower-level table rows, and the relationship does not skip levels. E.g. Row1Level1 is the parent of Row3Level2, and Row2Level2 is the parent of Row4Level3. A Level(n) row's parent is always in Level(n-1). Given these tables with data, I need to come up with a recursive function that generates an XML file to represent the relationship and the data. E.g. <data> <level levelid = 1 rowid=1> <level levelid = 2 rowid=3 /> </level> <level levelid = 2 rowid=2> <level levelid = 3 rowid=4 /> </level> </data> I would like help with coming up with pseudo-code for this setup. This is what I have so far: XElement GetXMLData(Table table, string identifier, XElement data) { XElement xmlData = data; if (table != null) { foreach (row in the table) { // Get subordinate table Table subordinateTable = GetSubordinateTable(table); // Get the XML Data for the children of current row xmlData += GetXMLData(subordinateTable, row.Identifier, xmlData); } } return xmlData; }
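
    Not an answer to the C# pseudo-code itself, but worth noting as an alternative route: if the level tables live in SQL Server, nested FOR XML subqueries can emit this shape directly without client-side recursion. The sketch below is only illustrative - the table and column names (Level1/Level2/Level3, RowId, ParentId) are assumptions, not taken from the question:

    SELECT 1 AS [@levelid],
           l1.RowId AS [@rowid],
           (SELECT 2 AS [@levelid],
                   l2.RowId AS [@rowid],
                   (SELECT 3 AS [@levelid],
                           l3.RowId AS [@rowid]
                    FROM Level3 l3
                    WHERE l3.ParentId = l2.RowId
                    FOR XML PATH('level'), TYPE)      -- grandchildren nested inside each child
            FROM Level2 l2
            WHERE l2.ParentId = l1.RowId
            FOR XML PATH('level'), TYPE)              -- children nested inside each top-level row
    FROM Level1 l1
    FOR XML PATH('level'), ROOT('data');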

    Read the article

  • Data that has been deleted in P6, how is it updated in Analytics

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0 the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs or those deletes would be lost; after the ETL process has run, refrdel can be cleared out. It is important to keep any purging of refrdel on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup around ETL runs no longer applies as of this release. In the Extended Schema tables (e.g. TaskX) there can still be deleted data present; the Extended Schema views join to the primary PMDB tables (e.g. Task) and filter out any deleted data, as sketched below. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the clean-up procedure as documented in the P6 Extended Schema white paper. This can be run occasionally but does not need to run often unless large amounts of data have been deleted.
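
    To make that filtering pattern concrete, here is a purely conceptual sketch of a view that joins back to the primary PMDB table and hides deleted rows. It is not the actual P6 view definition, and the object and column names (task_id, delete_session_id) are hypothetical - consult the P6 Extended Schema documentation for the real objects:

    -- Conceptual illustration only; names are hypothetical, not the shipped P6 definition.
    CREATE VIEW TaskX_Filtered AS
    SELECT tx.*
    FROM TaskX tx
    INNER JOIN Task t
        ON t.task_id = tx.task_id          -- join back to the primary PMDB table
    WHERE t.delete_session_id IS NULL;     -- rows flagged as deleted are filtered out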

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40 GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused system stored procedure, like so:

    EXEC sp_spaceused lq_ActivityLog

    This reports the table's row count along with its reserved, data, index and unused space (shown as a screenshot in the original post). Of course, this only shows one table. If you have a lot of tables and need to know which ones are taking up the most space, it would be nice to run a query that lists all of the tables, ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

    -- Lists space used for all user tables
    CREATE PROCEDURE GetAllTableSizes
    AS
    DECLARE @TableName VARCHAR(100)

    DECLARE tableCursor CURSOR FORWARD_ONLY
    FOR
        SELECT [name]
        FROM dbo.sysobjects
        WHERE OBJECTPROPERTY(id, N'IsUserTable') = 1
    FOR READ ONLY

    CREATE TABLE #TempTable
    (
        tableName varchar(100),
        numberofRows varchar(100),
        reservedSize varchar(50),
        dataSize varchar(50),
        indexSize varchar(50),
        unusedSize varchar(50)
    )

    OPEN tableCursor
    WHILE (1=1)
    BEGIN
        FETCH NEXT FROM tableCursor INTO @TableName
        IF (@@FETCH_STATUS <> 0) BREAK;
        INSERT #TempTable EXEC sp_spaceused @TableName
    END
    CLOSE tableCursor
    DEALLOCATE tableCursor

    UPDATE #TempTable
    SET reservedSize = REPLACE(reservedSize, ' KB', '')

    SELECT tableName 'Table Name',
           numberofRows 'Total Rows',
           reservedSize 'Reserved KB',
           dataSize 'Data Size',
           indexSize 'Index Size',
           unusedSize 'Unused Size'
    FROM #TempTable
    ORDER BY CONVERT(bigint, reservedSize) DESC

    DROP TABLE #TempTable
    GO
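
    As a side note, on SQL Server 2005 and later a similar listing can be produced without a cursor by aggregating sys.dm_db_partition_stats (pages are 8 KB, hence the * 8). This is a minimal sketch of that alternative, not part of the original article:

    SELECT s.name AS SchemaName,
           t.name AS TableName,
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS TotalRows,
           SUM(ps.reserved_page_count) * 8 AS ReservedKB
    FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.tables t ON t.[object_id] = ps.[object_id]
    INNER JOIN sys.schemas s ON s.[schema_id] = t.[schema_id]
    GROUP BY s.name, t.name
    ORDER BY ReservedKB DESC;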

    Read the article

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), so the game can be embedded in a browser, and the server in C++ (the abstract part of which is already written and used with other games). Networking models may differ, but currently I'm leaning toward running the game logic on both the client and the server, even though they are written in different languages - that is not the main problem, though. My previous game (a pretty big one - implemented by ~5 programmers over 1.5 years) was mainly "written" in spreadsheets as structured objects with inheritance: a standalone tool was written which generated AS3 and C++ (the languages of the platforms the game was published to) from a specified spreadsheet file (.xls or .ods). That file contained ~50 tables with ~50 rows and ~50 columns each and was mainly written by game designers who do not know any programming language. But that game was single-player. Having described the problem with the MMO I am currently implementing, I'm looking for some broad pipeline that resolves problems such as: game object descriptions (which starships exist in the game, how much HP they have, how fast they move, what damage they deal...) action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells) - actions are transmitted between clients through the server influences (what happens when a specified action is applied to a specified object, e.g. "Ship A attacked Ship B: field "HP" of Ship B is reduced by the amount of field "damage" of Ship A"). Influences can be much more complicated, yes, e.g. "damage is twice its size when the ship has >= 5 allies within a 200-unit range during night", and so on. If such logic can be written in some "design document", it becomes easily possible to: let designers do their job without programmer intervention or any bug-prone programming; validate the described logic; transfer (transform, convert) it to any programming language in which it will be executed. Has somebody worked on something like that? Are there tools/engines/pipelines concerned with this? What is the best way to handle all of these problems simultaneously, or am I even framing my tasks and problems correctly?

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We'll be looking at a typical database 'migration' script which uses an unusual technique to migrate existing 'de-normalised' data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into 'de-normalisation' habits. What's this? A new column with a list of tags or 'types' assigned to books. Because books aren't really in just one category, someone has 'cured' the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the 'Type' field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can't be persuaded to do reports based on the tags and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn't exist, so we've added it and put in 10,000 titles using SQL Data Generator.

    /* So how do we refactor this database? Firstly, we create a table of all the tags. */
    IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
    BEGIN
      CREATE TABLE TagName
        (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
         Tag VARCHAR(20) NOT NULL UNIQUE)

      /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
      INSERT INTO TagName (Tag)
        SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
        FROM (SELECT Title_ID,
                     CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
              FROM dbo.titles) g
        CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )

      /* we can then use this table to provide a table that relates tags to articles */
      CREATE TABLE TagTitle
        (TagTitle_ID INT IDENTITY(1, 1),
         [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
         TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
         CONSTRAINT [PK_TagTitle]
           PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
           ON [PRIMARY])

      CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
        INCLUDE (TagTitle_ID, title_id)

      /* ...and it is easy to fill this with the tags for each title ... */
      INSERT INTO TagTitle (Title_ID, TagName_ID)
        SELECT DISTINCT Title_ID, TagName_ID
        FROM (SELECT Title_ID,
                     CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
              FROM dbo.titles) g
        CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
        INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
    END

    /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
    SELECT Title FROM titles
      INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
      INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
      WHERE tagname.tag = 'Military'

    /* and see the top ten most popular tags for titles */
    SELECT Tag, COUNT(*) FROM titles
      INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
      INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
      GROUP BY Tag ORDER BY COUNT(*) DESC

    /* and if you still want your list of tags for each title, then here they are */
    SELECT title_ID, title, STUFF(
      (SELECT ',' + tagname.tag FROM titles thisTitle
         INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
         INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
       WHERE ThisTitle.title_id = titles.title_ID
       FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
      , 1, 1, '')
    FROM titles
    ORDER BY title_ID

    So we've refactored our PUBS database without pain. We've even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship (shown in the original article). We've done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I'd do a whole lot of tests first before doing that. Yes, I've left out the assertion tests too, which should check the edge cases and make sure the result is what you'd expect. One thing I can't quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary 'type' to be first in the list, but what if the entire order is significant? Thank goodness it isn't in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence. You'll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of overriding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I've taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run. The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case of the rollback of a deployment after it has been in place for any length of time. I've shown you how to do this with this part of the script...

    /* and if you still want your list of tags for each title, then here they are */
    SELECT title_ID, title, STUFF(
      (SELECT ',' + tagname.tag FROM titles thisTitle
         INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
         INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
       WHERE ThisTitle.title_id = titles.title_ID
       FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
      , 1, 1, '')
    FROM titles
    ORDER BY title_ID

    ...which would be turned into an UPDATE ... FROM script.

    UPDATE titles
    SET typelist = ThisTaglist
    FROM (SELECT title_ID, title, STUFF(
            (SELECT ',' + tagname.tag FROM titles thisTitle
               INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
               INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
             WHERE ThisTitle.title_id = titles.title_ID
             ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
             FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
            , 1, 1, '') AS ThisTagList
          FROM titles) f
    INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You'll notice that it isn't quite a round trip because the tags are in a different order, though we've managed to make sure that the primary tag is the first one, as it was originally. So, we've improved the database for the poor book distributors using PUBS. It is not a major deal, but you've got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.

    Read the article

  • Structure of a .NET Assembly

    - by Om Talsania
    An assembly is the smallest unit of deployment in the .NET Framework. When you compile your C# code, it gets converted into a managed module. A managed module is a standard EXE or DLL. This managed module contains the IL (Microsoft Intermediate Language) code and the metadata; apart from this, it also has header information. The following list describes the parts of a managed module.

    PE Header (PE32 for 32-bit, PE32+ for 64-bit): a standard Windows PE header which indicates the type of the file, i.e. whether it is an EXE or a DLL. It also contains the timestamp of the file creation date and time, plus some other fields which might be needed for an unmanaged PE (Portable Executable) but are not important for a managed one. For a managed PE, the next header, i.e. the CLR header, is more important.

    CLR Header: contains the version of the CLR required, some flags, the token of the entry point method (Main), and the size and location of the metadata, resources, strong name, etc.

    Metadata: there can be many metadata tables. They fall into two major categories: 1. tables that describe the types and members defined in your code; 2. tables that describe the types and members referenced by your code.

    IL Code: the MSIL representation of the C# code. At runtime, the CLR converts it into native instructions.

    Read the article

  • Problems with LibreOffice Impress?

    - by kkp
    I am a big fan of Ubuntu. I have recently switched from LaTeX to LibreOffice Impress for presentations because it seems that with Impress you can create slides quickly. I am using LibreOffice 3.4.4 OOO340m1 (Build:402) on Ubuntu 11.04 and saving the files in .ppt format. However, I am facing the following problems, which is really frustrating and is forcing me to use Windows/Mac just for making slides. The font changes from Arial to Times New Roman, specifically for tables, when I reopen an Impress presentation. Tables are stretched when I reopen a presentation; Impress does not let me shrink the stretched tables back, so I have had to make a fresh table. Several other modifications also revert when I reopen a presentation, for example the "transparency" change applied to borders (e.g., for a rectangular shape). Impress also crashes arbitrarily. It would be great if someone could help me resolve these issues. Thanks.

    Read the article

  • MVC 4 Authentication

    - by Aligned
    First: After searching for a while to figure out what's new/different with MVC 4 and forms authentication, this is the best article I've found on the subject: http://weblogs.asp.net/jgalloway/archive/2012/08/29/simplemembership-membership-providers-universal-providers-and-the-new-asp-net-4-5-web-forms-and-asp-net-mvc-4-templates.aspx Some quotes from the article: "The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership" "WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles Create a web admin using the above APIs Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)" "If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:" "Universal Providers do not work with Simple Membership." ~ this post (look for Bob.at.SBS's answer) says Universal Providers are not needed for MVC 4 to work in Azure. I've been trying to figure out Forms Authentication in MVC 4. It is different from the past approach (aspnet_regsql). If you do File -> New Project -> MVC 4 -> Internet Application, you get a really nice template with the controller and model set up for you. However, the tables are different from those created by aspnet_regsql, and the ASP.NET Configuration tool (WSAT) wasn't connecting to the data I had (it was creating an App_Data/aspnet.mdf file, which I didn't see right away). Points of note: The database tables are created in the SimpleMembershipInitializer class when you first run your app, using Entity Framework 5 migration functionality. The tables created are webpages_Membership, webpages_OAuthMembership, webpages_Roles, webpages_UsersInRoles, UserProfile. Web.config settings don't seem to be needed. Scott Hanselman on Universal Providers was also useful, if somewhat outdated. Universal Providers and SimpleMembership are not compatible. http://www.asp.net/web-pages/tutorials/security/16-adding-security-and-membership - walk-through
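
    Since the article notes that SimpleMembership data sits in ordinary tables (webpages_Membership, webpages_Roles, webpages_UsersInRoles, UserProfile), it can be inspected or fixed up with plain SQL during development. A hedged example - the column names below (UserId, UserName, RoleId, RoleName) are the usual SimpleMembership defaults, so verify them against your own database before relying on this:

    -- List each user together with any roles assigned through SimpleMembership
    SELECT up.UserId,
           up.UserName,
           r.RoleName
    FROM UserProfile up
    LEFT JOIN webpages_UsersInRoles uir ON uir.UserId = up.UserId
    LEFT JOIN webpages_Roles r ON r.RoleId = uir.RoleId
    ORDER BY up.UserName;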

    Read the article

  • Cannot Start Nginx Compiled from Source

    - by Jason Alan Kennedy
    I am trying to compile Nginx from source based on the original compiled Nginx server running on my DigitalOcean server ( Ubuntu-14.04 64x ) but with a few extra modules. I can get everything installed smoothly but I cannot get it to start. I am sure the init script is correct because I copied the original one off the currently running Nginx server [ even though I see that Nginx now adds the init script when compiling from source ]. Below is the [ lengthy process ] that I am performing - and sorry, but I wanted to be thorough for those who are in need of the info. Because I am a newbie to Nginx, I am sure I am missing something or just have it all wrong. If you could look over what I have done and see if you spot anything I need to change, I would greatly appreciate it. Thnx! With the original Nginx server still running: I check the current/running Nginx configuration so I can build the new Nginx instance the same but with the added modules: nginx -V # The output: configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module NOTE: The configure arguments below return errors during 'make' so I removed them. I don't know what they are - could this be related to my issue??? 
--with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' Moving on: # So I don't have to sudo every line: sudo bash # Check for updates first thing: apt-get update # Install various prerequisites needed to compile Nginx: apt-get install build-essential libgd2-xpm-dev lsb-base zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libxslt1-dev libxml2 libssl-dev libgeoip-dev tar unzip openssl # Create System users [ if it doesn't exist - but I see its there on DigitalOceans' Droplets all-ready ]: adduser --system --no-create-home --disabled-login --disabled-password --group www-data # Download NGINX wget http://nginx.org/download/nginx-1.7.4.tar.gz tar -xvzf nginx-1.7.4.tar.gz # Then Google PageSpeed: wget https://github.com/pagespeed/ngx_pagespeed/archive/release-1.8.31.4-beta.zip unzip release-1.8.31.4-beta.zip # cd into the PageSpeed Directory cd ngx_pagespeed-release-1.8.31.4-beta/ # and add the PSOL files in there: wget https://dl.google.com/dl/page-speed/psol/1.8.31.4.tar.gz tar -xzvf 1.8.31.4.tar.gz # Get back to the root directory: cd # I add the ngx_cache_purge module and will install the Nginx Helper plugin for WP later: wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.1.zip unzip 2.1.zip # Add the headers-more-nginx-module: wget https://github.com/openresty/headers-more-nginx-module/archive/v0.25.zip unzip v0.25.zip # and the naxsi module for added security: wget https://github.com/nbs-system/naxsi/archive/0.53-2.tar.gz tar -xvzf 0.53-2.tar.gz # cd to the new Nginx directory cd nginx-1.7.4 # Set up the configuration build based on the current running Nginx config args and add my additional modules: ./configure \ --add-module=$HOME/naxsi-0.53-2/naxsi_src \ --prefix=/usr/share/nginx \ --conf-path=/etc/nginx/nginx.conf \ --http-log-path=/var/log/nginx/access.log \ --error-log-path=/var/log/nginx/error.log \ --lock-path=/var/lock/nginx.lock \ --pid-path=/run/nginx.pid \ --http-client-body-temp-path=/var/lib/nginx/body \ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-proxy-temp-path=/var/lib/nginx/proxy \ --http-scgi-temp-path=/var/lib/nginx/scgi \ --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ --user=www-data \ --group=www-data \ --with-debug \ --with-pcre-jit \ --with-ipv6 \ --with-http_ssl_module \ --with-http_stub_status_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_dav_module \ --with-http_geoip_module \ --with-http_gzip_static_module \ --with-http_image_filter_module \ --with-http_spdy_module \ --with-http_sub_module \ --with-http_xslt_module \ --with-mail \ --with-mail_ssl_module \ --add-module=$HOME/ngx_pagespeed-release-1.8.31.4-beta \ --add-module=$HOME/ngx_cache_purge-2.1 \ --add-module=$HOME/headers-more-nginx-module-0.25 [ENTER] Configuration Summary: Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1: using OpenSSL library + using system zlib library nginx path prefix: "/usr/share/nginx" nginx binary file: "/usr/share/nginx/sbin/nginx" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/run/nginx.pid" nginx error log file: "/var/log/nginx/error.log" nginx http access log file: "/var/log/nginx/access.log" nginx http client request body temporary files: "/var/lib/nginx/body" nginx http proxy temporary files: "/var/lib/nginx/proxy" nginx http fastcgi temporary files: "/var/lib/nginx/fastcgi" nginx 
http uwsgi temporary files: "/var/lib/nginx/uwsgi" nginx http scgi temporary files: "/var/lib/nginx/scgi" Next step: I cd to root and I check the old Nginx folder locations and double checked the 'make' output to see that they are the same: whereis nginx #Output: nginx: /usr/sbin/nginx /etc/nginx /usr/share/nginx NOTE: Not sure about the '/usr/sbin/nginx' - Possible issue??? Next I copy the old /etc/nginx/nginx.conf, /etc/nginx/sites-available/default, /etc/nginx/sites-enabled/default, /etc/init.d/nginx to a text file locally for safe keeping to use in the new Nginx server. Then stop the running Nginx server: service nginx stop , verify it's stopped: service --status-all and the output is: [ - ] nginx To verify that there are two Nginx directories, I cd to: cd nginx* and the output is an error indicating there are two nginx folders - Cool Beans! :) Now Install the new Nginx server: cd nginx-1.7.4 make install # INSTALL OUTPUT ######################################## make -f objs/Makefile install make[1]: Entering directory `/home/walkingfish/nginx-1.7.4' test -d '/usr/share/nginx' || mkdir -p '/usr/share/nginx' test -d '/usr/share/nginx/sbin' || mkdir -p '/usr/share/nginx/sbin' test ! -f '/usr/share/nginx/sbin/nginx' || mv '/usr/share/nginx/sbin/nginx' '/usr/share/nginx/sbin/nginx.old' cp objs/nginx '/usr/share/nginx/sbin/nginx' test -d '/etc/nginx' || mkdir -p '/etc/nginx' cp conf/koi-win '/etc/nginx' cp conf/koi-utf '/etc/nginx' cp conf/win-utf '/etc/nginx' test -f '/etc/nginx/mime.types' || cp conf/mime.types '/etc/nginx' cp conf/mime.types '/etc/nginx/mime.types.default' test -f '/etc/nginx/fastcgi_params' || cp conf/fastcgi_params '/etc/nginx' cp conf/fastcgi_params '/etc/nginx/fastcgi_params.default' test -f '/etc/nginx/fastcgi.conf' || cp conf/fastcgi.conf '/etc/nginx' cp conf/fastcgi.conf '/etc/nginx/fastcgi.conf.default' test -f '/etc/nginx/uwsgi_params' || cp conf/uwsgi_params '/etc/nginx' cp conf/uwsgi_params '/etc/nginx/uwsgi_params.default' test -f '/etc/nginx/scgi_params' || cp conf/scgi_params '/etc/nginx' cp conf/scgi_params '/etc/nginx/scgi_params.default' test -f '/etc/nginx/nginx.conf' || cp conf/nginx.conf '/etc/nginx/nginx.conf' cp conf/nginx.conf '/etc/nginx/nginx.conf.default' test -d '/run' || mkdir -p '/run' test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' test -d '/usr/share/nginx/html' || cp -R html '/usr/share/nginx' test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' ######################################################### I copy/create the files that I saved earlier to txt files in sites-available, the config, default and ini files then symlink them to sites-enabled, and so on. And now to start the server: service nginx start And this is where s#!+ hits the fan - Nada. I check to see if Nginx is running with service --status-all and its not. Also with nginx -V and its not installed??? I reboot the system too and still nothing. So I am not sure what is wrong here. The ini was copied over from the old server along with all the other config files after deleting the old files. When I opened the new compiled files, the nginx default data was present so I replaced them with my old original data prior to starting the new server for the first time. Also to be safe, I rm /etc/nginx/sites-enabled/default and symlinked with ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default with no errors and I verified that the data was in the sites-enabled/default file. 
    I don't think the server really/fully installed, judging by the nginx -V result: The program 'nginx' can be found in the following packages: * nginx-core * nginx-extras * nginx-full * nginx-light * nginx-naxsi Try: apt-get install <selected package> Should I apt-get install nginx-1.7.4? Or what package do I use, given that it's a custom package and make install earlier did nothing? If you need to see the conf files I copied over from the old to the custom server, LMK and I'll post them. Again, your help here would be appreciated!

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the document, by the default, slave does not log to its binary log any updates that are received from a master server. Here are my config. on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but slave still logs the some changes to its binary logs, let's see the file size: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and the sample query when reading these binary files with mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "temporary" here means that they are dropped after calculation is done. 
I also tells the slave not to replicate any statement matches the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is an exception table in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Fla gs) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are posting by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries with testing tables on the master, it is copied to the relay logs, not binary logs on slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id, in the relay logs, server id is master's (1) and in the binary log, server id is slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs. As I said above: I've been trying some INSERT/UPDATE queries with testing tables on the master, it is copied to the relay logs, not binary logs and you can guess, IO thread copied it to the relay logs, not binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question must probably change to: why some update queries still logged to the slave binary logs althrough --log-slave-updates is disabled? Where they come from? 
Here are few last: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;
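
    As an aside (not part of the original question), the same inspection can be done from the MySQL prompt without the mysqlbinlog utility, which makes it quick to check which server id a given statement was logged under on the slave. A small example, assuming the binary log file name shown above:

    -- Peek at the first events of one of the slave's binary logs
    SHOW BINLOG EVENTS IN 'mysql-bin.001260' LIMIT 20;

    -- And confirm how the slave is currently configured
    SHOW VARIABLES LIKE 'log_slave_updates';
    SHOW VARIABLES LIKE 'binlog_format';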

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side by side scenarios there is no way around MSI. You can create Msi files with a Visual Studio Setup project which is severely limited or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author Msi files but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI Apis and create your own packaging solution. If I were you I would use either a commercial visual tool when you do easy deployments or use the free Windows Installer Toolset. Once you know the WIX schema you can create well formed wix xml files easily with any editor. Then you can “compile” from the wxs files your Msi package. Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic but after several month of digging into arcane MSI issues I can safely say that there should exist an easier way to install and update files as today. I am not alone with this statement as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues I present you the advice I (and others) found useful and what will happen if you ignore this advice. What is a MSI file? A MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install and MSI controls the how to get your stuff onto or off your machine. Your “stuff” consists usually of files, registry keys, shortcuts and environment variables. Therefore the most important tables are File, Registry, Environment and Shortcut table which define what will be un/installed. The key to master MSI is that every resource (file, registry key ,…) is associated with a MSI component. The actual payload consists of compressed files in the CAB format which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca a free MSI editor provided by MS. There is also another free editor called Super Orca which does support diffs between MSI and it does not lock the MSI files. But since Orca comes with a shell extension I tend to use only Orca because it is so easy to right click on a MSI file and open it with this tool. How Do I Install It? Double click it. This does work for fresh installations as well as major upgrades. Updates need to be installed via the command line via msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus   This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters do force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things did go really wrong and you want to overwrite everything unconditionally use REINSTALLMODE=vamus. How To Enable MSI Logs? You can download a MSI from Microsoft which installs some registry keys to enable full MSI logging. 
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName> Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x which is documented in the msiexec help but I still find this behavior unintuitive. What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every msi table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component table of an example MSI. The Feature table defines basically the feature hierarchy.  To find out what belongs to a feature you need to look at the FeatureComponents table where for each feature the components are listed which will be installed when a feature is installed. The MSI components are defined in the  Component table. This table has as first column the component name and as second column the component id which is a GUID. All resources you want to install belong to a MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. a the File table you see that every file belongs to a component which is true for all other tables which install resources. The component table is the glue between all other tables which contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you did violate a MSI component rule in one or the other way. When you install a feature the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: Then all associated resources will be deleted. That looks like a reasonable thing and it is. What it makes complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer … Rule 16: Follow Component Rules Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system. 
The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories. Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component. Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut. … And these rules do not even talk about component ids, update packages and upgrades which you need to understand as well. Lets suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both do install the same file. What will happen when you uninstall MSI2?   Hm the file should stay there. But the component names are different. Yes and yes. But MSI uses not use the component name as key for the refcount. Instead the ComponentId column of the Component table which contains a GUID is used as identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the Component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one which drops to zero just as expected when we uninstall the last msi. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI does manage components not the resources you did install. The resources associated with a component are then and only then deleted when the refcount of the component reaches zero.   The dependencies between features, components and resources can be described as relations. m,k are numbers >= 1, n can be 0. Inside a MSI the following relations are valid Feature    1  –> n Components Component    1 –> m Features Component      1  –>  k Resources These relations express that one feature can install several components and features can share components between them. Every (meaningful) component will install at least one resource which means that its name (primary key to stay in database speak) does occur in some other table in the Component column as value which installs some resource. Lets make it clear with an example. We want to install with the feature MainFeature some files a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.   Feature Component Registry File Shortcuts MainFeature Comp1 RegistryKey1     MainFeature Comp2   File.txt   MainFeature Comp3   File2.txt Shortcut to File2.txt   It is illegal that the same resource is part of more than one component since this would break the refcount mechanism. Lets illustrate this:            Feature ComponentId Resource Reference Count Feature1 {1000-…} File1.txt 1 Feature2 {2000-….} File1.txt 1 The installation part works well but what happens when you uninstall Feature2? Component {20000…} gets a refcount of zero where MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {10000…} with a refcount of one which means that the file was deleted too early. You just have ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might has noticed that there is more in the Component table. 
Beside its name and GUID it has also an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed MSI checks if the registry key or file referenced by the KeyPath property does exist. When it does not exist it assumes that it was either already uninstalled (can lead to problems during uninstall) or that it is already installed and all is fine. Why is this detail so important? Lets put all files into one component. The KeyPath should be then one of the files of your component to check if it was installed or not. When your installation becomes corrupt because a file was deleted you cannot repair it with the Repair button under Add/Remove Programs because MSI checks the component integrity via the Resource referenced by its KeyPath. As long as you did not delete the KeyPath file MSI thinks all resources with your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out and therefore best practice is to assign for every resource you want to install an extra component. This ensures painless updatability and repairs and you have much less effort to remove specific files during an upgrade. In effect you get this best practice relation Feature 1  –> n Components Component   1  –>  1 Resources MSI Component Rules Rule 1 – One component per resource Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component which does never change between versions as long as the install location is the same. Penalty If you add more than one resources to a component you will break the repair capability of MSI because the KeyPath is used to check if the component needs repair. MSI ComponentId Files MSI 1.0 {1000} File1-5 MSI 2.0 {2000} File2-5 You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. Ok that does work if your upgrade does uninstall the old MSI first. This will cause the refcount of all previously installed components to reach zero which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade by first installing your new MSI and then remove the old one.  If you choose this upgrade path then you will loose File1-5 after your upgrade and not only File1 as intended by your new component design.   Rule 2 – Only add, never remove resources from a component If you did follow rule 1 you will not need Rule 2. You can add in a patch more resources to one component. That is ok. But you can never remove anything from it. There are tricky ways around that but I do not want to encourage bad component design. Penalty Lets assume you have 2 MSI files which install under the same component one file   MSI1 MSI2 {1000} - ComponentId {1000} – ComponentId File1.txt File2.txt   When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 will be left. Why? 
It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows will simply query the MSI that is currently uninstalled for all resources belonging to this component. Since it will find only one file and not two it will only uninstall one file. That is the main reason why you never can remove resources from a component!   Rule 3 Never Remove A Component From an Update MSI. This is the same as if you change the GUID of a component by accident for your new update package. The resulting update package will not contain all components from the previously installed package. Penalty When you remove a component from a feature MSI will set the feature state during update to Advertised and log a warning message into its log file when you did enable MSI logging. SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported! MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported Advertised means that MSI treats all components of this feature as not installed. As a consequence during uninstall nothing will be removed since it is not installed! This is not only bad because uninstall does no longer work but this feature will also not get the required patches. All other features which have followed component versioning rules for update packages will be updated but the one faulty feature will not. This results in very hard to find bugs why an update was only partially successful. Things got better with Windows Installer 4.5 but you cannot rely on that nobody will use an older installer. It is a good idea to add to your update msiexec call MSIENFORCEUPGRADECOMPONENTRULES=1 which will abort the installation if you did violate this rule.

    Read the article

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry standard reference data tables and many different customers who need to select records from these industry standard reference data tables to fill in foreign key information for their customer specific records. Because these industry standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry standard reference files that are shared among all customers.

    I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user entered text explanation of the change request, the user request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing table column definitions.

    I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps request priority field). I would also like to give the global admins a tool that allows them to view all the outstanding data change requests for the users they oversee, sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen that would be populated with the fields the customer users requested for the new/modified industry standard reference table record, along with the customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change and, if accepted, the affected industry standard reference file would be automatically updated with the appropriate fields, and the data change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated.

    However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables yet somehow "point to" the specific reference table and primary key in question for modification & deletion requests, or the specific reference table and associated customer user entered field values in question for record creation requests. Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
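    One way to picture the aggregated request table the poster describes is sketched below in C#. This is only an illustration under the assumption that the target table name, the target primary key and the proposed field values are stored generically; the class, property and enum names are illustrative, not from the original post.

    // Requires: using System;
    public enum ChangeRequestType { Insert, Modify, Delete }
    public enum ChangeRequestStatus { Pending, Declined, Completed }

    public class ReferenceDataChangeRequest
    {
        public int Id { get; set; }
        public int RequestingUserId { get; set; }
        public DateTime RequestedAtUtc { get; set; }
        public ChangeRequestType RequestType { get; set; }
        public string TargetTable { get; set; }          // which reference table the request applies to
        public int? TargetPrimaryKey { get; set; }       // null for insert requests
        public string ProposedValuesPayload { get; set; }// serialized column/value pairs entered in the CRUD-like screen
        public string RequesterExplanation { get; set; }
        public ChangeRequestStatus Status { get; set; }
        public int? ResolvingAdminId { get; set; }
        public DateTime? ResolvedAtUtc { get; set; }
        public string ResolutionExplanation { get; set; }
    }

    Storing the proposed values as a serialized name/value payload is one possible way to cope with differing column definitions across dozens of reference tables while keeping the production tables free of change-request columns; it is an assumption, not a recommendation from the original question.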

    Read the article

  • How to create Query syntax for multiple DataTable for implementing IN operator of Sql Server

    - by Shantanu Gupta
    I have fetched 3-4 tables by executing my stored procedure, and they now reside in my dataset. I have to maintain this dataset for multiple forms and I am not doing any DML operations on it. The dataset contains 4 tables, out of which I have to fetch some records to display data. The data stored in the tables is in the form of one-to-many relationships, i.e. as with transactions: N records per record, and these N records are further mapped to M records of a 3rd table.

    Table 1

    MAP_ID  GUEST_ID  DEPARTMENT_ID  PARENT_ID  PREFERENCE_ID
    19      61        1              1          5
    14      61        1              5          15
    15      61        2              4          10
    18      61        2              13         23
    17      61        2              20         26
    16      61        40             40         41
    20      62        1              5          14
    21      62        1              5          15
    22      62        1              6          16
    24      62        2              3          4
    23      62        2              4          9
    27      62        2              13         23
    25      62        2              20         24
    26      62        2              20         25
    28      63        1              1          5
    29      63        1              1          8
    34      63        1              5          15
    30      63        2              4          10
    33      63        2              4          11
    31      63        2              13         23
    32      63        40             40         41
    35      65        1              NULL       1
    36      65        1              NULL       1
    38      68        2              13         22
    37      68        2              20         25
    39      68        2              23         27
    40      92        1              NULL       1

    Table 2

    Department_ID  Department_Name  Parent_Id         Parent_Name
    1              Food             1, 5, 6           Food, North Indian, South Indian
    2              Lodging          3, 4, 13, 20, 23  Room, Floor, Non Air Conditioned, With Balcony, Without Balcony
    40             New              40                SubNew

    Table 3

    Parent_Id  Parent_Name          Preference_ID  Preference_Name
    NULL       NULL                 NULL           NULL
    1          Food                 5, 8           North Indian, Italian
    3          Room                 4              Floor
    4          Floor                9, 10, 11      First, Second, Third
    5          North Indian         14, 15         X, Y
    6          South Indian         16             Dosa
    13         Non Air Conditioned  22, 23         With Balcony, Without Balcony
    20         With Balcony         24, 25, 26     Mountain View, Ocean View, Garden View
    23         Without Balcony      27             Mountain View
    40         New                  41             SubNew

    I have these 3 tables that are related in some fashion like this. Table 1 is the master for the other 2 tables, i.e. Table 2 and Table 3. I need to query them as:

    SELECT Department_Id, Department_Name, Parent_Name
    FROM Table2
    WHERE Department_Id IN (SELECT Department_Id FROM Table1 WHERE guest_id = 65)

    SELECT Parent_Id, Parent_Name, Preference_Name
    FROM Table3
    WHERE Parent_Id IN (SELECT parent_id FROM Table1 WHERE guest_id = 65)

    Now I need to use these queries on DataTables, so I am using query syntax for this and have reached this point:

    var dept_list = from dept in DtMapGuestDepartment.AsEnumerable()
                    where dept.Field<long>("PK_GUEST_ID") == 63L
                    select dept;

    This should give me the list of all departments that have guest id = 63. Now I want to select all department_name and parent_name values from Table 2 where guest_id = 63, i.e. the departments that I fetched above. The same applies to Table 3. Please suggest how to do this. Thanks for keeping up the patience to read my question.
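    For readers looking for the LINQ-to-DataSet shape of those two IN sub-queries, here is a rough sketch. It is not the poster's code: the DataTable variable names (DtMapGuestDepartment, DtDepartment, DtParentPreference), the column types (long) and the exact column names are assumptions based on the tables shown above and may need adjusting (the poster's own snippet uses PK_GUEST_ID rather than GUEST_ID, for example).

    // Assumes: using System.Data; using System.Linq;
    // and a project reference to System.Data.DataSetExtensions (LINQ to DataSet).
    long guestId = 65;

    // All mapping rows for this guest (Table 1)
    var guestRows = DtMapGuestDepartment.AsEnumerable()
                                        .Where(m => m.Field<long>("GUEST_ID") == guestId)
                                        .ToList();

    var departmentIds = guestRows.Select(m => m.Field<long>("DEPARTMENT_ID")).Distinct().ToList();
    var parentIds     = guestRows.Select(m => m.Field<long?>("PARENT_ID")).Distinct().ToList();

    // SELECT Department_Id, Department_Name, Parent_Name FROM Table2 WHERE Department_Id IN (...)
    var departments = from d in DtDepartment.AsEnumerable()
                      where departmentIds.Contains(d.Field<long>("Department_ID"))
                      select new
                      {
                          Id         = d.Field<long>("Department_ID"),
                          Name       = d.Field<string>("Department_Name"),
                          ParentName = d.Field<string>("Parent_Name")
                      };

    // SELECT Parent_Id, Parent_Name, Preference_Name FROM Table3 WHERE Parent_Id IN (...)
    var preferences = from p in DtParentPreference.AsEnumerable()
                      where parentIds.Contains(p.Field<long?>("Parent_Id"))
                      select new
                      {
                          Id             = p.Field<long?>("Parent_Id"),
                          Name           = p.Field<string>("Parent_Name"),
                          PreferenceName = p.Field<string>("Preference_Name")
                      };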

    Read the article

  • Rails / ActiveRecord Modeling Help

    - by JM
    I’m trying to model a relationship in ActiveRecord and I think it’s a little beyond my skill level. Here’s the background. This is a horse racing project and I’m trying to model a horse’s Connections over time. Connections are defined as the horse’s current Owner, Trainer and Jockey. Over time, a horse’s connections can change for a lot of different reasons:

    - The owner sells the horse in a private sale
    - The horse is claimed (purchased in a public sale)
    - The trainer switches jockeys
    - The owner switches trainers

    In my first attempt at modeling this, I created the following tables: Horses, Owners, Trainers, Jockeys and Connections. Essentially, the Connections table was the has-many-through join table and was structured as follows:

    Connections Table 1

    Id  Horse_id  Owner_id  Trainer_id  Jockey_id  Status_Code  Status_Date  Change_Code

    The Horse, Owner, Trainer and Jockey foreign keys are self explanatory. The status code is 1 or 0 (1 active, 0 inactive) and the status date is the date the status changed. Change_code is an integer or string value that represents the reason for the change (private sale, claim, jockey change, etc). The key benefit of this approach is that the Connection is represented as one record in the connections table. The downside is that I have to have a table for Owner (1), Trainer (2) and Jockey (3) when one table could do.

    In my second attempt at modeling this I created the following tables: Horses, Connections, Entities. The Entities table has the following structure:

    Entities Table

    id  First_name  Last_name  Role

    where Role represents whether the entity is an Owner, Trainer or Jockey. Under this approach, my Connections table has the following structure:

    Connections Table 2

    id  Horse_id  Entity_id  Role  Status_Code  Status_Date  Change_Code
    1   1         1          1     1            1/1/2010
    2   1         4          2     1            1/1/2010
    3   1         10         3     1            1/1/2010

    This approach has the benefit of eliminating two tables, but on the other hand the Connection is now comprised of three different records as opposed to one in the first approach. What I believe I’m looking for is an approach that allows me to capture the Connection in one record, but also uses an Entities table with roles instead of the Owner, Trainer and Jockey tables. I’m new to ActiveRecord and Rails so any and all input would be greatly appreciated. Perhaps there are other ways that would even be better. Thanks!

    Read the article

  • ASP.NET Repeater and DataBinder.Eval

    - by Fernando
    I've got an <asp:Repeater> in my webpage, which is bound to a programmatically created dataset. The purpose of this repeater is to create an index from A-Z, which, when clicked, refreshes the information on the page. The repeater has a link button like so:

    <asp:LinkButton ID="indexLetter"
        Text='<%# DataBinder.Eval(Container.DataItem, "letter") %>'
        runat="server"
        CssClass='<%# DataBinder.Eval(Container.DataItem, "cssclass") %>'
        Enabled='<%# DataBinder.Eval(Container.DataItem, "enabled") %>'></asp:LinkButton>

    The dataset is created the following way:

    protected DataSet getIndex(String index)
    {
        DataSet ds = new DataSet();
        ds.Tables.Add("index");
        ds.Tables["index"].Columns.Add("letter");
        ds.Tables["index"].Columns.Add("cssclass");
        ds.Tables["index"].Columns.Add("enabled");

        char alphaStart = Char.Parse("A");
        char alphaEnd = Char.Parse("Z");

        for (char i = alphaStart; i <= alphaEnd; i++)
        {
            String cssclass = "", enabled = "true";
            if (index == i.ToString())
            {
                cssclass = "selected";
                enabled = "false";
            }
            ds.Tables["index"].Rows.Add(new Object[3] { i.ToString(), cssclass, enabled });
        }
        return ds;
    }

    However, when I run the page, a "Specified cast is not valid" exception is thrown at Text='<%# DataBinder.Eval(Container.DataItem, "letter") %>'. I have no idea why. I have tried manually casting to String with (String), I've tried a ToString() method, I've tried everything. Also, if in the debugger I add a watch for DataBinder.Eval(Container.DataItem, "letter"), the value it returns is "A", which according to me should be fine for the Text property.

    EDIT: Here is the exception:

    System.InvalidCastException was unhandled by user code
      Message="Specified cast is not valid."
      Source="App_Web_cmu9mtyc"
      StackTrace:
        at ASP.savecondition_aspx._DataBinding_control7(Object sender, EventArgs e) in e:\Documents and Settings\Fernando\My Documents\Visual Studio 2008\Projects\mediTrack\mediTrack\saveCondition.aspx:line 45
        at System.Web.UI.Control.OnDataBinding(EventArgs e)
        at System.Web.UI.Control.DataBind(Boolean raiseOnDataBinding)
        at System.Web.UI.Control.DataBind()
        at System.Web.UI.Control.DataBindChildren()
      InnerException:

    Any advice will be greatly appreciated, thank you.

    EDIT 2: Fixed! The problem was not in the Text or CSS tags, but in the Enabled tag; I had to cast it to a Boolean value. The problem was that the exception was signaled at the Text tag, I don't know why.
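    For readers hitting the same error, the fix described in the second edit can be sketched two ways; this is an illustration, not the poster's exact code. Either convert the bound value explicitly in the markup, e.g. Enabled='<%# Convert.ToBoolean(DataBinder.Eval(Container.DataItem, "enabled")) %>', or give the DataTable a typed Boolean column so the cast generated for the Enabled property succeeds:

    // A sketch of the typed-column variant: "enabled" is declared as bool, so the
    // Enabled='<%# DataBinder.Eval(Container.DataItem, "enabled") %>' binding no longer
    // receives the string "true"/"false" that caused the InvalidCastException.
    protected DataSet getIndex(String index)
    {
        DataSet ds = new DataSet();
        DataTable t = ds.Tables.Add("index");
        t.Columns.Add("letter", typeof(string));
        t.Columns.Add("cssclass", typeof(string));
        t.Columns.Add("enabled", typeof(bool));

        for (char i = 'A'; i <= 'Z'; i++)
        {
            bool selected = (index == i.ToString());
            // the selected letter is styled and disabled, all others stay enabled
            t.Rows.Add(i.ToString(), selected ? "selected" : "", !selected);
        }
        return ds;
    }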

    Read the article

  • Entity Framework - Single EMDX Mapping Multiple Database

    - by michaelalisonalviar
    Because of my recent craze on Entity Framework thanks to Sir Humprey, I have continuously searched the Internet for tutorials on how to apply it to our current system. So I've come to learn that with EF I can eliminate the numerous methods/functions for CRUD operations and my overused assigning of connection strings, Data Adapters or Data Readers, as Entity Framework will map my desired database, do its magic to create entities for each table I want (using the EF Power Tool), and provide all the methods/functions for my CRUD operations.

    But as I began applying it to a new project I was assigned to, I realized our current server is designed to keep similar entities in different databases. For example, our lookup tables are stored in LookupDb, accounting-related tables are in AccountingDb, and sales-related tables in SalesDb. My dilemma is that I have to use an existing table from LookupDB as a look-up for my new table. Then I found Miss Rachel's blog (here) - thank you, Miss Rachel! - which enables me to let EF think that my TableLookup1 is in the AccountingDB using the following steps. I'm on VS 2010, using C#, Entity Framework 5, and SQL Server 2008 as our DB server.

    Step 1: Creating a SQL synonym. If you want a more detailed discussion on synonyms, this is what I read -> (link here). To put it simply, a synonym enabled me to simplify my query for the look-up table when I'm using the AccountingDB from

    SELECT [columns] FROM LookupDB.dbo.TableLookup1

    to

    SELECT [columns] FROM TableLookup1

    Syntax: CREATE SYNONYM TableLookup1 (1) FOR LookupDB.dbo.TableLookup1 (2)
    1. What you want to call the table on your other DB
    2. DataBaseName.schema.TableName

    Step 2: We will now follow Miss Rachel's steps. You can either visit the link on the original topic I posted earlier or just follow the steps I made.
    1. I created a Visual Basic solution that will contain the 4 projects needed to complete the merging.
    2. The first project will contain the edmx file pointing to the AccountingDB.
    3. The second project will contain the edmx file pointing to the LookupDB.
    4. The third project will be our repository for the merged edmx file. Create an edmx file pointing to AccountingDB, as this is the database that we created the synonym on. Reminder: aside from using the same name for the entities, please make sure that you have the same Model Namespace for all your entities.
    5. The fourth project will contain the beautiful EDMX merger that Miss Rachel created, which will free you from hard-coding the merge / recoding the edmx file of the third project every time a change is made to either of the first two projects' edmx files.
    6. Run the solution, but make sure that in the solution's properties "Single startup project" is selected and the project containing the EDMX merger is chosen.
    7. After running the solution, double click on the EDMX file of the 3rd project and set Lazy Loading Enabled = False. This will let you use the tables/entities that you see in that EDMX file.
    8. Feel free to do your CRUD operations.

    I don't know if EF 5 already has a feature to support synonyms, as I am still a newbie on that aspect, but I have seen a link where there are suggestions for Entity Framework upgrades, and one is "Support for multiple databases". So that's it! Thanks for reading!

    Read the article

  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like "Watch the pennies and the pounds/dollars will take care of themselves", meaning that if you pay attention to the small things then the larger things are going to fare well too. I am lucky enough to be a Friend of Red Gate and once in a while I get told about new features in their tools and have a test copy of the software to trial. I got one of those emails a week or so ago and I have been exploring the SQL Prompt 6 EAP since then.

    One really useful feature of long standing in SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example I can type "ssf" and then press the tab key and the text is expanded to SELECT * FROM. There are lots of these combinations and it is possible to create your own really easily. To create your own you use the Snippet Manager interface to define the shortcut letters and the code that you want to have put in their place. Let's look at an example.

    Say I am writing a blog about something and want to have the demo code create a temporary table. It might look like this: The first time you run the code everything is fine, a lovely set of dates fills the results grid, but run it a second time and this happens. Yep, we didn't destroy the temporary table, so the CREATE statement fails when it finds the table already exists. No matter, I have a snippet created that takes care of this.

    Nothing too technical here, but you will see that in the Code section there is $CURSOR$. This isn't a TSQL keyword but a marker for SQL Prompt to place the cursor in that position when the Code is pasted into the SSMS Editor. I just place my cursor above the CREATE statement and type "ifobj" – the shortcut for my code to DROP the temporary table – which has been defined in the Snippet Manager as below. This means I am right away ready to type the name of the offending table. Pretty neat, and it's been very useful in saving me lots of time over many years.

    The news for SQL Prompt 6 is that Red Gate have added a new snippet command of $PASTE$. Let's alter our snippet to the following and try it out. Once again, we will type "ifobj" in the SSMS Editor, but first of all highlight the name of the table #TestTable and copy it to your clipboard. Now type "ifobj" and press Tab… Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted TSQL. This means it is even easier to write TSQL quickly and consistently. Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year and help take the hard work out of writing neat, accurate TSQL. If you want to try out SQL Prompt all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation. Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand. Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page. Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools. Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables.  These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data. Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions. Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems. 
    So if you have sixty seconds to spare, tell us on the survey!

    Read the article

  • datagrid filter in c# using sql server

    - by malou17
    How do I filter data in a datagrid? For example, if you select "student number" in the combo box and then input 1001 in the text field, all records for 1001 should appear in the datagrid. We are using SQL Server.

    private void button2_Click(object sender, EventArgs e)
    {
        if (cbofilter.SelectedIndex == 0)
        {
            string sql;
            SqlConnection conn = new SqlConnection();
            conn.ConnectionString = "Server= " + Environment.MachineName.ToString() + @"\; Initial Catalog=TEST;Integrated Security = true";
            SqlDataAdapter da = new SqlDataAdapter();
            DataSet ds1 = new DataSet();
            ds1 = DBConn.getStudentDetails("sp_RetrieveSTUDNO");
            sql = "Select * from Test where STUDNO like '" + txtvalue.Text + "'";
            SqlCommand cmd = new SqlCommand(sql, conn);
            cmd.CommandType = CommandType.Text;
            da.SelectCommand = cmd;
            da.Fill(ds1);
            dbgStudentDetails.DataSource = ds1;
            dbgStudentDetails.DataMember = ds1.Tables[0].TableName;
            dbgStudentDetails.Refresh();
        }
        else if (cbofilter.SelectedIndex == 1)
        {
            //string sql;
            //SqlConnection conn = new SqlConnection();
            //conn.ConnectionString = "Server= " + Environment.MachineName.ToString() + @"\; Initial Catalog=TEST;Integrated Security = true";
            //SqlDataAdapter da = new SqlDataAdapter();
            //DataSet ds1 = new DataSet();
            //ds1 = DBConn.getStudentDetails("sp_RetrieveSTUDNO");
            //sql = "Select * from Test where Name like '" + txtvalue.Text + "'";
            //SqlCommand cmd = new SqlCommand(sql,conn);
            //cmd.CommandType = CommandType.Text;
            //da.SelectCommand = cmd;
            //da.Fill(ds1);
            //dbgStudentDetails.DataSource = ds1;
            //dbgStudentDetails.DataMember = ds1.Tables[0].TableName;
            //ds.Tables[0].DefaultView.RowFilter = "Studno = + txtvalue.text + ";
            dbgStudentDetails.DataSource = ds.Tables[0];
            dbgStudentDetails.Refresh();
        }
    }
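    As a side note for readers, the same filter can be written with a parameterised command, so the WHERE column simply follows the combo box selection and the text box value is never concatenated into the SQL string. This is only a sketch, not the poster's code; the connectionString variable and the column names (STUDNO, Name) are assumptions taken from the snippet above.

    // Assumes: using System.Data; using System.Data.SqlClient;
    string column = (cbofilter.SelectedIndex == 0) ? "STUDNO" : "Name";   // column chosen by the combo box, not by user-typed text
    string sql = "SELECT * FROM Test WHERE " + column + " LIKE @value";

    using (SqlConnection conn = new SqlConnection(connectionString))      // connectionString: built as in the snippet above
    using (SqlDataAdapter da = new SqlDataAdapter(sql, conn))
    {
        // the typed value goes in as a parameter; the trailing % gives a "starts with" match
        da.SelectCommand.Parameters.AddWithValue("@value", txtvalue.Text + "%");

        DataSet ds1 = new DataSet();
        da.Fill(ds1);

        dbgStudentDetails.DataSource = ds1.Tables[0];
        dbgStudentDetails.Refresh();
    }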

    Read the article

  • [Linq to SQL] Multiple foreign keys to the same table

    - by cdonner
    I have a reference table with all sorts of controlled value lookup data for gender, address type, contact type, etc. Many tables have multiple foreign keys to this reference table I also have many-to-many association tables that have two foreign keys to the same table. Unfortunately, when these tables are pulled into a Linq model and the DBML is generated, SQLMetal does not look at the names of the foreign key columns, or the names of the constraints, but only at the target table. So I end up with members called Reference1, Reference2, ... not very maintenance-friendly. Example: <Association Name="tb_reference_tb_account" Member="tb_reference" <====== ThisKey="shipping_preference_type_id" OtherKey="id" Type="tb_reference" IsForeignKey="true" /> <Association Name="tb_reference_tb_account1" Member="tb_reference1" <====== ThisKey="status_type_id" OtherKey="id" Type="tb_reference" IsForeignKey="true" /> I can go into the DBML and manually change the member names, of course, but this would mean I can no longer round-trip my database schema. This is not an option at the current stage of the model, which is still evolving. Splitting the reference table into n individual tables is also not desirable. I can probably write a script that runs against the XML after each generation and replaces the member name with something derived from ThisKey (since I adhere to a naming convention for these types of keys). Has anybody found a better solution to this problem?
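    A rough sketch of that post-generation script idea, not working production code: it loads the generated DBML, finds the foreign-key associations and derives a friendlier Member name from the ThisKey column, on the assumption that the key columns follow a "_id" suffix convention. The file name and the renaming rule are illustrative.

    // Assumes: using System; using System.Xml.Linq;
    XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";   // default DBML namespace
    XDocument dbml = XDocument.Load("MyModel.dbml");                      // path is illustrative

    foreach (XElement assoc in dbml.Descendants(ns + "Association"))
    {
        string thisKey = (string)assoc.Attribute("ThisKey");
        bool isForeignKey = (string)assoc.Attribute("IsForeignKey") == "true";

        if (isForeignKey && thisKey != null && thisKey.EndsWith("_id"))
        {
            // e.g. ThisKey="status_type_id" becomes Member="status_type"
            assoc.SetAttributeValue("Member", thisKey.Substring(0, thisKey.Length - "_id".Length));
        }
    }

    dbml.Save("MyModel.dbml");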

    Read the article

  • Schema objects not visible in SQL Server Management Studio 2008

    - by Germ
    I'm experiencing a weird problem with a SQL login. When I connect to the server in Microsoft SQL Server Management Studio (2008) using this account, I cannot see any of the tables, stored procedures etc. that this account should have access to on a particular database. When I connect to the same server within Visual Studio (2008) with the same account, everything is there. When I connect with the same account on a Virtual Machine, everything is there. I've also had a co-worker connect to the server using the same login and he's able to view everything as well. I use Microsoft SQL Server Management Studio all day connecting to different servers and databases and I've never experienced this problem. Does anyone have any suggestions on how I can diagnose this problem? I've checked to make sure I don't have any table filters etc. There are several databases on this server and I'm able to see the correct tables that this account has access to in the other databases just fine. Running this query lists the tables I'm expecting to see. SELECT * FROM INFORMATION_SCHEMA.TABLES

    Read the article

  • Rails Fixtures Don't Seem to Support Transitive Associations

    - by Rick Wayne
    So I've got 3 models in my app, connected by HABTM relations:

    class Member < ActiveRecord::Base
      has_and_belongs_to_many :groups
    end

    class Group < ActiveRecord::Base
      has_and_belongs_to_many :software_instances
    end

    class SoftwareInstance < ActiveRecord::Base
    end

    And I have the appropriate join tables, and use foxy fixtures to load them:

    -- members.yml --
    rick:
      name: Rick
      groups: cals

    -- groups.yml --
    cals:
      name: CALS
      software_instances: delphi

    -- software_instances.yml --
    delphi:
      name: No, this is not a joke, someone in our group is still maintaining Delphi apps...

    And I do a rake db:fixtures:load, and examine the tables. Yep, the join tables groups_members and groups_software_instances have appropriate IDs in, big numbers generated by hashing the fixture names. Yay! And if I run script/console, I can look at rick's groups and see that cals is in there, or cals's software_instances and delphi shows up. (Likewise delphi knows that it's in the group cals.) Yay!

    But if I look at rick.groups[0].software_instances...nada. []. And, oddly enough, rick.groups[0].id is something like 30 or 10, not 398398439. I've tried swapping around where the association is defined in the fixtures, e.g.

    -- members.yml --
    rick:
      name: rick

    -- groups.yml --
    cals:
      name: CALS
      members: rick

    No error, but same result. (Later: Confirmed, same behavior on MySQL. In fact I get the same thing if I take the foxification out and use ERB to directly create the join tables, so:

    -- license_groups_software_instances --
    cals_delphi:
      group_id: <%= Fixtures.identify 'cals' %>
      software_instance_id: <%= Fixtures.identify 'delphi' %>

    Before I chuck it all, reject foxy fixtures and all who sail in her and just hand-code IDs... anybody got a suggestion as to why this is happening? I'm currently working in Rails 2.3.2 with sqlite3, and this occurs on both OS X and Ubuntu (and I think with MySQL too). TIA, my compadres.

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >