Search Results

Search found 3729 results on 150 pages for 'sqlserver reporting servi'.

Page 90/150 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • How do you tell if your migrations are up to date with migratordotnet?

    - by Lance Fisher
    I'm using migratordotnet to manage my database migrations. I'm running them on application setup like this, but I would also like to check on application startup that the migrations are up to date, and provide the option to migrate to latest. How do I tell if there are available migrations that need to be applied? I see that I can get the migrations that were applied like this: var asm = Assembly.GetAssembly(typeof(Migration_0001)); var migrator = new Migrator.Migrator("SqlServer", setupInfo.DatabaseConnectionString, asm); var applied = migrator.AppliedMigrations; I'd like to do something like this: var available = migrator.AvailableMigrations; //this property does not exist.
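    migratordotnet may not expose an AvailableMigrations property, but the same check can be derived from the assembly itself: count the migration classes it contains and compare that with what AppliedMigrations reports. A sketch under the assumption that migrations are marked with Migrator.Framework's [Migration] attribute (verify the attribute type against your version):

        // Sketch: treat "up to date" as "every [Migration]-decorated type in the assembly
        // has been applied". Migrator.Framework.MigrationAttribute is my assumption --
        // check which attribute your migration classes actually carry.
        using System.Linq;
        using System.Reflection;

        public static class MigrationStatus
        {
            public static bool IsUpToDate(string connectionString)
            {
                Assembly asm = Assembly.GetAssembly(typeof(Migration_0001));
                var migrator = new Migrator.Migrator("SqlServer", connectionString, asm);

                int defined = asm.GetTypes().Count(t =>
                    t.GetCustomAttributes(typeof(Migrator.Framework.MigrationAttribute), false).Length > 0);

                return migrator.AppliedMigrations.Count >= defined;
            }
        }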

    Read the article

  • updateBinaryStream on ResultSet with FileStream

    - by Lieven Cardoen
    If I try to update a FileStream column I get the following error: com.microsoft.sqlserver.jdbc.SQLServerException: The result set is not updatable. Code: System.out.print("Now, let's update the filestream data."); FileInputStream iStream = new FileInputStream("c:\\testFile.mp3"); rs.updateBinaryStream(2, iStream, -1); rs.updateRow(); iStream.close(); Why is this? Table in Sql Server 2008: CREATE TABLE [BinaryAssets].[BinaryAssetFiles]( [BinaryAssetFileId] [int] IDENTITY(1,1) NOT NULL, [FileStreamId] [uniqueidentifier] ROWGUIDCOL NOT NULL, [Blob] [varbinary](max) FILESTREAM NULL, [HashCode] [nvarchar](100) NOT NULL, [Size] [int] NOT NULL, [BinaryAssetExtensionId] [int] NOT NULL, Query used in Java: String strCmd = "select BinaryAssetFileId, Blob from BinaryAssets.BinaryAssetFiles where BinaryAssetFileId = 1"; stmt = con.createStatement(); rs = stmt.executeQuery(strCmd);

    Read the article

  • Compare two String with MySQL

    - by Scorpi0
    Hi, I want to compare two strings in a SQL request so I can retrieve the best match; the aim is to propose the best possible zip code to an operator. For example, in France we have integer zip codes, so I made an easy request: SELECT * FROM myTable ORDER BY abs(zip_code - 75000) This request returns the data closest to Paris first. Unfortunately, the United Kingdom has postcodes like AB421RS, so my request can't handle them. I see SQL Server has a 'Difference' function: http://www.java2s.com/Code/SQLServer/String-Functions/DIFFERENCEworkoutwhenonestringsoundssimilartoanotherstring.htm But I use MySQL. Does anyone have a good idea to do the trick in one simple request? PS: the Levenshtein distance will not do it, as I really want to compare strings as if they were numbers. ABCDEF has to be closer to AWXYZ than to ZBCDEF.

    Read the article

  • Connect rails application to MsSQL 2005 from Windows

    - by Enrico Carlesso
    Hi guys. I (sadly) have to deploy a rails application on Windows XP which has to connect to Microsoft SQL Server 2005. Searching the web turns up a lot of hits for connecting from Linux to MsSQL, but I cannot find out how to do it from Windows. Basically I followed these steps: Install dbi gem Install activerecord-sql-server-adapter gem My database.yml now looks like this: development: adapter: sqlserver mode: odbc dsn: test_dj host: HOSTNAME\SQLEXPRESS database: test_dj username: guest password: guest But I'm unable to connect. When I run rake db:migrate I get IM002 (0) [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified I'm not a Windows user, so I don't really understand the meaning of the dsn element and the like. Does someone have an idea how to solve this? Thanks in advance

    Read the article

  • How to execute T-SQL for several databases whose names are stored in a table

    - by emzero
    Hey guys, so here's the deal. I have several databases (SqlServer 2005) on the same server with the same schema but different data. I have one extra database which has one table storing the names of the mentioned databases. So what I need to do is iterate over those database names, actually "switch" to each one (use [dbname]) and execute a T-SQL script. Am I clear? Let me give you an example (simplified from the real one): CREATE TABLE DatabaseNames ( Id int, Name varchar(50) ) INSERT INTO DatabaseNames SELECT 'DatabaseA' INSERT INTO DatabaseNames SELECT 'DatabaseB' INSERT INTO DatabaseNames SELECT 'DatabaseC' Assume that DatabaseA, DatabaseB and DatabaseC are real existing databases. So let's say I need to create a new SP on those DBs. I need some script that loops over those databases and executes the T-SQL script I specify (maybe stored in a varchar variable or wherever). Any ideas? Thanks!
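    If it has to stay in T-SQL, a cursor over DatabaseNames combined with EXEC('USE [' + @name + ']; ' + @script) or a database-qualified call to [dbname].dbo.sp_executesql does the context switch. Since the question leaves the door open, here is the same loop driven from client code instead; the connection string, table name and script text are placeholders:

        // Sketch: read the database names from the control database, then run the same
        // script against each one via ChangeDatabase (the client-side USE [db]).
        using System.Collections.Generic;
        using System.Data.SqlClient;

        public static class MultiDbScriptRunner
        {
            public static void Run()
            {
                const string connStr = "Server=.;Database=ControlDb;Integrated Security=true;";
                const string script = "CREATE PROCEDURE dbo.MyNewProc AS SELECT 1;";

                var names = new List<string>();
                using (var con = new SqlConnection(connStr))
                {
                    con.Open();
                    using (var cmd = new SqlCommand("SELECT Name FROM dbo.DatabaseNames", con))
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            names.Add(reader.GetString(0));

                    foreach (string db in names)
                    {
                        con.ChangeDatabase(db);
                        using (var cmd = new SqlCommand(script, con))
                            cmd.ExecuteNonQuery();
                    }
                }
            }
        }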

    Read the article

  • How to implement paging in NHibernate with a left join query

    - by Gabe Moothart
    I have an NHibernate query that looks like this: var query = Session.CreateQuery(@" select o from Order o left join o.Products p where (o.CompanyId = :companyId) AND (p.Status = :processing) order by o.UpdatedOn desc") .SetParameter("companyId", companyId) .SetParameter("processing", Status.Processing) .SetResultTransformer(Transformers.DistinctRootEntity); var data = query.List<Order>(); I want to implement paging for this query, so I only return x rows instead of the entire result set. I know about SetMaxResults() and SetFirstResult(), but because of the left join and DistinctRootEntity, that could return fewer than x Orders. I tried "select distinct o" as well, but the SQL that is generated for that (using the SqlServer 2008 dialect) seems to ignore the distinct for pages after the first one (I think this is the problem). What is the best way to accomplish this?
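    One workaround (a sketch, not the only option): page over the distinct Order identifiers first, then load those Orders with the join, so SetFirstResult/SetMaxResults count Orders rather than joined rows. Property names follow the question; the Id property and its int type are assumptions:

        // Page of Order ids. UpdatedOn is selected too because SQL Server requires
        // ORDER BY columns to appear in a DISTINCT select.
        var idRows = Session.CreateQuery(@"
                select distinct o.Id, o.UpdatedOn
                from Order o left join o.Products p
                where o.CompanyId = :companyId and p.Status = :processing
                order by o.UpdatedOn desc")
            .SetParameter("companyId", companyId)
            .SetParameter("processing", Status.Processing)
            .SetFirstResult(pageIndex * pageSize)
            .SetMaxResults(pageSize)
            .List<object[]>();

        var ids = new List<int>();
        foreach (object[] row in idRows)
            ids.Add((int)row[0]);

        // Load that page of Orders by id.
        var orders = Session.CreateQuery(
                "select o from Order o where o.Id in (:ids) order by o.UpdatedOn desc")
            .SetParameterList("ids", ids)
            .List<Order>();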

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart?
    In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts?
    Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.

    Extension Attributes: The Challenge
    One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem of the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart?
    There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.

    The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP Data Mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements.

    So, in broad terms the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings to be used by SSIS to point to new databases and file locations.

    Step 2: Create relational databases
    - Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube.

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables

    Step 5: Load extension tables & attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy & Process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Backup and drop databases
    - Drop the staging database as it is no longer required
    - Backup the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete
    - Start processing the next order, ad infinitum.

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
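    As a concrete illustration of Step 2 above, creating an order-specific database with dynamic SQL and switching it to SIMPLE recovery can be as small as this (a sketch: the server connection string and naming are placeholders, and in the real system the names would come from the metadata database):

        using System.Data.SqlClient;

        public static class DataMartBuilder
        {
            public static void CreateOrderDatabase(string serverConnStr, string dbName)
            {
                using (var con = new SqlConnection(serverConnStr))
                {
                    con.Open();
                    // dbName is generated internally from the metadata database, not user input.
                    Execute(con, "CREATE DATABASE [" + dbName + "]");
                    Execute(con, "ALTER DATABASE [" + dbName + "] SET RECOVERY SIMPLE");
                }
            }

            private static void Execute(SqlConnection con, string sql)
            {
                using (var cmd = new SqlCommand(sql, con))
                    cmd.ExecuteNonQuery();
            }
        }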

    Read the article

  • Server database -> client database update based on version

    - by user296191
    Hi, What is the recommended method of collecting items in a server database, versioning the database, then deploying only the version differences to a client? Should it be a field in the table (i.e. Version: 3.3.9876) against each record? Should it be a DateTime (server based) in each record? And what's the best way to deploy just the changes to a client with an older version of the database? Is it a DUMP to a file with a bulk import of some description? Open to comments and suggestions. The database can be anything (Firebird, MySQL, SqlServer, SQLite)... Any info greatly appreciated.
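    One common pattern for the "what to stamp on each row" question is a per-row last-modified marker on the server plus a single stored high-water mark on the client; deployment then ships only rows newer than that mark. The sketch below uses invented table/column names and plain ADO.NET purely for illustration; the same idea works on any of the engines listed:

        using System;
        using System.Data.SqlClient;

        public static class ChangePuller
        {
            // Returns rows changed since the client's stored high-water mark.
            public static SqlDataReader PullChanges(SqlConnection openServerCon, DateTime clientHighWaterMark)
            {
                var cmd = new SqlCommand(
                    "SELECT Id, Name, Price, ModifiedUtc FROM dbo.Items " +
                    "WHERE ModifiedUtc > @since ORDER BY ModifiedUtc", openServerCon);
                cmd.Parameters.AddWithValue("@since", clientHighWaterMark);
                return cmd.ExecuteReader();
                // The client upserts each row locally, then stores the highest ModifiedUtc
                // it saw as its new high-water mark.
            }
        }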

    Read the article

  • Changing the Column order/adding Newcolumn for existing Table in SQLServer2008

    - by rmdussa
    Hi, I have a situation where I need to change the order of the columns / add new columns to an existing table in SQL Server 2008. It is not letting me do this without dropping and recreating the table. But this is a production system and the table contains data. I can take a backup of the data, drop the existing table, change the order / add new columns, recreate it, and insert the backup data into the new table. Is there any better way to do this without dropping and recreating? I think SQL Server 2005 allowed changes to an existing table structure without dropping and recreating. Thanks
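    Two points worth noting before any workaround: ALTER TABLE ... ADD appends a new column without dropping or recreating the table, and column order has no functional meaning, so a rebuild is only needed if the physical order truly matters. The "not allowing me" behaviour usually comes from the SSMS 2008 designer option "Prevent saving changes that require table re-creation" (Tools > Options > Designers), which blocks reordering because the designer implements it as drop-and-recreate behind the scenes. A minimal sketch of adding a column in place (names and connection string are placeholders):

        // Sketch: add the new column in place; no drop/recreate and no data movement needed.
        using (var con = new SqlConnection(@"Server=.;Database=Production;Integrated Security=true;"))
        using (var cmd = new SqlCommand("ALTER TABLE dbo.MyTable ADD NewColumn int NULL;", con))
        {
            con.Open();
            cmd.ExecuteNonQuery();
        }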

    Read the article

  • Dataset field DBNull -> int?

    - by BobClegg
    SQLServer int field. Value sometimes null. DataAdapter fills dataset OK and can display data in DatagridView OK. When trying to retrieve the data programmatically from the dataset, the dataset field retrieval code throws a StrongTypingException error. [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] public int curr_reading { get { try { return ((int)(this[this.tableHistory.curr_readingColumn])); } catch (global::System.InvalidCastException e) { throw new global::System.Data.StrongTypingException("The value for column \'curr_reading\' in table \'History\' is DBNull.", e); } I got past this by checking for DBNull in the get accessor and returning null, but... When the dataset structure is modified (still developing) my changes (unsurprisingly) are gone. What is the best way to handle this situation? It seems I am stuck with dealing with it at the dataset level. Is there some sort of attribute that can tell the auto code generator to leave the changes in place?
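    One way to keep the workaround out of the generated file is a partial class extension next to the dataset: the dataset code generator emits the row types as partial, so an extra nullable accessor survives regeneration. A sketch; the dataset and row type names (MyDataSet, HistoryRow) are guesses based on the question and must match the generated ones:

        // Lives in its own file, e.g. MyDataSet.Extensions.cs -- regeneration won't touch it.
        public partial class MyDataSet
        {
            public partial class HistoryRow
            {
                public int? CurrReadingOrNull
                {
                    get
                    {
                        // Same column reference the generated accessor uses.
                        return IsNull(tableHistory.curr_readingColumn)
                            ? (int?)null
                            : curr_reading;
                    }
                }
            }
        }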

    Read the article

  • Data import wizard library for .Net?

    - by Phil
    Does anyone know of a 3rd party data import wizard that can be embedded into applications? It should import from Excel, Access, SQLServer, csv, tab-separated flat file, XML, Oracle etc. We have a fixed data structure within our application and the user should be able to configure the wizard to match his/her import fields to our own data structure. The wizard should be a library of sorts – preferably a .Net type library. We may want to have it both web-based and desktop based (hence we may need an ASP.Net controls version and a Winforms version). We may also want integration with WPF and Silverlight. If there’s no UI wizard available, does anyone know of a non-UI library that supports easily configurable import from many, many different datasources?

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show on each tables size (and other incredibly useful information such as index stuff) and the biggest table is 1.1 million records which takes up 150MB of data. We have less than 50 tables most of which take up less than 1MB of data. After looking at the size of each table I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SqlServer (2005) reports is 0%. The log mode is set to simple. At this point my main concern is I feel like I have 19GB of unaccounted for used space. Is there something else I should look at? Normally I wouldn't care and would make this a passive research project except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even if it were down to 5GB!) than 20GB of data each week. sp_spaceused reports the following: Navigator-Production 19184.56 MB 3.02 MB And the second part of it: 19640872 KB 19512112 KB 108184 KB 20576 KB while I've found a few other scripts (such as the one from two of the server database size questions here, they all report the same information either found above or below). The script I am using is from SqlTeam. Here is the header info: * BigTables.sql * Bill Graziano (SQLTeam.com) * graz@<email removed> * v1.11 The top few tables show this (table, rows, reserved space, data, index, unused, etc): Activity 1143639 131 MB 89 MB 41768 KB 1648 KB 46% 1% EventAttendance 883261 90 MB 58 MB 32264 KB 328 KB 54% 0% Person 113437 31 MB 15 MB 15752 KB 912 KB 103% 3% HouseholdMember 113443 12 MB 6 MB 5224 KB 432 KB 82% 4% PostalAddress 48870 8 MB 6 MB 2200 KB 280 KB 36% 3% The rest of the tables are either the same in size or smaller. No more than 50 tables. Update 1: - All tables use unique identifiers. Usually an int incremented by 1 per row. I've also re-indexed everything. I ran the dbcc shrink command as well as updating the usage before and after. And over and over. An interesting thing I found is that when I restarted the server and confirmed no one was using it (and no maintenance procs are running, this is a very new application -- under a week old) and when I went to run the shrink, every now and then it would say something about data changed. Googling yielded too few useful answers with the obvious not applying (it was 1am and I disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, are probably under 50k in rows. Even if those rows were the biggest rows, that wouldn't be more than 100M I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go. Update 2: sp_spaceused 'Activity' yields this (which seems right on the money): Activity 1143639 134488 KB 91072 KB 41768 KB 1648 KB Fill factor was 90. All primary keys are ints. Here is the command I used to 'updateusage': DBCC UPDATEUSAGE(0); Update 3: Per Edosoft's request: Image 111975 2407773 19262184 It appears as though the image table believes it's the 19GB portion. I don't understand what this means though. Is it really 19GB or is it misrepresented? 
    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated might be the case. The only index on the image table is a clustered PK. Is this something I can fix or do I just have to deal with it? The regular script shows the Image table to be 6MB in size. Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to be roughly 2-5KB each and on a normal file system don't consume much space, but on SqlServer they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.

    Read the article

  • Is it possible to "merge" the values of multiple records into a single field without using a stored

    - by j0rd4n
    A co-worker posed this question to me, and I told them, "No, you'll need to write a sproc for that". But I thought I'd give them a chance and put this out to the community. Essentially, they have a table with keys mapping to multiple values. For a report, they want to aggregate on the key and "mash" all of the values into a single field. Here's a visual:

    ---   -------
    Key   Value
    ---   -------
    1     A
    1     B
    1     C
    2     X
    2     Y

    The result would be as follows:

    ---   -------
    Key   Value
    ---   -------
    1     A,B,C
    2     X,Y

    They need this in SQLServer 2005. Again, I think they need to write a stored procedure, but if anyone knows a magic out-of-the-box function that does this, I'd be impressed.
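    If the no-stored-procedure rule still allows plain T-SQL, the usual SQL Server 2005 trick is FOR XML PATH('') plus STUFF, which concatenates per key in a single SELECT. The sketch below runs it from client code; the table and column names (dbo.KeyValues, [Key], Value) are stand-ins for the real schema:

        // Assumed schema: dbo.KeyValues([Key] int, Value varchar(...)). The inner
        // FOR XML PATH('') query builds "A,B,C" per key; STUFF trims the leading comma.
        const string sql = @"
            SELECT k.[Key],
                   STUFF((SELECT ',' + v.Value
                          FROM dbo.KeyValues v
                          WHERE v.[Key] = k.[Key]
                          FOR XML PATH('')), 1, 1, '') AS MergedValues
            FROM dbo.KeyValues k
            GROUP BY k.[Key];";

        using (var cmd = new SqlCommand(sql, connection))   // 'connection' is an open SqlConnection
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));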

    Read the article

  • Creating a CLR UDF with variable number of parameters

    - by josephj1989
    Hi, I want a function that finds the greatest of a list of String values passed in. I want to invoke it from SQL Server as Select greatest('Abcd','Efgh','Zxy','EAD'). It should return Zxy. The number of parameters is variable. Incidentally, it is very similar to the Oracle GREATEST function. So I wrote a very simple CLR function (VS2008) and tried to deploy it. See below: public partial class UserDefinedFunctions { [Microsoft.SqlServer.Server.SqlFunction] public static SqlString Greatest(params SqlString[] p) { SqlString max=p[0]; foreach (string s in p) max = s.CompareTo(max) > 0 ? s : max; return max; } }; But when I try to compile or deploy it I get the following error: Cannot find data type SqlString[]. Is it possible to satisfy my requirement using SQL CLR?
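    The error is expected: SQL CLR has no SQL type that maps to SqlString[], so a params array cannot appear in a UDF signature. The usual workarounds are a fixed set of optional parameters or a single delimited string that the function splits itself. A sketch of the latter (the function and parameter names are mine, and the ordinal comparison is a simplification of the original CompareTo):

        // Call as: SELECT dbo.GreatestOf('Abcd|Efgh|Zxy|EAD', '|');
        using System;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public partial class UserDefinedFunctions
        {
            [SqlFunction]
            public static SqlString GreatestOf(SqlString delimitedValues, SqlString delimiter)
            {
                if (delimitedValues.IsNull || delimiter.IsNull)
                    return SqlString.Null;

                string max = null;
                foreach (string s in delimitedValues.Value.Split(
                             new[] { delimiter.Value }, StringSplitOptions.None))
                {
                    if (max == null || string.CompareOrdinal(s, max) > 0)
                        max = s;
                }
                return new SqlString(max);
            }
        }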

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous, I really can't include all.

    I. New AI Automated Installer RBAC profiles have been introduced to enable delegation of installation tasks.

    II. The interactive installer now supports installing the OS to iSCSI targets.

    III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg

    IV. The new command svcbundle helps you to create SMF manifests without having to struggle with XML editing. (btw, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop )

    V. pfedit: Ever wondered how to delegate editing permissions to certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now, the new pfedit command provides a solution exactly to this challenge - an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples.

    VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...) has been included in Solaris 11.1 as an alternative to syslog.

    VII. Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features:
    - ZOSS - Zones on Shared Storage: Placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg.
    - Parallel updates - with S11's boot environments, updating zones was no problem and meant no downtime anyway, but still, now you can update them in parallel, a much faster update if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.
    - Per-zone fstype statistics: Running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, over kstat you can find out which zone's I/O has an impact on the other ones; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc
    - Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.
    - NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.

    VIII. Security got a lot of attention too:
    - Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.
    - PAM is now configurable on a per-user basis instead of system wide, allowing different authentication requirements for different users.
    - SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.
    - SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4 implemented in silicon! That is, government-approved cryptography at HW-speed.
    - Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.

    IX. Networking, as one of the core strengths of Solaris 11, has been extended with:
    - Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.
    - DataLink MultiPathing, DLMP, enables link aggregation to span across multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc
    - VNIC live migration is now supported from one physical NIC to another on-the-fly.

    X. Data management:
    - FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1 FedFS uses LDAP as the one global nameservice to bind them all.
    - The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm - thus improving iSCSI throughput while reducing CPU utilization on a T4.
    - Storage locking improvements are now RAC aware, speeding up throughput with better locking-communication between nodes by up to 20%!

    XI. Kernel performance optimizations:
    - The new Virtual Memory subsystem ("VM2") now scales to 100+ TB memory ranges.
    - The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs.
    - OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.

    XII. The Power Aware Dispatcher is now enabled by default, reducing power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in the Solaris 11 OS will cooperate.

    XIII. x86 boot: an upgrade to GRUB2 (the GRand Unified Bootloader, version 2). Because grub2 differs syntactically in its configuration from grub1, one shall not edit the new grub configuration (grub.cfg) but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB.

    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.

    XV. Support for Solaris Cluster 4.1: The What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away.

    ...aand I seriously need to stop here. There's a lot I missed: Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc - but if I mentioned all those then I'd effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blogpost is a summary of that extract.

    For closing words, allow me to come back to Requests for Enhancement, RFEs. Any customer can request features. Open up a Support Request, explain that this is an RFE, and describe the feature you/your company would like to have implemented in S11. The more SRs are collected for an RFE, the more chance it has of getting implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation, using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)

    wbr, charlie

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 2 – Table per Type (TPT)

    - by mortezam
    In the previous blog post you saw that there are three different approaches to representing an inheritance hierarchy, and I explained Table per Hierarchy (TPH) as the default mapping strategy in EF Code First. We argued that the disadvantages of TPH may be too serious for our design since it results in denormalized schemas that can become a major burden in the long run. In today's blog post we are going to learn about Table per Type (TPT) as another inheritance mapping strategy, and we'll see that TPT doesn't expose us to this problem.

    Table per Type (TPT)
    Table per Type is about representing inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties—including abstract classes—has its own table. The table for subclasses contains columns only for each noninherited property (each property declared by the subclass itself) along with a primary key that is also a foreign key of the base class table. This approach is shown in the following figure: For example, if an instance of the CreditCard subclass is made persistent, the values of properties declared by the BillingDetail base class are persisted to a new row of the BillingDetails table. Only the values of properties declared by the subclass (i.e. CreditCard) are persisted to a new row of the CreditCards table. The two rows are linked together by their shared primary key value. Later, the subclass instance may be retrieved from the database by joining the subclass table with the base class table.

    TPT Advantages
    The primary advantage of this strategy is that the SQL schema is normalized. In addition, schema evolution is straightforward (modifying the base class or adding a new subclass is just a matter of modifying/adding one table). Integrity constraint definitions are also straightforward (note how CardType in the CreditCards table is now a non-nullable column). Another much more important advantage is the ability to handle polymorphic associations (a polymorphic association is an association to a base class, hence to all classes in the hierarchy, with dynamic resolution of the concrete class at runtime). A polymorphic association to a particular subclass may be represented as a foreign key referencing the table of that particular subclass.
    Implement TPT in EF Code First
    We can create a TPT mapping simply by placing the Table attribute on the subclasses to specify the mapped table name (Table is a new data annotation and has been added to the System.ComponentModel.DataAnnotations namespace in CTP5):

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        [Table("BankAccounts")]
        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        [Table("CreditCards")]
        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    If you prefer the fluent API, then you can create a TPT mapping by using the ToTable() method:

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BankAccount>().ToTable("BankAccounts");
            modelBuilder.Entity<CreditCard>().ToTable("CreditCards");
        }

    Generated SQL For Queries
    Let's take an example of a simple non-polymorphic query that returns a list of all the BankAccounts:

        var query = from b in context.BillingDetails.OfType<BankAccount>() select b;

    Executing this query (by invoking the ToList() method) results in the following SQL statements being sent to the database (on the bottom, you can also see the result of executing the generated query in SQL Server Management Studio):

    Now, let's take an example of a very simple polymorphic query that requests all the BillingDetails, which includes both BankAccount and CreditCard types. One form projects some properties out of the base class BillingDetail, without querying for anything from any of the subclasses; the other simply selects the entities:

        var query = from b in context.BillingDetails
                    select new { b.BillingDetailId, b.Number, b.Owner };

        var query = from b in context.BillingDetails select b;

    This LINQ query seems even simpler than the previous one, but the resulting SQL query is not as simple as you might expect:

    As you can see, EF Code First relies on an INNER JOIN to detect the existence (or absence) of rows in the subclass tables CreditCards and BankAccounts so it can determine the concrete subclass for a particular row of the BillingDetails table. Also, the SQL CASE statements that you see in the beginning of the query are just to ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPT Considerations
    Even though this mapping strategy is deceptively simple, experience shows that performance can be unacceptable for complex class hierarchies because queries always require a join across many tables. In addition, this mapping strategy is more difficult to implement by hand—even ad-hoc reporting is more complex. This is an important consideration if you plan to use handwritten SQL in your application. (For ad hoc reporting, database views provide a way to offset the complexity of the TPT strategy. A view may be used to transform the table-per-type model into the much simpler table-per-hierarchy model.)

    Summary
    In this post we learned about Table per Type as the second inheritance mapping in our series. So far, the strategies we've discussed require extra consideration with regard to the SQL schema (e.g. in TPT, foreign keys are needed).
    This situation changes with the Table per Concrete Type (TPC) that we will discuss in the next post.

    References: ADO.NET team blog; Java Persistence with Hibernate book.

    Read the article

  • DBMS for POS software

    - by Andrew Smith
    Hello, I want to develop a POS application in .NET (C#) that would be used to rent items. I have a good idea of what will be done, and the famous question that I have is about the DBMS I should use. I would like to use a MySQL database. The question is: if some places use only one computer (no network, no internet connection), can I use a MySQL database locally? Do I need to install MySQL Server on all those computers to be able to use such a database? I know SQLite, but I'm not sure whether its limitations could cause problems in the future... I also looked at the SQL Server Express versions. (I must consider that other points of sale use multiple computers and handle more transactions, so there I can't use SQL Express or SQLite.) So can anybody suggest what I should do in this situation? Thanks

    Read the article

  • JPA optimistic lock - setting @Version to entity class cause query to include VERSION as column

    - by masato-san
    I'm using JPA (TopLink Essentials), NetBeans 6.8 and GlassFish v3. In my entity class I added the @Version annotation to enable optimistic locking at transaction commit; however, after I added the annotation, my query started including VERSION as a column, thus throwing a SQL exception. None of this is mentioned in any tutorial I've seen so far. What could be wrong? Snippet: public class MasatosanTest2 implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @Column(name = "id") private Integer id; @Column(name = "username") private String username; @Column(name = "note") private String note; //here adding Version @Version int version; query used: SELECT m FROM MasatosanTest2 m Internal Exception: com.microsoft.sqlserver.jdbc.SQLServerException Call: SELECT id, username, note, VERSION FROM MasatosanTest2

    Read the article

  • Logging which is the best way

    - by Tony
    Hi, People who talk about loggers here never talk about EventLog; I think it is good for a Windows system. Is it reliable, or will I find it dead some bad morning? Why not log everything to SQL Server? I am creating an e-commerce website; if SQL Server is down, the website will be down anyway, but I am worried about temporary connection failures. What do you think? Why does everyone like files? They can grow to a great size, too big to handle; or maybe I should create another file when a file gets too big, and I can create a file with a date. Has anyone tried the MS Enterprise Library? Talk to me about it. Thanks
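    Since the question asks specifically about EventLog: it is a workable target for a Windows-only site and the API is small; a minimal sketch is below (the source name is invented, and creating a source needs admin rights, so it is normally done once at install time). In practice most teams put log4net, NLog or the Enterprise Library logging block in front of it so the destination -- file, event log or database -- stays a configuration choice rather than a code change:

        using System.Diagnostics;

        const string source = "MyECommerceSite";                 // illustrative source name
        if (!EventLog.SourceExists(source))
            EventLog.CreateEventSource(source, "Application");   // requires admin; do at install time

        EventLog.WriteEntry(source,
            "Checkout failed: payment gateway timeout.",
            EventLogEntryType.Error);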

    Read the article

  • Recommendations for supporting both Oracle and MSSQL in the same ASP.NET app with NHibernate

    - by Hugo Zapata
    Our client wants to support both SQLServer and Oracle in the next project. Our experience comes from the .NET/SQL Server platform. We will hire an Oracle developer, but our concern is with the data access code. Will NHibernate make the DB engine transparent for us? I don't think so, but I would like to hear from developers who have faced similar situations. I know this question is a little vague, because I don't have Oracle experience, so I don't know what issues we will find.
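    In my experience NHibernate gets you most of the way: SQL generation goes through a Dialect, so the engine choice is largely configuration (the places it cannot hide are native SQL queries, engine-specific identity/sequence strategies and the ADO.NET driver). A hedged sketch of switching dialect and driver from a setting; the app-setting name, connection-string name and the Order type are placeholders:

        using System.Configuration;

        var cfg = new NHibernate.Cfg.Configuration();
        bool useOracle = ConfigurationManager.AppSettings["DbEngine"] == "Oracle";

        cfg.SetProperty(NHibernate.Cfg.Environment.Dialect, useOracle
            ? "NHibernate.Dialect.Oracle10gDialect"
            : "NHibernate.Dialect.MsSql2005Dialect");
        cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionDriver, useOracle
            ? "NHibernate.Driver.OracleDataClientDriver"
            : "NHibernate.Driver.SqlClientDriver");
        cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionString,
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString);

        cfg.AddAssembly(typeof(Order).Assembly);   // assembly holding your mappings (Order is a placeholder)

        var sessionFactory = cfg.BuildSessionFactory();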

    Read the article

  • problem with trying to create ssms add-in

    - by Karl
    I'm trying to create an add-in for SSMS 2008 and/or 2008 R2 but I've run into a problem straight away. I can get my add-in to work and on SSMS start-up get it to simply show a message box. However, after downloading various code-samples, when trying to reference Microsoft.SqlServer.Management.UI.VSIntegration.ServiceCache I get a null reference exception: Commands2 commands = (Commands2)ServiceCache.ExtensibilityModel.Commands; I get this problem when using SSMS 2008 or SSMS 2008 R2. I'm working on Visual Studio 2010. It's a bit frustrating because I'm keen to learn more about SSMS add-ins but can't seem to get past the few samples out there. Any advice/tips appreciated. Thanks
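    No definitive fix to offer, but one thing worth ruling out (this is an assumption, not a confirmed cause): ServiceCache.ExtensibilityModel may simply not be populated yet at the moment the add-in first touches it during start-up. A sketch that defers the Commands2 lookup to OnStartupComplete and guards against null, so you can at least see whether timing is the issue:

        // Inside the add-in's Connect class, which implements Extensibility.IDTExtensibility2;
        // the remaining interface members stay as generated.
        public void OnConnection(object application, ext_ConnectMode connectMode,
                                 object addInInst, ref Array custom)
        {
            // Keep start-up work minimal; don't touch ServiceCache here.
        }

        public void OnStartupComplete(ref Array custom)
        {
            var model = ServiceCache.ExtensibilityModel;   // Microsoft.SqlServer.Management.UI.VSIntegration
            if (model == null)
            {
                System.Windows.Forms.MessageBox.Show("ExtensibilityModel is still null after startup.");
                return;
            }

            EnvDTE80.Commands2 commands = (EnvDTE80.Commands2)model.Commands;
            System.Windows.Forms.MessageBox.Show("Commands available: " + commands.Count);
        }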

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here's a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception - in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate.

    There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It's better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk.

    Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets by its developers. Subtle errors are easy to miss if you are not actually doing real work using the application; loud errors are obvious.

    Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors.

    This applies to more than just exceptions causing threads to exit; any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way.

    This is all simple as long as the call stack only contains your code, but when your functions start to be called by third party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.

    1. Windows Forms drag-drop events
    Usually if you throw an exception from a WinForms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior - the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop events are different - throw an exception from one of these and it will just be silently swallowed with no explanation. By the way, it's not just drag and drop events. Timer events do it too. You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.

    2. SSMS integration for SQL Tab Magic
    A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed. I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.

    Handling exceptions
    Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried 3 options: log them, alert the user, and automatically send them home.

    There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface, with a greater emphasis on fluent rather than XML configuration.

    Alerting the user serves two purposes. Firstly it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or that they should keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix it. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars.

    This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so.

    We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces including the values of all local variables and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system.

    The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings about how our software works, and overall is a key piece in delivering higher quality software. However, it is no substitute for having motivated, cunning testers in the building - and we're looking to hire more of those too.

    If you found this post interesting you should follow me on twitter.
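    To make the drag-drop point from the first example concrete, this is roughly the handler-boundary pattern the post argues for: catch at the event handler, show a loud, detailed message (or feed your error-reporting pipeline), and never let the exception vanish. ImportDroppedFiles is a hypothetical stand-in for the real work:

        private void listView_DragDrop(object sender, DragEventArgs e)
        {
            try
            {
                ImportDroppedFiles(e.Data);   // hypothetical application logic
            }
            catch (Exception ex)
            {
                // Re-throwing here would just be swallowed again, so report it loudly instead.
                MessageBox.Show(ex.ToString(), "Drag-drop import failed",
                                MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }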

    Read the article

  • How to Identify Which Hardware Component is Failing in Your Computer

    - by Chris Hoffman
    Concluding that your computer has a hardware problem is just the first step. If you're dealing with a hardware issue and not a software issue, the next step is determining what hardware problem you're actually dealing with. If you purchased a laptop or pre-built desktop PC and it's still under warranty, you don't need to care about this. Have the manufacturer fix the PC for you — figuring it out is their problem. If you've built your own PC or you want to fix a computer that's out of warranty, this is something you'll need to do on your own.

    Blue Screen 101: Search for the Error Message
    This may seem like obvious advice, but searching for information about a blue screen's error message can help immensely. Most blue screens of death you'll encounter on modern versions of Windows will likely be caused by hardware failures. The blue screen of death often displays information about the driver that crashed or the type of error it encountered. For example, let's say you encounter a blue screen that identified "NV4_disp.dll" as the driver that caused the blue screen. A quick Google search will reveal that this is the driver for NVIDIA graphics cards, so you now have somewhere to start. It's possible that your graphics card is failing if you encounter such an error message.

    Check Hard Drive SMART Status
    Hard drives have a built-in S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) feature. The idea is that the hard drive monitors itself and will notice if it starts to fail, providing you with some advance notice before the drive fails completely. This isn't perfect, so your hard drive may fail even if SMART says everything is okay. If you see any sort of "SMART error" message, your hard drive is failing. You can use SMART analysis tools to view the SMART health status information your hard drives are reporting.

    Test Your RAM
    RAM failure can result in a variety of problems. If the computer writes data to RAM and the RAM returns different data because it's malfunctioning, you may see application crashes, blue screens, and file system corruption. To test your memory and see if it's working properly, use Windows' built-in Memory Diagnostic tool. The Memory Diagnostic tool will write data to every sector of your RAM and read it back afterwards, ensuring that all your RAM is working properly.

    Check Heat Levels
    How hot is it inside your computer? Overheating can result in blue screens, crashes, and abrupt shut downs. Your computer may be overheating because you're in a very hot location, it's ventilated poorly, a fan has stopped inside your computer, or it's full of dust. Your computer monitors its own internal temperatures and you can access this information. It's generally available in your computer's BIOS, but you can also view it with system information utilities such as SpeedFan or Speccy. Check your computer's recommended temperature level and ensure it's within the appropriate range. If your computer is overheating, you may see problems only when you're doing something demanding, such as playing a game that stresses your CPU and graphics card. Be sure to keep an eye on how hot your computer gets when it performs these demanding tasks, not only when it's idle.

    Stress Test Your CPU
    You can use a utility like Prime95 to stress test your CPU. Such a utility will force your computer's CPU to perform calculations without allowing it to rest, working it hard and generating heat. If your CPU is becoming too hot, you'll start to see errors or system crashes. Overclockers use Prime95 to stress test their overclock settings — if Prime95 experiences errors, they throttle back on their overclocks to ensure the CPU runs cooler and more stable. It's a good way to check if your CPU is stable under load.

    Stress Test Your Graphics Card
    Your graphics card can also be stress tested. For example, if your graphics driver crashes while playing games, the games themselves crash, or you see odd graphical corruption, you can run a graphics benchmark utility like 3DMark. The benchmark will stress your graphics card and, if it's overheating or failing under load, you'll see graphical problems, crashes, or blue screens while running the benchmark. If the benchmark seems to work fine but you have issues playing a certain game, it may just be a problem with that game.

    Swap it Out
    Not every hardware problem is easy to diagnose. If you have a bad motherboard or power supply, their problems may only manifest through occasional odd issues with other components. It's hard to tell if these components are causing problems unless you replace them completely. Ultimately, the best way to determine whether a component is faulty is to swap it out. For example, if you think your graphics card may be causing your computer to blue screen, pull the graphics card out of your computer and swap in a new graphics card. If everything is working well, it's likely that your previous graphics card was bad. This isn't easy for people who don't have boxes of components sitting around, but it's the ideal way to troubleshoot. Troubleshooting is all about trial and error, and swapping components out allows you to pin down which component is actually causing the problem through a process of elimination.

    This isn't a complete guide to everything that could likely go wrong and how to identify it - someone could write a full textbook on identifying failing components and still not cover everything. But the tips above should give you some places to start dealing with the more common problems.

    Image Credit: Justin Marty on Flickr

    Read the article

  • Problem in SQL Server 2005 using ASP.Net

    - by megala
    I created an ASP.NET project using a SQL Server database as the back end. It shows the following error. How do I solve it? ===============Coding Imports System.Data.SqlClient Partial Class Default2 Inherits System.Web.UI.Page Dim myConnection As SqlConnection Dim myCommand As SqlCommand Dim ra As Integer Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click myConnection = New SqlConnection("Data Source=JANANI-FF079747\SQLEXPRESS;Initial Catalog=new;Persist Security Info=True;User ID=sa;Password=janani") 'server=localhost;uid=sa;pwd=;database=pubs") myConnection.Open() myCommand = New SqlCommand("Insert into table3 values 'janani','jan'") ra = myCommand.ExecuteNonQuery() ========---> error is showing here MsgBox("New Row Inserted" & ra) myConnection.Close() End Sub End Class =========Error Message============ ExecuteNonQuery: Connection property has not been initialized.
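    The error in the question is raised because the SqlCommand is never associated with the opened connection (and, separately, the VALUES list needs parentheses). Below is the corrected pattern, shown in C# for brevity; in the VB code the key change is New SqlCommand(sql, myConnection). The connection string is a placeholder:

        using (var myConnection = new SqlConnection("...your connection string..."))
        using (var myCommand = new SqlCommand(
                   "INSERT INTO table3 VALUES (@col1, @col2)", myConnection))
        {
            myCommand.Parameters.AddWithValue("@col1", "janani");
            myCommand.Parameters.AddWithValue("@col2", "jan");
            myConnection.Open();
            int rowsInserted = myCommand.ExecuteNonQuery();
        }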

    Read the article

  • Calling Sub from EventHandler

    - by madlan
    I'm using the code below to update controls from another thread (works great). How would I call a Sub (named UpdateList)? UpdateList updates a ListView with a list of databases on a selected SQL instance and requires no arguments. Private Sub CompleteEventHandler(ByVal sender As Object, ByVal e As Microsoft.SqlServer.Management.Common.ServerMessageEventArgs) SetControlPropertyValue(Label8, "text", e.ToString) UpdateList() MessageBox.Show("Restore Complete") End Sub Delegate Sub SetControlValueCallback(ByVal oControl As Control, ByVal propName As String, ByVal propValue As Object) Private Sub SetControlPropertyValue(ByVal oControl As Control, ByVal propName As String, ByVal propValue As Object) If (oControl.InvokeRequired) Then Dim d As New SetControlValueCallback(AddressOf SetControlPropertyValue) oControl.Invoke(d, New Object() {oControl, propName, propValue}) Else Dim t As Type = oControl.[GetType]() Dim props As PropertyInfo() = t.GetProperties() For Each p As PropertyInfo In props If p.Name.ToUpper() = propName.ToUpper() Then p.SetValue(oControl, propValue, Nothing) End If Next End If End Sub Based On: http://www.shabdar.org/cross-thread-operation-not-valid.html
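    Since UpdateList takes no arguments, it can be marshalled onto the UI thread with Control.Invoke and a MethodInvoker; no custom delegate is needed. Shown in C# for brevity (the VB equivalent is Me.Invoke(New MethodInvoker(AddressOf UpdateList))); UpdateList is assumed to be a method on the same form:

        private void CompleteEventHandler(object sender,
            Microsoft.SqlServer.Management.Common.ServerMessageEventArgs e)
        {
            SetControlPropertyValue(Label8, "text", e.ToString());

            if (this.InvokeRequired)
                this.Invoke(new MethodInvoker(UpdateList));   // marshal the call to the UI thread
            else
                UpdateList();

            MessageBox.Show("Restore Complete");
        }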

    Read the article
