Search Results

Search found 1110 results on 45 pages for 'constraint'.


  • Stored Procedure, 'incorrect syntax error'

    - by jacksonSD
    Attempting to figure out stored procedures, and I'm getting this error: "Msg 156, Level 15, State 1, Line 5 Incorrect syntax near the keyword 'Procedure'." The error seems to be on the if, but I can drop other existing tables with stored procedures the exact same way, so I'm not clear on why this isn't working. Can anyone shed some light? Begin Set nocount on Begin Try Create Procedure uspRecycle as if OBJECT_ID('Recycle') is not null Drop Table Recycle create table Recycle (RecycleID integer constraint PK_integer primary key, RecycleType nchar(10) not null, RecycleDescription nvarchar(100) null) insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('1','Compost','Product is compostable, instructions included in packaging') insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('2','Return','Product is returnable to company for 100% reuse') insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('3','Scrap','Product is returnable and will be reclaimed and reprocessed') insert into Recycle (RecycleID,RecycleType,RecycleDescription) values ('4','None','Product is not recycleable') End Try Begin Catch DECLARE @ErrMsg nvarchar(4000); SELECT @ErrMsg = ERROR_MESSAGE(); Throw 50001, @ErrMsg, 1; End Catch -- checking to see if table exists and is loaded: If (Select count(*) from Recycle) >1 begin Print 'Recycle table created and loaded '; Print getdate() End set nocount off End
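
    A likely cause of the Msg 156 error above is that CREATE PROCEDURE must be the first (and only) statement in its batch, so it cannot sit inside an outer Begin / Begin Try block. A minimal sketch of the same logic restructured that way (the remaining INSERTs and the final row-count check are elided for brevity):

      -- Sketch only: the CREATE PROCEDURE opens the batch; SET NOCOUNT,
      -- TRY/CATCH and the table creation all live inside the procedure body.
      CREATE PROCEDURE uspRecycle
      AS
      BEGIN
          SET NOCOUNT ON;
          BEGIN TRY
              IF OBJECT_ID('Recycle') IS NOT NULL
                  DROP TABLE Recycle;

              CREATE TABLE Recycle
              (
                  RecycleID          integer CONSTRAINT PK_integer PRIMARY KEY,
                  RecycleType        nchar(10) NOT NULL,
                  RecycleDescription nvarchar(100) NULL
              );

              INSERT INTO Recycle (RecycleID, RecycleType, RecycleDescription)
              VALUES ('1', 'Compost', 'Product is compostable, instructions included in packaging');
              -- ... the other three INSERTs from the question go here ...
          END TRY
          BEGIN CATCH
              DECLARE @ErrMsg nvarchar(4000) = ERROR_MESSAGE();
              THROW 50001, @ErrMsg, 1;
          END CATCH;

          SET NOCOUNT OFF;
      END
      GO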

    Read the article

  • How can I bind the same dependency to many dependents in Ninject?

    - by Mike Bantegui
    Let's say I have three interfaces: IFoo, IBar, IBaz. I also have the classes Foo, Bar, and Baz that are the respective implementations. In the implementations, each depends on the interface IDependency. So for Foo (and similarly for Bar and Baz) the implementation might read: class Foo : IFoo { private readonly IDependency Dependency; public Foo(IDependency dependency) { Dependency = dependency; } public void Execute() { Console.WriteLine("I'm using {0}", Dependency.Name); } } Let's furthermore say I have a class Container which happens to contain instances of IFoo, IBar and IBaz: class Container : IContainer { private readonly IFoo _Foo; private readonly IBar _Bar; private readonly IBaz _Baz; public Container(IFoo foo, IBar bar, IBaz baz) { _Foo = foo; _Bar = bar; _Baz = baz; } } In this scenario, I would like the implementation class Container to bind against IContainer, with the constraint that the IDependency that gets injected into IFoo, IBar, and IBaz be the same for all three. In the manual way, I might implement it as: IDependency dependency = new Dependency(); IFoo foo = new Foo(dependency); IBar bar = new Bar(dependency); IBaz baz = new Baz(dependency); IContainer container = new Container(foo, bar, baz); How can I achieve this within Ninject? Note: I am not asking how to do nested dependencies. My question is how I can guarantee that a given dependency is the same among a collection of objects within a materialized service. To be extremely explicit, I understand that Ninject in its standard form will generate code that is equivalent to the following: IContainer container = new Container(new Foo(new Dependency()), new Bar(new Dependency()), new Baz(new Dependency())); I would not like that behavior.

    Read the article

  • Prevent cached objects to end up in the database with Entity Framework

    - by Dirk Boer
    We have an ASP.NET project with Entity Framework and SQL Azure. A big part of our data only needs to be updated a few times a day, while other data is very volatile. The data that barely changes we cache in memory at startup, detach from the context and then use mainly for reading, drastically lowering the number of database requests we have to do. The volatile data is requested every time by a DbContext per HTTP request. When we do an update to the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server. So far, so good. Until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data, and did a SaveChanges. Well, that was quite a mess. The whole data tree was added again and again by every update, corrupting the whole database with a whole lot of duplicated data. As a complete hack I added a completely arbitrary column with a UniqueConstraint and some gibberish data on one of the root tables, hopefully failing the SaveChanges() the next time we introduce such a bug because it will violate the unique constraint. But it is of course hacky, and I'm still pretty scared ;P Are there any better ways to prevent whole trees of cached objects ending up in the database? More information: the project is ASP.NET MVC. I cache this data because it is mainly read-only, and this saves a ton of extra database calls per HTTP request.
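
    A less arbitrary variant of the guard described above (sketched here with hypothetical names, since the real schema isn't shown) is to put a unique constraint on the natural key of the cached root table; if a bug ever re-attaches the cached graph as new entities, the duplicate inserts violate the constraint and SaveChanges fails instead of silently duplicating the whole tree:

      -- Hypothetical table/column names; the idea is a unique constraint on the
      -- business key of the mostly-read-only root table, so an accidental
      -- re-insert of cached rows fails fast rather than duplicating the tree.
      ALTER TABLE dbo.CachedRootTable
          ADD CONSTRAINT UQ_CachedRootTable_NaturalKey UNIQUE (NaturalKey);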

    Read the article

  • SQL Trigger Need to set x from a value

    - by Eric
    I'm stuck on the type of trigger needed for this constraint. I will have a price and a commission. The price determines the commission amount: < 100 - 4%, < 200 - 5%, etc. My idea: the database contains a separate table, called PC, that will hold 4 price values (101, 201, 401, 601) with their own matching commission %. When I create a property listing I want to calculate the commission they earn depending on the price entered. On insert, I need to check the new.price and compare it to the prices in PC. Once new.price is less than the price tuple, I set the commission to that tuple's value. create or replace TRIGGER findCommission BEFORE INSERT OR UPDATE ON HASLISTING FOR each ROW BEGIN IF (:NEW.ASKING_PRICE < 100001) THEN :NEW.COMMISSION = 6.0; END IF; IF (:NEW.ASKING_PRICE < 250001) THEN :NEW.COMMISSION = 5.5; END IF; IF (:NEW.ASKING_PRICE < 1000001) THEN :NEW.COMMISSION = 5.0; END IF; IF (:NEW.ASKING_PRICE > 1000000) THEN :NEW.COMMISSION = 4.0; END IF; END;
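
    Two details in the trigger as posted would keep it from behaving as described: PL/SQL assignment uses := rather than =, and because each IF is evaluated independently, a row under 100,001 also matches the later conditions and ends up with the last commission assigned. A hedged sketch using an ELSIF chain, keeping the thresholds and values from the question:

      -- Sketch only: same thresholds/commissions as above, but with := for
      -- assignment and an ELSIF chain so exactly one branch fires per row.
      CREATE OR REPLACE TRIGGER findCommission
      BEFORE INSERT OR UPDATE ON HASLISTING
      FOR EACH ROW
      BEGIN
        IF :NEW.ASKING_PRICE < 100001 THEN
          :NEW.COMMISSION := 6.0;
        ELSIF :NEW.ASKING_PRICE < 250001 THEN
          :NEW.COMMISSION := 5.5;
        ELSIF :NEW.ASKING_PRICE < 1000001 THEN
          :NEW.COMMISSION := 5.0;
        ELSE
          :NEW.COMMISSION := 4.0;
        END IF;
      END;
      /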

    Read the article

  • How to tell the Session to throw the error query[NHibernate]?

    - by xandy
    I made a test class against the repository methods shown below: public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File { try { _session.Save(FileToAdd); _session.Flush(); } catch (Exception e) { if (e.InnerException.Message.Contains("Violation of UNIQUE KEY")) throw new ArgumentException("Unique Name must be unique"); else throw e; } } public void RemoveFile(File FileToRemove) { _session.Delete(FileToRemove); _session.Flush(); } And the test class: try { Data.File crashFile = new Data.File(); crashFile.UniqueName = "NonUniqueFileNameTest"; crashFile.Extension = ".abc"; repo.AddFile(crashFile); Assert.Fail(); } catch (Exception e) { Assert.IsInstanceOfType(e, typeof(ArgumentException)); } // Clean up the file Data.File removeFile = repo.GetFiles().Where(f => f.UniqueName == "NonUniqueFileNameTest").FirstOrDefault(); repo.RemoveFile(removeFile); The test fails. When I step in to trace the problem, I found out that when I call _session.Flush() right after _session.Delete(), it throws the exception, and if I look at the SQL it issues, it is actually submitting an "INSERT INTO" statement, which is exactly the SQL that caused the UNIQUE CONSTRAINT error. I tried to encapsulate both in a transaction but the same problem still happens. Does anyone know the reason?

    Read the article

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
    I am working on planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support further languages etc. I am encountering an issue that I am stuck on. A few notes on this migration: We are using db2move to export and load the data and db2look to get the details of the database (tablespaces, tables, keys etc). We found the loading process worked nicely with db2move import; however, the data takes 7 hours to load and this was unacceptable downtime for when we actually complete the conversion on the main database. We are now using db2move load, which is much faster as it seems to simply throw the data in without integrity checks. Which leads to my current issue. After completing the db2move load process, several tables are in a check pending state and require integrity checks. Integrity checks are done via the following: set integrity for <table> immediate checked This works for most tables; however, some tables give an error: DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL3603N Check data processing through the SET INTEGRITY statement has found integrity violation involving a constraint with name "blah.SQL120124110232400". SQLSTATE=23514 The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table (as below): db2 create table blah_EXCEPTION like blah db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION NOW, here is the specific issue I am having! The above forces all the rows with issues into the specified exception table. Well that's just super, buuuuuut I cannot lose data in this conversion, it's simply unacceptable. The internets and IBM have a vague description of sending the violations to the exception tables and then "dealing with the data" that is in the exception table. Unfortunately, I am not clear on what this means, and I was hoping that some wise individual could help me out and let me know how I can retrieve this data from these tables and place it back in the original/proper table rather than leaving it in these exception tables. Let me know if you have any questions. Thanks!
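
    For what it's worth, "dealing with the data" in the exception table usually boils down to inspecting the rows that were moved out, fixing whatever caused the violation (bad column values, or missing parent rows for a foreign key), and inserting them back. Because the exception table here was created plainly LIKE the base table (no extra message/timestamp columns), the column lists line up, so a rough sketch using the same placeholder name would be:

      -- 1. See what SET INTEGRITY moved out of the base table:
      SELECT * FROM blah_EXCEPTION;

      -- 2. Fix the cause of the violation (correct the offending values, or add
      --    the missing parent rows the constraint points at), then move the
      --    corrected rows back:
      INSERT INTO blah
        SELECT * FROM blah_EXCEPTION;

      -- 3. Once the rows are back and the table passes SET INTEGRITY, clean up:
      DELETE FROM blah_EXCEPTION;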

    Read the article

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged?" I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files. What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!
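
    For reference, the commands discussed above look roughly like this when issued during an idle window (PostgreSQL 9.2 names); as noted, they flush and switch segments but do not by themselves empty pg_xlog:

      CHECKPOINT;                          -- flush dirty pages to the data files
      SELECT pg_switch_xlog();             -- force a switch to the next WAL segment
      SELECT pg_current_xlog_location();   -- current WAL write position, for comparison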

    Read the article

  • While running mvn jetty:run, the following error is shown

    - by munna
    C:\source\myprojectmvn jetty:run [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building AppFuse Spring MVC Application [INFO] task-segment: [jetty:run] [INFO] ------------------------------------------------------------------------ [INFO] Preparing jetty:run [WARNING] POM for 'xfire:xfire-jsr181-api:pom:1.0-M1:compile' is invalid. Its dependencies (if any) will NOT be available to the current build. [INFO] [warpath:add-classes {execution: default}] [INFO] [aspectj:compile {execution: default}] [INFO] [native2ascii:native2ascii {execution: native2ascii-utf8}] [INFO] [native2ascii:native2ascii {execution: native2ascii-8859_1}] [INFO] [resources:resources {execution: default-resources}] [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. b uild is platform dependent! [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 12 resources [INFO] Copying 1 resource [INFO] Copying 26 resources [INFO] Copying 26 resources [INFO] [compiler:compile {execution: default-compile}] [INFO] Nothing to compile - all classes are up to date [INFO] [resources:testResources {execution: default-testResources}] [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. b uild is platform dependent! [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 4 resources [INFO] Copying 9 resources [INFO] Preparing hibernate3:hbm2ddl [WARNING] Removing: hbm2ddl from forked lifecycle, to prevent recursive invocati on. [INFO] [warpath:add-classes {execution: default}] [INFO] [aspectj:compile {execution: default}] [INFO] [native2ascii:native2ascii {execution: native2ascii-utf8}] [INFO] [native2ascii:native2ascii {execution: native2ascii-8859_1}] [INFO] [resources:resources {execution: default-resources}] [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. b uild is platform dependent! [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! 
[INFO] Copying 12 resources [INFO] Copying 1 resource [INFO] Copying 26 resources [INFO] Copying 26 resources [INFO] Copying 26 resources [INFO] Copying 26 resources [INFO] [hibernate3:hbm2ddl {execution: default}] [INFO] Configuration XML file loaded: file:/C:/source/myproject/src/main/resourc es/hibernate.cfg.xml [INFO] Configuration XML file loaded: file:/C:/source/myproject/src/main/resourc es/hibernate.cfg.xml [INFO] Configuration Properties file loaded: C:\source\myproject\target\classes\ jdbc.properties alter table user_role drop foreign key FK143BF46A4FD90D75; alter table user_role drop foreign key FK143BF46AF503D155; drop table if exists app_user; drop table if exists role; drop table if exists user_role; create table app_user (id bigint not null auto_increment, account_expired bit no t null, account_locked bit not null, address varchar(150), city varchar(50) not null, country varchar(100), postal_code varchar(15) not null, province varchar(1 00), credentials_expired bit not null, email varchar(255) not null unique, accou nt_enabled bit, first_name varchar(50) not null, last_name varchar(50) not null, password varchar(255) not null, password_hint varchar(255), phone_number varcha r(255), username varchar(50) not null unique, version integer, website varchar(2 55), primary key (id)) ENGINE=InnoDB; create table role (id bigint not null auto_increment, description varchar(64), n ame varchar(20), primary key (id)) ENGINE=InnoDB; create table user_role (user_id bigint not null, role_id bigint not null, primar y key (user_id, role_id)) ENGINE=InnoDB; alter table user_role add index FK143BF46A4FD90D75 (role_id), add constraint FK1 43BF46A4FD90D75 foreign key (role_id) references role (id); alter table user_role add index FK143BF46AF503D155 (user_id), add constraint FK1 43BF46AF503D155 foreign key (user_id) references app_user (id); [INFO] [compiler:testCompile {execution: default-testCompile}] [INFO] Nothing to compile - all classes are up to date [INFO] [dbunit:operation {execution: test-compile}] [INFO] [jetty:run {execution: default-cli}] [INFO] Configuring Jetty for project: AppFuse Spring MVC Application [INFO] Webapp source directory = C:\source\myproject\src\main\webapp [INFO] web.xml file = C:\source\myproject\src\main\webapp\WEB-INF\web.xml [INFO] Classes = C:\source\myproject\target\classes [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\applicationContext-validation.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\applicationContext.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\dispatcher-servlet.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\menu-config.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\urlrewrite.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\validation.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\validator-rules-custom.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\validator-rules.xml [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webap p\WEB-INF\web.xml 2010-06-02 15:13:28.921::INFO: Logging to STDERR via org.mortbay.log.StdErrLog [INFO] Context path = / [INFO] Tmp directory = determined at runtime [INFO] Web defaults = org/mortbay/jetty/webapp/webdefault.xml 
[INFO] Web overrides = none [INFO] Webapp directory = C:\source\myproject\src\main\webapp [INFO] Starting jetty 6.1.9 ... 2010-06-02 15:13:28.983::INFO: jetty-6.1.9 2010-06-02 15:13:28.248::INFO: No Transaction manager found - if your webapp re quires one, please configure one. 2010-06-02 15:13:28.482:/:INFO: Initializing Spring root WebApplicationContext [myproject] ERROR [main] ContextLoader.initWebApplicationContext(215) | Context initialization failed org.springframework.beans.factory.BeanDefinitionStoreException: IOException pars ing XML document from ServletContext resource [/WEB-INF/xfire-servlet.xml]; nest ed exception is java.io.FileNotFoundException: Could not open ServletContext res ource [/WEB-INF/xfire-servlet.xml] at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:349) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:310) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:92) at org.springframework.context.support.AbstractRefreshableApplicationCon text.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123) at org.springframework.context.support.AbstractApplicationContext.obtain FreshBeanFactory(AbstractApplicationContext.java:423) at org.springframework.context.support.AbstractApplicationContext.refres h(AbstractApplicationContext.java:353) at org.springframework.web.context.ContextLoader.createWebApplicationCon text(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationConte xt(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitiali zed(ContextLoaderListener.java:45) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler. 
java:540) at org.mortbay.jetty.servlet.Context.startContext(Context.java:135) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.jav a:1220) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java: 510) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448 ) at org.mortbay.jetty.plugin.Jetty6PluginWebAppContext.doStart(Jetty6Plug inWebAppContext.java:110) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection .java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHan dlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection .java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java: 130) at org.mortbay.jetty.Server.doStart(Server.java:222) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.plugin.Jetty6PluginServer.start(Jetty6PluginServer. java:132) at org.mortbay.jetty.plugin.AbstractJettyMojo.startJetty(AbstractJettyMo jo.java:357) at org.mortbay.jetty.plugin.AbstractJettyMojo.execute(AbstractJettyMojo. java:293) at org.mortbay.jetty.plugin.AbstractJettyRunMojo.execute(AbstractJettyRu nMojo.java:203) at org.mortbay.jetty.plugin.Jetty6RunMojo.execute(Jetty6RunMojo.java:184 ) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPlugi nManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(Defa ultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandalone Goal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(Defau ltLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHan dleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmen ts(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLi fecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:6 0) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml] at org.springframework.web.context.support.ServletContextResource.getInp utStream(ServletContextResource.java:116) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:336) ... 
51 more 2010-06-02 15:13:29.919::WARN: Failed startup of context org.mortbay.jetty.plug in.Jetty6PluginWebAppContext@1ba4806{/,C:\source\myproject\src\main\webapp} org.springframework.beans.factory.BeanDefinitionStoreException: IOException pars ing XML document from ServletContext resource [/WEB-INF/xfire-servlet.xml]; nest ed exception is java.io.FileNotFoundException: Could not open ServletContext res ource [/WEB-INF/xfire-servlet.xml] at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:349) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:310) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:92) at org.springframework.context.support.AbstractRefreshableApplicationCon text.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123) at org.springframework.context.support.AbstractApplicationContext.obtain FreshBeanFactory(AbstractApplicationContext.java:423) at org.springframework.context.support.AbstractApplicationContext.refres h(AbstractApplicationContext.java:353) at org.springframework.web.context.ContextLoader.createWebApplicationCon text(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationConte xt(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitiali zed(ContextLoaderListener.java:45) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler. java:540) at org.mortbay.jetty.servlet.Context.startContext(Context.java:135) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.jav a:1220) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java: 510) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448 ) at org.mortbay.jetty.plugin.Jetty6PluginWebAppContext.doStart(Jetty6Plug inWebAppContext.java:110) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection .java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHan dlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection .java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java: 130) at org.mortbay.jetty.Server.doStart(Server.java:222) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.plugin.Jetty6PluginServer.start(Jetty6PluginServer. java:132) at org.mortbay.jetty.plugin.AbstractJettyMojo.startJetty(AbstractJettyMo jo.java:357) at org.mortbay.jetty.plugin.AbstractJettyMojo.execute(AbstractJettyMojo. 
java:293) at org.mortbay.jetty.plugin.AbstractJettyRunMojo.execute(AbstractJettyRu nMojo.java:203) at org.mortbay.jetty.plugin.Jetty6RunMojo.execute(Jetty6RunMojo.java:184 ) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPlugi nManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(Defa ultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandalone Goal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(Defau ltLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHan dleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmen ts(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLi fecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:6 0) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml] at org.springframework.web.context.support.ServletContextResource.getInp utStream(ServletContextResource.java:116) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:336) ... 
51 more 2010-06-02 15:13:29.152::WARN: Nested in org.springframework.beans.factory.Bean DefinitionStoreException: IOException parsing XML document from ServletContext r esource [/WEB-INF/xfire-servlet.xml]; nested exception is java.io.FileNotFoundEx ception: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml]: java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/ xfire-servlet.xml] at org.springframework.web.context.support.ServletContextResource.getInp utStream(ServletContextResource.java:116) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:336) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:310) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:92) at org.springframework.context.support.AbstractRefreshableApplicationCon text.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123) at org.springframework.context.support.AbstractApplicationContext.obtain FreshBeanFactory(AbstractApplicationContext.java:423) at org.springframework.context.support.AbstractApplicationContext.refres h(AbstractApplicationContext.java:353) at org.springframework.web.context.ContextLoader.createWebApplicationCon text(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationConte xt(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitiali zed(ContextLoaderListener.java:45) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler. java:540) at org.mortbay.jetty.servlet.Context.startContext(Context.java:135) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.jav a:1220) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java: 510) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448 ) at org.mortbay.jetty.plugin.Jetty6PluginWebAppContext.doStart(Jetty6Plug inWebAppContext.java:110) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection .java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHan dlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection .java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java: 130) at org.mortbay.jetty.Server.doStart(Server.java:222) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java: 39) at org.mortbay.jetty.plugin.Jetty6PluginServer.start(Jetty6PluginServer. 
java:132) at org.mortbay.jetty.plugin.AbstractJettyMojo.startJetty(AbstractJettyMo jo.java:357) at org.mortbay.jetty.plugin.AbstractJettyMojo.execute(AbstractJettyMojo. java:293) at org.mortbay.jetty.plugin.AbstractJettyRunMojo.execute(AbstractJettyRu nMojo.java:203) at org.mortbay.jetty.plugin.Jetty6RunMojo.execute(Jetty6RunMojo.java:184 ) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPlugi nManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(Defa ultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandalone Goal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(Defau ltLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHan dleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmen ts(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLi fecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:6 0) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) 2010-06-02 15:13:29.417::INFO: Started [email protected]:8080 [INFO] Started Jetty Server [INFO] Starting scanner at interval of 3 seconds.

    Read the article

  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find the count of objects in each income range. The usual solution with a range dimension was useless because the range an object belongs to changes over time. These ranges depend on a calculation that is done over the incomes measure, so I really had no option to use some classic solution. Thanks to the SSAS forums I got my problem solved and here is the solution. The problem – how to create dynamic ranges? I have two dimensions in the SSAS cube: one for invoices related to object rents and the other for objects. There is a measure that sums invoice totals and there are two calculations. One of these calculations performs some computations based on object income and some other object attributes. The second calculation uses the first one to define the income range where an object belongs. What I need is a query that returns how many objects there are in each group. I cannot use a dimension for the range because on one date an object may belong to one range and two days later to another income range. For example, if an object is not rented out for two days it makes no money and its income stays the same as before. If the object is rented out after two days it makes some income and this income may move it to another income range. Solution – fake dimension and scopes Thanks to Gerhard Brueckl from pmOne I got everything working fine after some struggling with BI Studio. The original discussion he pointed out can be found in the SSAS official forums thread Create a banding dimension that groups by a calculated measure. The solution was pretty simple by nature – we have to define a fake dimension for our range and use scopes to assign values to the object count measure. The object count measure is primitive – it just counts objects and that’s it. We will use it to find out how many objects belong to one or another range. We also need a table for the fake ranges and we have to fill it with the ranges used in the ranges calculation. After creating the table and filling it with ranges we can add the fake range dimension to our cube. Let’s see now how to solve the problem step by step. Solving the problem Suppose you have a ranges calculation defined like this: CASE WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0' WHEN [Measures].[ComplexCalc] >=0 AND [Measures].[ComplexCalc] <=50 THEN '0 - 50' ... END Let’s now create a new table in our analysis database and name it FakeIncomeRange. Here is the definition for the table: CREATE TABLE [FakeIncomeRange] ( [range_id] [int] IDENTITY(1,1) NOT NULL, [range_name] [nvarchar](50) NOT NULL, CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED ( [range_id] ASC ) ) Don’t forget to fill this table with the range labels you are using in the ranges calculation. To use the ranges from the table we have to add this table to our data source view and create a new dimension. We cannot bind this table to other tables; we have to leave it as it is. Our dimension has two attributes: ID and Name. The next thing to create is the calculation that returns the object count. This calculation is also fake because we override its values for all ranges later. The object count measure can be defined as a calculation like this: COUNT([Object].[Object].[Object].members) Now comes the most crucial part of our solution – defining the scopes. Based on the data used in this posting we have to define a scope for each of our ranges. Here is the example for the first range.
SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])     This=COUNT(            FILTER(                [Object].[Object].[Object].members,                 [Measures].[ComplexCalc] < 0          )     ) END SCOPE To get these scopes defined in cube we need MDX script blocks for each line given here. Take a look at the screenshot to get better idea what I mean. This example is given from SQL Server books online to avoid conflicts with NDA. :) From previous example the lines (MDX scripts) are: Line starting with SCOPE Block for This = Line with END SCOPE And now it is time to deploy and process our cube. Although you may see examples where there are semicolons in the end of statements you don’t need them. Visual Studio BI tools generate separate command from each script block so you don’t need to worry about it.

    Read the article

  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question is, “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 this got even more complicated. SQL Server 2014 introduces In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased manifold. In reality, nothing much changes, but we additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get a hang of all the innovations and new features introduced in each new version. So today’s blog is about the report added to SSMS which gives a high-level view of this new feature. This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list. If we try to launch the report on a database which does not have an In-Memory filegroup defined, then we see a message in the report. To demonstrate, I have created a fresh new database called MemoryOptimizedDB with no special filegroup. Here is the query used to identify whether a database has a memory-optimized filegroup or not. SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX' Once we add the filegroup using the command below, we see a different version of the report. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA GO The report is still empty because we have not defined any memory-optimized table in the database. Total allocated size is shown as 0 MB. Now, let’s add the folder location to the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “InMemory Optimized” objects. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO') TO FILEGROUP [IMO_FG] GO You may have to change the path based on your SQL Server configuration. Below is the script to create the table. USE MemoryOptimizedDB GO --Drop table if it already exists. IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL DROP TABLE dbo.SQLAuthority GO CREATE TABLE dbo.SQLAuthority ( ID INT IDENTITY NOT NULL, Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal', CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID), INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO As soon as the above script is executed, both the table and the index are created. If we run the report again, we see something like below. Notice that table memory is zero but the index is using memory. This is because the hash index needs memory to manage the buckets it creates. So even if the table is empty, the index consumes memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows. INSERT INTO SQLAuthority VALUES (DEFAULT) GO 10000 Here is the same report after inserting 10,000 rows into our In-Memory table. There are three sections in total in the whole report.
    1. Total memory consumed by In-Memory objects; 2. a pie chart showing memory distribution based on the type of consumer – table, index and system; and 3. details of memory usage by each table. The information for all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query. USE MemoryOptimizedDB GO SELECT OBJECT_NAME(OBJECT_ID), * FROM sys.dm_db_xtp_table_memory_stats WHERE OBJECT_ID > 0 GO This report helps the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer for designing a solution. I am sure in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund’s Blog. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Memory, SQL Reports
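
    If the per-table numbers behind the report are wanted directly, the same DMV can be queried with its size columns spelled out. A sketch along these lines (the *_kb column names are from memory, so verify them against your build):

      USE MemoryOptimizedDB
      GO
      SELECT OBJECT_NAME(object_id) AS TableName,
             memory_allocated_for_table_kb,
             memory_used_by_table_kb,
             memory_allocated_for_indexes_kb,
             memory_used_by_indexes_kb
      FROM sys.dm_db_xtp_table_memory_stats
      WHERE object_id > 0
      ORDER BY memory_allocated_for_table_kb + memory_allocated_for_indexes_kb DESC;
      GO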

    Read the article

  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is a subject that everyone wants to master. In the beginning everybody is at a novice level and spends lots of time learning how to master the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there is no room for ego in the technology field. There are always better solutions and better ideas out there and we should not resist them. Instead of resisting change and the new wave, I personally adopt it. Here is a similar example: as I progress toward the master level of performance tuning, I find that it is getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions when I face a similar situation next time. A few days ago I received a query that the user wanted to tune further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database. SELECT * FROM HumanResources.Employee e INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID; The user had a query similar to the one above, used in a very critical report, and wanted to get the best out of it. When I looked at the query, here were my initial thoughts: use only the columns in the SELECT statement that the application actually needs, and look at the query pattern and data workload to find the optimal index for it. Before I could give further solutions, I was told by the user that they need all the columns from all the tables and that creating an index was not allowed in their system. They could only re-write queries or use hints to further tune this query. Now I was in the constraint box – I believe * was not a great idea, but if they wanted all the columns, I believe we can’t do much besides using *. Additionally, if I cannot create a further index, I must come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero’s DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here. First open DB Optimizer and open a Tuning Job from File >> New >> Tuning Job. Once you open the DB Optimizer Tuning Job, follow the various steps indicated in the following diagram. Essentially we will take our original script and paste it into Step 1: New SQL Text, and right after that we will enable Step 2 for generating various cases, Step 3 for detailed analysis and Step 4 for executing each generated case. Finally we will click on Analysis in Step 5, which will generate the detailed analysis report in the result pane. The detail pane looks like this. It generates various cases of T-SQL based on the original query. It applies the various available hints to the query, generates various execution plans for the query and displays them in the result. You can clearly notice that the original query had a cost of 0.0841 and about 607 pages of logical reads, whereas the various options following it have different execution costs as well as logical reads. There are a few cases with higher logical reads and a few cases with very low logical reads.
If we pay attention, the row right after the original query has Merge_Join_Query in its description, the lowest execution cost value of 0.044 and the lowest logical reads of 29. This row contains the query which is the most optimal re-write of the original query. Let us double-click it. Here is the query: SELECT * FROM HumanResources.Employee e INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID OPTION (MERGE JOIN) If you notice, the above query has the additional hint of Merge Join. With the help of this Merge Join query hint, this query is now performing much better than before. The entire process takes less than 60 seconds. Please note that the Merge Join hint was optimal for this query, but it is not necessarily true that the same hint will be helpful in all queries. Additionally, if the workload or data pattern changes, the merge join hint may no longer be the optimal join. In that case, we will have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries and I discourage all of my users from using them. However, if you look at this example, this is a great case where hints are optimizing the performance of the query. It is humanly not possible to test out the various query hints and index options with the query to figure out which is the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide the way and select the best option from the suggestions provided. Let me know what you think of this article as well as your experience with DB Optimizer. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Simultaneously calling multiple methods on a WCF service from silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app, as the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I’d share, maybe it can help the next person with this problem. The App On start, the app would do a number of calls to different methods on a WCF service, this to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This was resulting in quite a long loading time for the whole UI, which was set up so it wouldn’t let the user interact with anything, until all the service calls had finished. First I broke out the longer running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section could keep loading independently. The Problem However this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn’t activate until the long running call returned. So now, I did what I should have done to start with, I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long running service method was placed, all subsequent call were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped… I knew of the issues where Silverlight is restricted by the browsers networking features in regards to number of simultaneous connections etc. However that just didn’t seem to be the issue here, you can clearly see in Fiddler that there’s numerous calls, but they’re just not returning. I thought of the problem maybe being in the WCF service, but the calls were really not that complicated and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario, I hit the search engines. I did a whole bunch of searching on things like “multiple simultaneous WCF calls from Silverlight” and “Calling long running WCF services from Silverlight” etc. etc. This however, pretty much got me nowhere, I found a whole heap of resources on how to do WCF calls from Silverlight but most of them were very basic and of no use what so ever. The fog is clearing It wasn’t until I came across the term “ WCF blocking calls” and started incorporating that in my searches I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum “Long-running WCF call blocking subsequent calls” which discussed the exact problem I was facing and the best part, one of the guys there had the solution! The short answer is in the forum post and the guys answering, has also done a more extensive blog post about it called “Silverlight, WCF, and ASP.Net Configuration Gotchas” which covers it very well.  So come on what’s the solution?! I heard you ask, unless you’ve already gone to the links and looked it up ;) The Solution Well, it turns out that the issue is founded in a mix of Silverlight, Asp.Net and WCF, basically if you’re doing multiple calls to a single WCF web-service and you have Asp.Net session state enabled, the calls will be executed sequentially by the service, hence any long running calls will block subsequent ones. 
So why is Asp.Net session state effecting us, we’re working in Silverlight, right? We'll as mentioned earlier, by default Silverlight uses the browsers networking stack when doing service calls, hence to the WCF service, the call looks like it might as well be coming from a normal Asp.Net. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack. The Client HTTP Stack to the rescue By using the following syntax (for example in our App.xaml.cs, Application_Startup method) WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp); we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlights own networking stack, rather than that of the browser, we get around the Asp.Net - WCF session state issue. The above code specifies that all calls to addresses starting with “http://” should go through the client stack, this can actually be set more granular and you can specify it to be used only for certain domains etc. Summary The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe make it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola

    Read the article

  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the ancient confusing things in SQL. The precision of SmallDateTime is one concept that confuses a lot of people, proven by the many messages I receive every day relating to this subject. Let me start with the question: What is the precision of the SMALLDATETIME datatype? What is your answer? Write it down on your notepad. Now if you do not want to continue reading the blog post, head to my previous blog post over here: SQL SERVER – Precision of SMALLDATETIME. A Social Media Question Since the increase in social media conversations, I have noticed that the number of comments I receive on this blog is a bit staggering. I receive lots of questions on Facebook, Twitter or Google+. One of the very interesting questions yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and asking all of the questions he has asked me. Let us see if we can help him with his question: CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY registered DESC GO DROP TABLE #temp GO Now when the above script is run, we get the following result: Well, the expectation of the query was to have the following result. The row which was inserted last was expected to be returned as the first row in the result set because of the descending ORDER BY. Side note: Because the requirement is to get the latest data, we can’t use any column other than the smalldatetime column in the ORDER BY. If we use the name column in the ORDER BY, we will get an incorrect result as it can be any name. My Initial Reaction My initial reaction was as follows: 1) DataType DateTime2: If finer precision is expected from the column which stores date and time, it should not be smalldatetime. The precision of the smalldatetime column is one minute (Read Here); for finer precision use the DateTime or DateTime2 data type. Here is the code which includes the above suggestion: CREATE TABLE #temp (name VARCHAR(100), registered datetime2) GO DECLARE @test datetime2 SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY registered DESC GO DROP TABLE #temp GO 2) Tie Breaker Identity: There is always the possibility that two rows were inserted at the same time. In that case, you may need a tie breaker. If you have an increasing identity column, you can use that as a tie breaker as well. CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100),registered datetime2) GO DECLARE @test datetime2 SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY ID DESC GO DROP TABLE #temp GO Those two were the quick suggestions I provided. It is not necessary to use both pieces of advice. It is possible to use only the DATETIME datatype, or the identity column can have a datatype of BIGINT, or you can have another tie breaker.
An Alternate NO Solution In the Facebook thread this was also discussed as one of the solutions: CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO DROP TABLE #temp GO However, I believe it is not a real solution and can be misleading if used on a production server. Here is an example of why it is not a good solution: CREATE TABLE #temp (name VARCHAR(100) NOT NULL,registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO -- Before Index SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO -- Create Index ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC) GO -- After Index SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO DROP TABLE #temp GO Now let us examine the resultset. You will notice that creating an index on the base table is (indeed) a schema change, and it can affect the resultset. As you can see, an index can change the resultset, so this method is not a reliable way to get the latest inserted rows. No Schema Change Requirement After giving these two suggestions, I was waiting for the feedback of the asker. However, the asker's requirement was that there can't be any schema change, because the application was used by many other applications. I validated again, and of course, the requirement is no schema change at all. No addition of columns, and no change of datatypes of any other columns. There is no further flexibility either. This is indeed an interesting question. I personally can't think of any solution which I could provide him given the requirement of no schema change. Can you think of any other solution to this? Need of a Database Designer This question once again brings up another ancient question: "Do we need a database designer?" I often come across databases which are facing major performance problems or have redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. Often I come across a database which has tables with unnecessary columns and performance problems. While working as Developer Lead in my earlier jobs, I have seen developers adding columns to tables without anybody's consent and retrieving them with SELECT *. There is a lot to discuss on this subject in detail, but for now, let's discuss the question first. Do you have any suggestions for the above question? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: CodeProject, Developer Training, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Autoscaling in a modern world&hellip;. Part 2

    - by Steve Loethen
    When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand, reactions that were specified by a set of rules.  Let's talk about those rules. Constraints.  The first set of rules this application answered to were the constraints. Here is what they looked like: <constraintRules> <rule name="default" enabled="true" rank="1" description="The default constraint rule"> <actions> <range min="1" max="4" target="AutoscalingApplicationRole"/> </actions> </rule> </constraintRules> Pretty basic.  We have one role, the "AutoscalingApplicationRole", and we have decided to have it live within a range of 1 to 4.  This rule does not adjust, but instead sets limits on what other rules can do.  It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set.  But for now, let's keep it simple.  In the real world, you would probably use the minimum to set a lower end SLA.  A common value might be 2, to prevent the reactive rules from ever taking you down to 1 role.  The maximum is often used to keep a rule from driving the cost up, setting an upper limit to prevent you from waking up one morning and finding a bill for hundreds of instances you didn't expect.  So, here we have the range we want our application to live inside.  This is good for our investigation and testing.  Next, let's take a look at the reactive rules.  These rules are what you use to react (hence reactive rules) to changing demands on your application.  The HOL has two simple rules: one that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization.  The XML in the rules file looks like this: <reactiveRules> <rule name="ScaleUp" rank="10" description="Scale Up the web role" enabled="true"> <when> <any> <greaterOrEqual operand="Length_05_holqueue" than="10"/> <greaterOrEqual operand="CPU_05_holwebrole" than="65"/> </any> </when> <actions> <scale target="AutoscalingApplicationRole" by="1"/> </actions> </rule> <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true"> <when> <all> <less operand="Length_05_holqueue" than="5"/> <less operand="CPU_05_holwebrole" than="40"/> </all> </when> <actions> <scale target="AutoscalingApplicationRole" by="-1"/> </actions> </rule> </reactiveRules> <operands> <performanceCounter alias="CPU_05_holwebrole" performanceCounterName="\Processor(_Total)\% Processor Time" source="AutoscalingApplicationRole" timespan="00:05:00" aggregate="Average" /> <queueLength alias="Length_05_holqueue" queue="hol-queue" timespan="00:05:00" aggregate="Average"/> </operands> These rules are currently contained in a file called rules.xml that is in the root of the console application.  The console app starts up, grabs the rules and starts watching the 2 operands.  When it detects a rule has been satisfied, it performs the desired action (here, scale up or down by 1). But I want to host the autoscaler in the cloud.  For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage.  Look for part 3.
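As a footnote, to show how the same rule vocabulary composes, a purely hypothetical extra reactive rule (not part of the HOL, thresholds invented for illustration) that scales up by two instances when the queue gets very deep could look like this:

    <rule name="ScaleUpFast" rank="20" description="Scale up quickly on a very deep queue" enabled="true">
      <when>
        <any>
          <greaterOrEqual operand="Length_05_holqueue" than="50"/>
        </any>
      </when>
      <actions>
        <scale target="AutoscalingApplicationRole" by="2"/>
      </actions>
    </rule>

The constraint rule shown earlier still wins: no matter what a reactive rule asks for, the role count stays inside the min and max of 1 and 4.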

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #001

    - by pinaldave
    I am introducing a new series today.  This series is called “Memory Lane.”  From the last six years and 2,300 articles, there are fantastic articles I keep revisiting.  Sometimes when I read old blog posts I think I should have included something or added a bit more to the topic.  But for many articles, I still feel they are fantastic (even after six years) and could be read again and again. I have also found that after six years of blogging, readers will write to me and say “Pinal, why don’t you write about X, Y or Z.”  The answer is: I already did!  It is here on the blog, or in the comments, or possibly in one of my books.  The solution has always been there, it is simply a matter of finding it and presenting it again.  That is why I have created Memory Lane.  I will be listing the best articles from the same week of the past six years.  You will find plenty of reading material every Saturday from articles of SQLAuthority past. Here is the list of curetted articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2006 Query to Display Foreign Key Relationships and Name of the Constraint for Each Table in Database My blogging journey began with this blog post. As many of you know my journey began with creating a repository of my scripts. This was very first script which I had written to find out foreign key relationship and constraints. The same query was updated later on using the new SYS schema modification in SQL Server. Version 1: Using sys.schema Version 2: Using sys.schema and additional columns 2007 Milestone Posts – 1 Year (365 blogs) and 1 Million Views When I reached 1st week of Nov in 2007 SQLAuthority.com blog had around 365 blog posts and 1 Million Views. I was not obsessed with the statistics before but this was indeed an interesting moment for me as I was blogging for myself and did not realize that so many people are reading my blog. In year 2006 there were not many bloggers so blogging was new to me as well. I was learning it as I go. 2008 Stored Procedure WITH ENCRYPTION and Execution Plan If you have stored procedure and its code is encrypted when you execute it what will be displayed in the execution plan. There are two kinds of execution plans 1) Estimated and 2) Actual. It will be indeed interesting to know what is displayed in both the cases when Stored Procedure is encrypted. What is your guess? Now go ahead and click on here and figure out your answer. If the user is not able to login into SQL Server due to any error or issues there were two different blog post addresses the same issue here and here. 2009 It seems like Nov is the month of SQLPASS month. In 2009 on the same week I was in USA attending SQLPASS event. I had a fantastic experience attending the event. Here are the blog posts covering the subject Day 1, Day 2, Day 3, Day 4 2010 Finding the last backup time for all the databases This little script is very powerful and instantly gives details when was the last time your database backup performed. If you are reading this blog post – I say just go ahead and check if everything is alright on your server and you have all the necessary latest backup. It is better to be safe than sorrow. 
Version 1: Above script was improved to get more details about the database Version 2: This version of the script will include pretty much have all the backup related information in a single script. Do not miss to save it for future use. Are you a Database Administrator or a Database Developer? Three years ago I created a very small survey and the results which I have received are very interesting. The question was asking what is the profile of the visitor of that blog post and I noticed that DBA and Developers have balanced with little inclination towards Developers. Have you voted so far? If not, go ahead! 2011 New Book Released – SQL Server Interview Questions And Answers One year ago, on November 3, 2011 I published my book SQL Server Interview Questions and Answers.  The book has a lot of great reviews, and we have even received emails telling us this book was a life changer because it helped get them a great new job.  I don’t think anyone can get a job just from my book.  It was the individual who studied hard and took it seriously, and was determined to learn something new.  The book might have helped guide them and show them the topics to study, but they spent their own energy on it.  It was their own skills that helped them pass the exam. So, in this very first installment, I would like to thank the readers for accepting our book, for giving it great reviews and for using it and sharing it.  Our goal in writing this book was to help others, and it seems like we succeeded. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
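As a side note for anyone hunting for the foreign key script mentioned in the 2006 item above, a rough sketch of the sys.* catalog-view approach looks something like the following (the column aliases are mine; see the linked Version 1 and Version 2 posts for the definitive scripts):

    SELECT fk.name AS ConstraintName,
           tp.name AS ParentTable,
           cp.name AS ParentColumn,
           tr.name AS ReferencedTable,
           cr.name AS ReferencedColumn
    FROM sys.foreign_keys fk
    INNER JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
    INNER JOIN sys.tables tp ON fkc.parent_object_id = tp.object_id
    INNER JOIN sys.columns cp ON fkc.parent_object_id = cp.object_id AND fkc.parent_column_id = cp.column_id
    INNER JOIN sys.tables tr ON fkc.referenced_object_id = tr.object_id
    INNER JOIN sys.columns cr ON fkc.referenced_object_id = cr.object_id AND fkc.referenced_column_id = cr.column_id
    ORDER BY tp.name, fk.name;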

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for it's success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-side availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS is abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture moves to the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then creating documents that  let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR) then this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in. 
Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials: Sign Up for Windows Azure; Add Windows Azure to a RightScale Account; Windows Azure Virtual Machines 3-tier Deployment. Momma Bear Level - Just the right level... ;0): Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution. Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress). Baby Bear Level - Marketing: Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos; Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better; SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines.

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions.  On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume.  These things seem so interchangeable, so what is the difference? Let’s start with the one most of us have heard of – the resume.  A resume is a summary of your job and education history.  If you have ever applied for a job, you will have used a resume.  The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world.  For such an essential skill, unfortunately it is one that many people struggle with. RESUME So let’s discuss what makes a great resume.  First, make sure that your name and contact information are at the top, in large print (slightly larger font than the rest of the text, size 14 or 16 if the rest is size 12, for example).  You need to make sure that if you catch the recruiter’s attention and they know how to get a hold of you. As for qualifications, be quick and to the point.  Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points.  Use good action verbs, like “finished,” “arranged,” “solved,” and “completed.”  Include hard numbers – don’t just say you “changed the filing system,” say that you “revolutionized the storage of over 250 files in less than five days.”  Doesn’t that sentence sound much more powerful? Curriculum Vitae (CV) Now let’s talk about curriculum vitae, or “CVs”.  A CV is more like an expanded resume.  The same rules are still true: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points.  However, CVs are often required in more technical fields – like science, engineering, and computer science.  This means that you need to really highlight your education and technical skills. Difference between Resume and CV Resumes are expected to be one or two pages long – CVs can be as many pages as necessary.  If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you!  On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable).  You can also include awards, associations, teaching and research experience, and certifications.  A CV is a place to really expand on all your experience and how great you will be in this particular position. Biography (Bio) Chances are, you already know what a bio is, and you have even read a few of them.  Think about the one or two paragraphs that every author includes in the back flap of a book.  Think about the sentences under a blogger’s photo on every “About Me” page.  That is a bio.  It is a way to quickly highlight your life experiences.  It is essentially the way you would introduce yourself at a party. Where a bio is required for a job, chances are they won’t want to know about where you were born and how many pets you have, though.  This is a way to summarize your entire job history in quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter’s interest is the best way to get your foot in the door.  
Think of a bio as your entire resume put into words. Most bios have a standard format.  In paragraph one, talk about your most recent position and accomplishments there, specifically how they relate to the job you are applying for.  If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two.  Paragraph three and four are for highlighting publications, education, certifications, associations, etc.  To wrap up your bio, provide your contact info and availability (dates and times). Where to use What? For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc.  If there is any confusion, choose whatever the industry standard is (CV for technical fields, resume for everything else) or choose which of your documents is the strongest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
    Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single dimension, zero-indexed) have their own commands and IL syntax. One of their stranger properties is they have a kind of built-in covariance long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need to briefly recap on array covariance. SZ-array covariance To demonstrate, I'll tweak the classes I introduced in my previous posts: public class IncrementableClass { public int Value; public virtual void Increment(int incrementBy) { Value += incrementBy; } } public class IncrementableClassx2 : IncrementableClass { public override void Increment(int incrementBy) { base.Increment(incrementBy); base.Increment(incrementBy); } } In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever a IncrementableClass[] or object[] is required. When an SZ-array could be used in this fashion, a run-time type check is performed when you try to insert an object into the array to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time: IncrementableClass[] array = new IncrementableClassx2[1]; array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException These checks are enforced by the various stelem* and ldelem* il instructions in such a way as to ensure you can't insert a IncrementableClass into a IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction. ldelema This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, the instruction's type argument must match the run-time array type exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and maintained array type-safety within managed code. However, along came generics, and with it the constrained callvirt instruction. So, what happens when we combine array covariance and constrained callvirt? .method public static void CallIncrementArrayValue() { // IncrementableClassx2[] arr = new IncrementableClassx2[1] ldc.i4.1 newarr IncrementableClassx2 // arr[0] = new IncrementableClassx2(); dup newobj instance void IncrementableClassx2::.ctor() ldc.i4.0 stelem.ref // IncrementArrayValue<IncrementableClass>(arr, 0) // here, we're treating an IncrementableClassx2[] as IncrementableClass[] dup ldc.i4.0 call void IncrementArrayValue<class IncrementableClass>(!!0[],int32) // ... ret } .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 ldelema !!T ldc.i4.1 constrained. 
!!T callvirt instance void IIncrementable::Increment(int32) ret } And the result: Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array. at IncrementArrayValue[T](T[] arr, Int32 index) at CallIncrementArrayValue() Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], hence the ldelema instruction is failing as it's expecting an IncrementableClass[]. On features and feature conflicts What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this will only fail at runtime, whereas compile-time safety is what generics were designed for! The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so it lets us take the address of array elements using a covariant type to the actual run-time type of the array: .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 readonly. ldelema !!T ldc.i4.1 constrained. !!T callvirt instance void IIncrementable::Increment(int32) ret } But what about type safety? In return for ignoring the type check, the resulting controlled mutability pointer can only be used in the following situations: As the object parameter to ldfld, ldflda, stfld, call and constrained callvirt instructions As the pointer parameter to ldobj or ldind* As the source parameter to cpobj In other words, the only operations allowed are those that read from the pointer; stind* and similar that alter the pointer itself are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that compile-time type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.
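As an addendum for readers more at home in C# than IL, a rough C# equivalent of the generic method above is shown below. This is only a sketch of the shape of the code; the exact IL the C# compiler emits for it (including whether it uses the readonly. prefix discussed above) is an implementation detail of the compiler.

    public static void IncrementArrayValue<T>(T[] arr, int index)
        where T : IncrementableClass
    {
        // Reads the element and calls a virtual method on it; the element is never
        // replaced, so this is safe even when arr is really an IncrementableClassx2[]
        // passed in through array covariance.
        arr[index].Increment(1);
    }

    // Usage sketch:
    // IncrementableClass[] covariant = new IncrementableClassx2[] { new IncrementableClassx2() };
    // IncrementArrayValue(covariant, 0);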

    Read the article

  • Database Rebuild

    - by Robert May
    I promised I’d have a simpler mechanism for rebuilding the database.  Below is a complete MSBuild targets file for rebuilding the database from scratch.  I don’t know if I’ve explained the rational for this.  The reason why you’d WANT to do this is so that each developer has a clean version of the database on their local machine.  This also includes the continuous integration environment.  Basically, you can do whatever you want to the database without fear, and in a minute or two, have a completely rebuilt database structure. DBDeploy (including the KTSC build task for dbdeploy) is used in this script to do change tracking on the database itself.  The MSBuild ExtensionPack is used in this target file.  You can get an MSBuild DBDeploy task here. There are two database scripts that you’ll see below.  First is the task for creating an admin (dbo) user in the system.  This script looks like the following: USE [master] GO If not Exists (select Name from sys.sql_logins where name = '$(User)') BEGIN CREATE LOGIN [$(User)] WITH PASSWORD=N'$(Password)', DEFAULT_DATABASE=[$(DatabaseName)], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF END GO EXEC master..sp_addsrvrolemember @loginame = N'$(User)', @rolename = N'sysadmin' GO USE [$(DatabaseName)] GO CREATE USER [$(User)] FOR LOGIN [$(User)] GO ALTER USER [$(User)] WITH DEFAULT_SCHEMA=[dbo] GO EXEC sp_addrolemember N'db_owner', N'$(User)' GO The second creates the changelog table.  This script can also be found in the dbdeploy.net install\scripts directory. CREATE TABLE changelog ( change_number INTEGER NOT NULL, delta_set VARCHAR(10) NOT NULL, start_dt DATETIME NOT NULL, complete_dt DATETIME NULL, applied_by VARCHAR(100) NOT NULL, description VARCHAR(500) NOT NULL ) GO ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set) GO Finally, Here’s the targets file. <Projectxmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Update">   <PropertyGroup>     <DatabaseName>TestDatabase</DatabaseName>     <Server>localhost</Server>     <ScriptDirectory>.\Scripts</ScriptDirectory>     <RebuildDirectory>.\Rebuild</RebuildDirectory>     <TestDataDirectory>.\TestData</TestDataDirectory>     <DbDeploy>.\DBDeploy</DbDeploy>     <User>TestUser</User>     <Password>TestPassword</Password>     <BCP>bcp</BCP>     <BCPOptions>-S$(Server) -U$(User) -P$(Password) -N -E -k</BCPOptions>     <OutputFileName>dbDeploy-output.sql</OutputFileName>     <UndoFileName>dbDeploy-output-undo.sql</UndoFileName>     <LastChangeToApply>99999</LastChangeToApply>   </PropertyGroup>     <ImportProject="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks"/>   <UsingTask TaskName="Ktsc.Build.DBDeploy" AssemblyFile="$(DbDeploy)\Ktsc.Build.dll"/>   <ItemGroup>     <VariableInclude="DatabaseName">       <Value>$(DatabaseName)</Value>     </Variable>     <VariableInclude="Server">       <Value>$(Server)</Value>     </Variable>     <VariableInclude="User">       <Value>$(User)</Value>     </Variable>     <VariableInclude="Password">       <Value>$(Password)</Value>     </Variable>   </ItemGroup>     <TargetName="Rebuild">     <!--Take the database offline to disconnect any users. Requires that the current user is an admin of the sql server machine.-->     <MSBuild.ExtensionPack.SqlServer.SqlCmd Variables="@(Variable)" Database="$(DatabaseName)" TaskAction="Execute" CommandLineQuery ="ALTER DATABASE $(DatabaseName) SET OFFLINE WITH ROLLBACK IMMEDIATE"/>         <!--Bring it back online.  
If you don't, the database files won't be deleted.-->     <MSBuild.ExtensionPack.Sql2008.DatabaseTaskAction="SetOnline" DatabaseItem="$(DatabaseName)"/>     <!--Delete the database, removing the existing files.-->     <MSBuild.ExtensionPack.Sql2008.DatabaseTaskAction="Delete" DatabaseItem="$(DatabaseName)"/>     <!--Create the new database in the default database path location.-->     <MSBuild.ExtensionPack.Sql2008.DatabaseTaskAction="Create" DatabaseItem="$(DatabaseName)" Force="True"/>         <!--Create admin user-->     <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" InputFiles="$(RebuildDirectory)\0002 Create Admin User.sql" Variables="@(Variable)" />     <!--Create the dbdeploy changelog.-->     <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(RebuildDirectory)\0003 Create Changelog.sql" Variables="@(Variable)" />     <CallTarget Targets="Update;ImportData"/>     </Target>    <TargetName="Update" DependsOnTargets="CreateUpdateScript">     <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(OutputFileName)" Variables="@(Variable)" />   </Target>   <TargetName="CreateUpdateScript">     <ktsc.Build.DBDeploy DbType="mssql"                                        DbConnection="User=$(User);Password=$(Password);Data Source=$(Server);Initial Catalog=$(DatabaseName);"                                        Dir="$(ScriptDirectory)"                                        OutputFile="..\$(OutputFileName)"                                        UndoOutputFile="..\$(UndoFileName)"                                        LastChangeToApply="$(LastChangeToApply)"/>   </Target>     <TargetName="ImportData">     <ItemGroup>       <TestData Include="$(TestDataDirectory)\*.dat"/>     </ItemGroup>     <ExecCommand="$(BCP) $(DatabaseName).dbo.%(TestData.Filename) in&quot;%(TestData.Identity)&quot;$(BCPOptions)"/>   </Target> </Project> Technorati Tags: MSBuild
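Not shown above are the numbered change scripts that dbdeploy picks up from the ScriptDirectory and records in the changelog table, so each one is applied exactly once. Assuming the Scripts folder follows the same numeric-prefix naming convention as the Rebuild scripts (check what your dbdeploy version expects), a hypothetical first change script might look like this:

    -- Scripts\0001 Create Customer Table.sql  (hypothetical example)
    CREATE TABLE Customer
    (
        CustomerId INT IDENTITY(1,1) NOT NULL,
        Name NVARCHAR(100) NOT NULL,
        CONSTRAINT PK_Customer PRIMARY KEY (CustomerId)
    )
    GO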

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied.  Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude.  Yes, the speakers are the function, after all they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.  
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • Tip #19 Module Private Visibility in OSGi

    - by ByronNevins
    I hate public and protected methods and classes.  It requires so much work to change them in a huge project like GlassFish.  Not to mention that you may well have to support those APIs forever.  They are highly overused in GlassFish.  In fact I'd bet that > 95% of classes are marked as public for no good reason.  It's just (bad) habit is my guess. private and default visibility (I call it package-private) is easier to maintain.  It is much much easier to change such classes and methods around.  If you have ANY public method or public class in GlassFish you'll need to grep through a tremendous amount of source code to find all callers.  But even that won't be theoretically reliable.  What if a caller is using reflection to access public methods?  You may never find such usages. If you have package private methods, it's easy.  Simply grep through all the code in that one package.  As long as that package compiles ok you're all set.  There can' be any compile errors anywhere else.  It's a waste of time to even look around or build the "outside" world.  So you may be thinking: "Aha!  I'll just make my module have one giant package with all the java files.  Then I can use the default visibility and maintenance will be much easier.  But there's a problem.  You are wasting a very nice feature of java -- organizing code into separate packages.  It also makes the code much more encapsulated.  Unfortunately to share code between the packages you have no choice but to declare public visibility. What happens in practice is that a module ends up having tons of public classes and methods that are used exclusively inside the module.  Which finally brings me to the point of this blog:  If Only There Was A Module-Private Visibility Available Well, surprise!  There is such a mechanism.  If your project is running under OSGi that is.  Like GlassFish does!  With this mechanism you can easily add another level of visibility by telling OSGi exactly which public you want to be exposed outside of the module.  You get the best of both worlds: Better encapsulation of your code so that maintenance is easier and productivity is increased. Usage of public visibility inside the module so that you can encapsulate intra-module better with packages. How I do this in GlassFish: Carefully plan out at least one package that will contain "true" publics.  This is the package that will be exported by OSGi.  I recommend just one package. Here is how to tell OSGi to use it in GlassFish -- edit osgi.bundle like so:-exportcontents:     org.glassfish.mymodule.truepublics;  version=${project.osgi.version} Now all publics declared in any other packages will be visible module-wide but not outside the module. There is one caveat: Accessing "module-private" items outside of the module is controlled at run-time, not compile-time.  The compiler has no clue that a public in a dependent module isn't really public.  it will happily compile it.  At runtime you will definitely see fireworks.  The good news is that you don't have to wait for the code path that tries to use the "module-private" items to fire.  OSGi will complain loudly when that module gets loaded.  OSGi will refuse to load it.  
You will see an error like this: remote failure: Error while loading FOO: Exception while adding the new configuration : Error occurred during deployment: Exception while loading the app : org.osgi.framework.BundleException: Unresolved constraint in bundle com.oracle.glassfish.miscreant.code [115]: Unable to resolve 115.0: missing requirement [115.0] osgi.wiring.package; (osgi.wiring.package=org.glassfish.mymodule.unexported). Please see server.log for more details. That is if you accidentally change code in module B to use a public that is really a "module-private" in module A, then you will see the error immediately when you try to test whatever you were changing in module B.
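To make the pattern concrete, here is a small hypothetical sketch (package and class names invented) of two source files. Only the first package is listed in -exportcontents, so its publics are the module's real API; the second package's publics are visible everywhere inside the module but hidden from other bundles at run time.

    // File 1: org/glassfish/mymodule/truepublics/MyModuleFacade.java
    package org.glassfish.mymodule.truepublics;

    import org.glassfish.mymodule.internal.Worker;

    public class MyModuleFacade {
        public String doWork() {
            // Worker is public, so any package in this module can use it,
            // but OSGi refuses to wire it to other bundles because its
            // package is not exported.
            return new Worker().work();
        }
    }

    // File 2: org/glassfish/mymodule/internal/Worker.java
    package org.glassfish.mymodule.internal;

    public class Worker {
        public String work() {
            return "module-private work";
        }
    }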

    Read the article

  • SQLiteDataAdapter Fill exception

    - by Lirik
    I'm trying to use the OleDb CSV parser to load some data from a CSV file and insert it into a SQLite database, but I get an exception with the OleDbAdapter.Fill method and it's frustrating: An unhandled exception of type 'System.Data.ConstraintException' occurred in System.Data.dll Additional information: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. Here is the source code: public void InsertData(String csvFileName, String tableName) { String dir = Path.GetDirectoryName(csvFileName); String name = Path.GetFileName(csvFileName); using (OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir + @";Extended Properties=""Text;HDR=No;FMT=Delimited""")) { conn.Open(); using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM " + name, conn)) { QuoteDataSet ds = new QuoteDataSet(); adapter.Fill(ds, tableName); // <-- Exception here InsertData(ds, tableName); // <-- Inserts the data into the my SQLite db } } } class Program { static void Main(string[] args) { SQLiteDatabase target = new SQLiteDatabase(); string csvFileName = "D:\\Innovations\\Finch\\dev\\DataFeed\\YahooTagsInfo.csv"; string tableName = "Tags"; target.InsertData(csvFileName, tableName); Console.ReadKey(); } } The "YahooTagsInfo.csv" file looks like this: tagId,tagName,description,colName,dataType,realTime 1,s,Symbol,symbol,VARCHAR,FALSE 2,c8,After Hours Change,afterhours,DOUBLE,TRUE 3,g3,Annualized Gain,annualizedGain,DOUBLE,FALSE 4,a,Ask,ask,DOUBLE,FALSE 5,a5,Ask Size,askSize,DOUBLE,FALSE 6,a2,Average Daily Volume,avgDailyVolume,DOUBLE,FALSE 7,b,Bid,bid,DOUBLE,FALSE 8,b6,Bid Size,bidSize,DOUBLE,FALSE 9,b4,Book Value,bookValue,DOUBLE,FALSE I've tried the following: Removing the first line in the CSV file so it doesn't confuse it for real data. Changing the TRUE/FALSE realTime flag to 1/0. I've tried 1 and 2 together (i.e. removed the first line and changed the flag). None of these things helped... One constraint is that the tagId is supposed to be unique. Here is what the table look like in design view: Can anybody help me figure out what is the problem here? Update: I changed the HDR property from HDR=No to HDR=Yes and now it doesn't give me an exception: OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir + @";Extended Properties=""Text;HDR=Yes;FMT=Delimited"""); I assumed that if HDR=No and I removed the header (i.e. first line), then it should work... strangely it didn't work. In any case, now I'm no longer getting the exception. The new problem is here: public void InsertData(QuoteDataSet data, String tableName) { using (SQLiteConnection conn = new SQLiteConnection(_connectionString)) { conn.Open(); using (SQLiteDataAdapter sqliteAdapter = new SQLiteDataAdapter("SELECT * FROM " + tableName, conn)) { Console.WriteLine("Num rows updated is " + sqliteAdapter.Update(data, tableName)); } } } Now the Num rows updated is 0... any hints?
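A hedged guess at the remaining zero-rows problem: after OleDbDataAdapter.Fill the rows arrive in the Unchanged state, and the SQLiteDataAdapter has no InsertCommand, so Update finds nothing to insert and no way to insert it. A sketch of the usual fix (untested against this particular schema) is to keep the filled rows in the Added state and let a command builder supply the insert command:

    // When filling from the CSV: keep rows marked as Added instead of Unchanged
    adapter.AcceptChangesDuringFill = false;
    adapter.Fill(ds, tableName);

    // When writing to SQLite: a command builder generates the InsertCommand
    using (SQLiteDataAdapter sqliteAdapter = new SQLiteDataAdapter("SELECT * FROM " + tableName, conn))
    using (SQLiteCommandBuilder builder = new SQLiteCommandBuilder(sqliteAdapter))
    {
        Console.WriteLine("Num rows updated is " + sqliteAdapter.Update(data, tableName));
    }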

    Read the article

  • Soapui & populating database before

    - by mada
    SoapUi & database data needed Hi, In a J2ee project( spring ws + hibernate +postgresql+maven 2 + tomcat),We have created webservices with full crud operations. We are currently testing them with SoapUi (not yet integrated in the integration-test maven phase ). We need tomcat to test them. Actually before the phase test, with a maven plugin, only one time, some datas are injected in the database before the launch of all tests through many sql files. But the fact is that to test some webservices i need some data entry & their Id (primary key) A- What is the best way to create them ? 1- populate one of the previous .sql test file. But now we have a big dependency where i have to be careful to use the same id use in a insert in the sql file in SoapUi too. And if any developper drop the value, the test will broke. 2- create amongst the .sql file test a soapui.sql where ALL data needed for the Sopaui test will be concentrated. The advantage is that the maintenance will be much more easy. 3-because i have Ws available for all crud operations, i can create before any testsuite a previous testsuite called setup-testsuiteX. In this one, i will use those Ws to inject datas in the databse & save id in variables thanks to the feature "property transfer" 4-launch the soapui test xml file with a junit java test & in the Setup Method (we are using spring test), we will populate the database with DbUnit. Drawback: they are already data in the database & some conflicts may appear due to constraint.And now i have a dependency between: - dbUnit Xml file (with the value primary key) - sopaui test where THOSe value of primarykey will be used advantage: A automatic rollback is possible for DbUnit Data but i dont know how it will react because those data have been used with the Ws. 5- your suggestion ? better idea ? Testing those Ws create datas in the DB. How is the best way to remove them at the end ? (in how daostest we were using a automatic rollback with spring test to let the database clean) Thanks in advance for your hep & sorry for my english, it is not my mother thongue. Regards.

    Read the article

  • How to make a Custom Data Generator for SQL XML DataType.

    - by Keith Sirmons
    Howdy, I am using Visual Studio 2010 and am playing around with the Database Projects. I am creating a DataGenerationPlan to insert data into a simple table, in which one of the column datatypes is XML. Out of the box, the generation plan uses the Regular Expression generator and generates something like this : HGcSv9wa7yM44T9x5oFT4pmBkEmv62lJ7OyAmCnL6yqXC2X.......... I am looking at creating a custom data Generator for this data type and have followed this site for the basics: http://msdn.microsoft.com/en-us/library/aa833244.aspx This example works if I am creating a string datatype and using it for a nvarchar datatype. What do I need to change to hook this Generator to the XML Datatype? Below are my code files. The string property works for nvarchar. The XElement property does not work for the xml datatype, and the RecordXMLDataGenerator is not listed as an option in the Generator column for the generation plan. CustomDataGenerators: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Data.Schema.Tools.DataGenerator; using Microsoft.Data.Schema.Extensibility; using Microsoft.Data.Schema; using Microsoft.Data.Schema.Sql; using System.Xml.Linq; namespace CustomDataGenerators { [DatabaseSchemaProviderCompatibility(typeof(SqlDatabaseSchemaProvider))] public class RecordXMLDataGenerator : Generator { private XElement _RecordData; [Output(Description = "Generates string of XML Data for the Record.", Name = "RecordDataString")] public string RecordDataString { get { return _RecordData.ToString(SaveOptions.None); } } [Output(Description = "Generates XML Data for the Record.", Name = "RecordData")] public XElement RecordData { get { return _RecordData; } } protected override void OnGenerateNextValues() { base.OnGenerateNextValues(); XElement element = new XElement("Root", new XElement("Children1", 1), new XElement("Children6", 6) ); _RecordData = element; } } } XML Extensions File: <?xml version="1.0" encoding="utf-8" ?> <extensions assembly="" version="1" xmlns="urn:Microsoft.Data.Schema.Extensions" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:Microsoft.Data.Schema.Extensions Microsoft.Data.Schema.Extensions.xsd"> <extension type="CustomDataGenerators.RecordXMLDataGenerator" assembly="CustomDataGenerators, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxx" enabled="true"/> </extensions> Table.sql: CREATE TABLE [dbo].[Record] ( id int IDENTITY (1,1) NOT NULL, recordData xml NULL, userId int NULL, test nvarchar(max) NULL, rowver rowversion NULL, CONSTRAINT pk_RecordID PRIMARY KEY (id) )

    Read the article

  • What .Net Namespace contains Entity for use in a generic repository?

    - by Sara
    I have a question that I'm ashamed to ask, but I'm going to have a go at it anyway. I am creating a generic repository in asp.net mvc. I came across an example on this website which I find to be exactly what I was looking for, but there is one problem. It references an object - Entity - and I don't know what namespace it is in. I typically create my repositories and use Entity Framework but I decided to use a generic repository because I am using the same code in multiple projects over and over again. Here is the code: public interface IRepository { void Save<ENTITY>(ENTITY entity) where ENTITY : Entity; void Delete<ENTITY>(ENTITY entity) where ENTITY : Entity; ENTITY Load<ENTITY>(int id) where ENTITY : Entity; IQueryable<ENTITY> Query<ENTITY>() where ENTITY : Entity; IList<ENTITY> GetAll<ENTITY>() where ENTITY : Entity; IQueryable<ENTITY> Query<ENTITY>(IDomainQuery<ENTITY> whereQuery) where ENTITY : Entity; ENTITY Get<ENTITY>(int id) where ENTITY : Entity; IList<ENTITY> GetObjectsForIds<ENTITY>(string ids) where ENTITY : Entity; void Flush(); } Can someone please tell me what namespace Entity is in? As you can tell, a constraint is placed on the code so that it must be an Entity type. I know that there is an Entity in System.Data.Entity, but that isn't what I need. I have had instances before where I was looking for some namespace that took me forever to find, but I have searched and I'm unable to find the appropriate namespace to cast my generic items correctly. I could cast it as a class and be done with it, but it is bugging me that I can't find Entity anywhere. Can someone help me....please..... :-) Here is a link to the original post. http://stackoverflow.com/questions/1472719/asp-net-mvc-how-many-repositories
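For what it is worth, a likely answer (an assumption based on common generic-repository samples, not confirmed by the linked article) is that Entity is not a framework type at all, but a small base class defined in the sample's own domain layer, along these lines:

    // Hypothetical domain base class - you define it yourself rather than importing a namespace
    public abstract class Entity
    {
        public int Id { get; protected set; }
    }

    // Every persistent class derives from it, which is what satisfies the
    // "where ENTITY : Entity" constraint in IRepository
    public class Customer : Entity
    {
        public string Name { get; set; }
    }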

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45  | Next Page >