Search Results

Search found 2495 results on 100 pages for 'mssql maintenance'.


  • T-SQL selecting values that match ISNUMERIC and also are within a specified range (plus Linq-to-sql)

    - by Toby
    I am trying to select rows from a table where one of the (NVARCHAR) columns is within a numeric range.

        SELECT ID, Value FROM Data
        WHERE ISNUMERIC(Value) = 1 AND CONVERT(FLOAT, Value) < 66.6

    Unfortunately, as part of the SQL spec, the AND clauses don't have to short-circuit (and don't on MSSQL Server EE 2008). More info: http://stackoverflow.com/questions/789231/is-the-sql-where-clause-short-circuit-evaluated
    My next attempt was to try this, to see if I could achieve delayed evaluation of the CONVERT:

        SELECT ID, Value FROM Data
        WHERE (CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) < 66.6 ELSE 0 END)

    but I cannot seem to use a < (or any comparison) with the result of a CONVERT. It fails with the error: Incorrect syntax near '<'. I can get away with

        SELECT ID, CONVERT(FLOAT, Value) AS Value FROM Data
        WHERE ISNUMERIC(Value) = 1

    So the obvious solution is to wrap the whole select statement in another SELECT and WHERE, return the converted values from the inner select, and filter in the WHERE of the outer select. Unfortunately this is where my Linq-to-sql problem comes in. I am filtering not only by one range but potentially by many, or just by the existence of the record (there are some date range selects and comparisons I've left out). Essentially I would like to be able to generate something like this:

        SELECT ID, TypeID, Value FROM Data
        WHERE (TypeID = 4 AND ISNUMERIC(Value) = 1 AND CONVERT(Float, Value) < 66.6)
           OR (TypeID = 8 AND ISNUMERIC(Value) = 1 AND CONVERT(Float, Value) > 99)
           OR (TypeID = 9)

    (With some other clauses in each of those WHERE options.) This clearly doesn't work if I filter out the non-ISNUMERIC values in an inner select. As I mentioned, I am using Linq-to-sql (and PredicateBuilder) to build up these queries, but unfortunately

        Datas.Where(x => ISNUMERIC(x.Value) ? Convert.ToDouble(x.Value) < 66.6 : false)

    gets converted to this, which runs into the initial problem:

        WHERE (ISNUMERIC([t0].[Value]) = 1) AND ((CONVERT(Float,[t0].[Value])) < @p0)

    My last resort will have to be to outer join against a double select on the same table for each of the comparisons, but this isn't really an ideal solution. I was wondering if anyone has run into similar issues before?
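    One way around the syntax error in the second attempt is to keep the comparison outside the CASE (a CASE expression returns a value; it cannot contain a predicate). Below is a minimal, hedged sketch of how that SQL could be issued from LINQ to SQL via DataContext.ExecuteQuery, since ISNUMERIC has no direct LINQ translation; the Data/ID/TypeID/Value names come from the question, while MyDataContext and the FilteredRow result class are assumptions:

        using System.Collections.Generic;
        using System.Linq;

        public class FilteredRow
        {
            public int ID { get; set; }
            public int TypeID { get; set; }
            public string Value { get; set; }
        }

        public static List<FilteredRow> LoadFilteredRows(MyDataContext db)
        {
            // CASE evaluates ISNUMERIC before CONVERT; non-numeric rows yield NULL,
            // and NULL < 66.6 is never true, so they drop out of the result.
            // Caveat: ISNUMERIC accepts tokens such as '.' or '$' that still fail CONVERT(FLOAT, ...).
            return db.ExecuteQuery<FilteredRow>(
                @"SELECT ID, TypeID, Value FROM Data
                  WHERE (TypeID = {0} AND
                         CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) END < {1})
                     OR (TypeID = {2})",
                4, 66.6, 9).ToList();
        }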

    Read the article

  • Linq To Sql Concat() dropping fields in created TSQL

    - by user191468
    This is strange. I am moving a stored proc to a service. The TSQL unions multiple selects. To replicate this I created multiple queries resulting in a common new concrete type. Then I issue a return result.ToString(); and the resulting SQL selects have varying numbers of columns specified thus causing an MSSQL Msg 205... using (var db = GetDb()) { var fundInv = from f in db.funds select new Investments { Company = f.company, FullName = f.fullname, Admin = f.admin, Fund = f.fund1, FundCode = f.fundcode, Source = STR_FUNDS, IsPortfolio = false, IsActive = f.active, Strategy = f.strategy, SubStrategy = f.substrategy, AltStrategy = f.altstrategy, AltSubStrategy = f.altsubstrategy, Region = f.region, AltRegion = f.altregion, UseAlternate = f.usealt, ClassesAllowed = f.classallowed }; var stocksInv = from s in db.stocks where !fundInv.Select(f => f.Company).Contains(s.vehcode) select new Investments { Company = s.company, FullName = s.issuer, Admin = STR_PRS, Fund = s.shortname, FundCode = s.vehcode, Source = STR_STOCK, IsPortfolio = false, IsActive = (s.inactive == null), Strategy = s.style, SubStrategy = s.substyle, AltStrategy = s.altstyle, AltSubStrategy = s.altsubsty, Region = s.geography, AltRegion = s.altgeo, UseAlternate = s.usealt, ClassesAllowed = STR_GENERIC }; var bondsInv = from oi in db.bonds where !fundInv.Select(f => f.Company).Contains(oi.vehcode) select new Investments { Company = string.Empty, FullName = oi.issue, Admin = STR_PRS1, Fund = oi.issue, FundCode = oi.vehcode, Source = STR_BONDS, IsPortfolio = false, IsActive = oi.closed, Strategy = STR_OTH, SubStrategy = STR_OTH, AltStrategy = STR_OTH, AltSubStrategy = STR_OTH, Region = STR_OTH, AltRegion = STR_OTH, UseAlternate = false, ClassesAllowed = STR_GENERIC }; return (fundInv.Concat(stocksInv).Concat(bondsInv)).ToList(); } The code above results in a complex select statement where each "table" above has different column count. (see SQL below) I've been trying a few things but no change yet. Ideas are welcome. 
SELECT [t6].[company] AS [Company], [t6].[fullname] AS [FullName], [t6].[admin] AS [Admin], [t6].[fund] AS [Fund], [t6].[fundcode] AS [FundCode], [t6].[value] AS [Source], [t6].[value2] AS [IsPortfolio], [t6].[active] AS [IsActive], [t6].[strategy] AS [Strategy], [t6].[substrategy] AS [SubStrategy], [t6].[altstrategy] AS [AltStrategy], [t6].[altsubstrategy] AS [AltSubStrategy], [t6].[region] AS [Region], [t6].[altregion] AS [AltRegion], [t6].[usealt] AS [UseAlternate], [t6].[classallowed] AS [ClassesAllowed] FROM ( SELECT [t3].[company], [t3].[fullname], [t3].[admin], [t3].[fund], [t3].[fundcode], [t3].[value], [t3].[value2], [t3].[active], [t3].[strategy], [t3].[substrategy], [t3].[altstrategy], [t3].[altsubstrategy], [t3].[region], [t3].[altregion], [t3].[usealt], [t3].[classallowed] FROM ( SELECT [t0].[company], [t0].[fullname], [t0].[admin], [t0].[fund], [t0].[fundcode], @p0 AS [value], [t0].[active], [t0].[strategy], [t0].[substrategy], [t0].[altstrategy], [t0].[altsubstrategy], [t0].[region], [t0].[altregion], [t0].[usealt], [t0].[classallowed] FROM [zInvest].[funds] AS [t0] UNION ALL SELECT [t1].[company], [t1].[issuer], @p6 AS [value], [t1].[shortname], [t1].[vehcode], @p7 AS [value2], @p8 AS [value3], (CASE WHEN [t1].[inactive] IS NULL THEN 1 ELSE 0 END) AS [value5], [t1].[style], [t1].[substyle], [t1].[altstyle], [t1].[altsubsty], [t1].[geography], [t1].[altgeo], [t1].[usealt], @p10 AS [value6] FROM [zBank].[stocks] AS [t1] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t2] WHERE [t2].[company] = [t1].[vehcode] ))) AND ([t1].[vehcode] <> @p2) AND (SUBSTRING([t1].[vehcode], @p3 + 1, @p4) <> @p5) ) AS [t3] UNION ALL SELECT @p11 AS [value], [t4].[issue], @p12 AS [value2], [t4].[vehcode], @p13 AS [value3], @p14 AS [value4], [t4].[closed], @p16 AS [value6], @p17 AS [value7] FROM [zMut].[bonds] AS [t4] WHERE NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t5] WHERE [t5].[company] = [t4].[vehcode] )) ) AS [t6]
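    One pragmatic workaround, sketched under the assumption that the three result sets are small enough to materialize: execute each projection as its own query and concatenate in memory, so no UNION ALL with mismatched column lists is ever generated. This would replace the last line inside the using block above:

        // Hedged sketch, not necessarily the final fix: ToList() forces each query to run
        // separately; fundInv, stocksInv and bondsInv are the three queries defined above.
        List<Investments> fundList  = fundInv.ToList();
        List<Investments> stockList = stocksInv.ToList();
        List<Investments> bondList  = bondsInv.ToList();
        return fundList.Concat(stockList).Concat(bondList).ToList();

    The trade-off is three round trips instead of one; the "not in fundInv" subqueries for stocks and bonds still execute server-side, so the filtering semantics are preserved.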

    Read the article

  • NHibernate many-to-many with composite element

    - by E1
    Hello everyone. I have encountered the following problem. My application uses the entities Owner and Area, bound as many-to-many.

        public class Area : DomainObject<int>
        {
            private ISet<OwnersPeriod> _owners = new HashedSet<OwnersPeriod>();
            public ICollection<OwnersPeriod> Owners
            {
                get { return _owners; }
                set
                {
                    Check.Require(value != null);
                    _owners = new HashedSet<OwnersPeriod>(value);
                }
            }
        }

    The link table Owner2Area has the following fields:

        CREATE TABLE [dbo].[Owner2Area](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [IDOwner] [int] NOT NULL,
            [IDArea] [int] NOT NULL,
            [FirstDate] [datetime] NOT NULL,
            [LastDate] [datetime] NULL,
            CONSTRAINT [PK_Owner2Area] PRIMARY KEY CLUSTERED)

    It therefore corresponds to the class OwnersPeriod:

        public class OwnersPeriod
        {
            private Owner _member;
            private Area _area;
            public Owner Owner { get {...} set {...} }
            public Area Area { get { ... } set { ... } }
            public DateTime FirstDate { get; set; }
            public DateTime? LastDate { get; set; }
        }

    I wrote these mappings:

        <class lazy="false" name="Core.Domain.Area, Core" table="Areas">
          ...
          <set name="Owners" table="Owner2Area" inverse="true" lazy="true">
            <key column="IDArea"/>
            <composite-element class="Core.Domain.OwnersPeriod, Core">
              <parent name="Area" />
              <property name="FirstDate" type="datetime"/>
              <property name="LastDate" type="datetime"/>
              <many-to-one name="Owner" class="Core.Domain.Owner, Core" column="IDOwner"/>
            </composite-element>
          </set>
        </class>

    For each area the existing data are successfully loaded into Owners, but when I add a new record to Owner2Area through CreateSQLQuery, the data are not updated for the area instance. If I re-open the form and fetch all areas again, the added link is loaded into the collection. How can I force the collection to load records added to the many-to-many relation this way? NHibernate v2.0.1, DB MSSQL 2005.
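    A hedged sketch of one way to make a freshly inserted link row visible without re-opening the form: because the insert goes through CreateSQLQuery, the ISession's first-level cache still holds the old collection, so refresh (or evict and re-load) the Area afterwards. The area and areaId variable names are assumptions:

        // After the raw-SQL insert into Owner2Area, either re-read the entity state...
        session.Refresh(area);
        // ...or evict the instance and load it again so the Owners collection is rebuilt:
        session.Evict(area);
        area = session.Get<Area>(areaId);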

    Read the article

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, Bought TP6 called Borland the next day and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago for two reasons; Spent all day writing ASP & SQL for someone else. PC Techniques magazine went away. I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi or did when it was Borland (before Borland bought DBase and all the other crap), I'd like to salvage as much of my D5E code as possible but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS Sql and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures and is so easy to deploy that even the Geico gecko could deploy it? 10/25/2009 18:53 PM EST Re-Opened After Reading Install Docs for Delphi 2010 I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package. I started with the install.htm inside the zip package. install.htm wisely tells you to see the following two articles: Installation Notes: http://edn.embarcadero.com/article/39754 Release Notes: http://edn.embarcadero.com/article/39758 the release notes state the following... MSSQL driver requires the installation of the SQL Native Client. SQL Native Client 2008 is required for dbxmss.dll. SQL Native Client 2005 is required for dbxmss9.dll I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums. read thread here. After reading this thread and some careful thought I don't think I will be using Microsoft SQL Express. I can't rely on my customers having the right drivers installed. So, I'm back to looking for a different solution. If I'm selling a $40 product to the general masses I need to have a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

    Read the article

  • How to implement a SIMPLE "You typed ACB, did you mean ABC?"

    - by marcgg
    I know this is not a straight up question, so if you need me to provide more information about the scope of it, let me know. There are a bunch of questions that address almost the same issue (they are linked here), but never the exact same one with the same kind of scope and objective - at least as far as I know. Context: I have a MP3 file with ID3 tags for artist name and song title. I have two tables Artists and Songs The ID3 tags might be slightly off (e.g. Mikaell Jacksonne) I'm using ASP.NET + C# and a MSSQL database I need to synchronize the MP3s with the database. Meaning: The user launches a script The script browses through all the MP3s The script says "Is 'Mikaell Jacksonne' 'Michael Jackson' YES/NO" The user pick and we start over Examples of what the system could find: In the database... SONGS = {"This is a great song title", "This is a song title"} ARTISTS = {"Michael Jackson"} Outputs... "This is a grt song title" did you mean "This is a great song title" ? "This is song title" did you mean "This is a song title" ? "This si a song title" did you mean "This is a song title" ? "This si song a title" did you mean "This is a song title" ? "Jackson, Michael" did you mean "Michael Jackson" ? "JacksonMichael" did you mean "Michael Jackson" ? "Michael Jacksno" did you mean "Michael Jackson" ? etc. I read some documentation from this /how-do-you-implement-a-did-you-mean and this is not exactly what I need since I don't want to check an entire dictionary. I also can't really use a web service since it's depending a lot on what I already have in my database. If possible I'd also like to avoid dealing with distances and other complicated things. I could use the google api (or something similar) to do this, meaning that the script will try spell checking and test it with the database, but I feel there could be a better solution since my database might end up being really specific with weird songs and artists, making spell checking useless. I could also try something like what has been explained on this post, using Soundex for c#. Using a regular spell checker won't work because I won't be using words but names and 'titles'. So my question is: is there a relatively simple way of doing this, and if so, what is it? Any kind of help would be appreciated. Thanks!
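    Since the candidate list is just the site's own Artists and Songs tables, one simple option (a sketch, not necessarily the best answer) is plain edit distance: load the known names once and suggest whichever is closest to the ID3 tag.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class DidYouMean
        {
            // Classic Levenshtein edit distance, case-insensitive; fine for short artist/title strings.
            static int Distance(string a, string b)
            {
                var d = new int[a.Length + 1, b.Length + 1];
                for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
                for (int j = 0; j <= b.Length; j++) d[0, j] = j;
                for (int i = 1; i <= a.Length; i++)
                    for (int j = 1; j <= b.Length; j++)
                        d[i, j] = Math.Min(
                            Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                            d[i - 1, j - 1] + (char.ToLowerInvariant(a[i - 1]) == char.ToLowerInvariant(b[j - 1]) ? 0 : 1));
                return d[a.Length, b.Length];
            }

            // e.g. Suggest("Mikaell Jacksonne", artistNames) -> "Michael Jackson"
            public static string Suggest(string tag, IEnumerable<string> knownNames)
            {
                return knownNames.OrderBy(name => Distance(tag, name)).First();
            }
        }

    Reordered forms like "Jackson, Michael" would still need token-level handling (for example, comparing sorted word sets) on top of this, and Soundex could be layered in for the phonetic misspellings.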

    Read the article

  • Asp.net hosting equivalent of Dreamhost (pricing, features and support)

    - by Cherian
    Disclaimer: I have browsed http://stackoverflow.com/questions/tagged/asp.net+hosting and didn’t find anything quite similar in value to Dreamhost. One of the biggest impediments IMHO for developing web applications on asp.net is the cost of deployment. I am not talking about building sites like Stackoverflow.com or plentyoffish.com. This is about sites that are bigger than brochureware and smaller than ones that require dedicated servers. Let me give you an example. xmec.org is an asp.net site I maintain for my college alumni. On an average it’s slated to hit around 1000-1100 views per day. At present it’s hosted on godaddy. The service is so damn pathetic; I am using it only because of the lack of options. The site doesn’t scale (no, it’s not the code) and the web control panels are extremely slow. The money I pay doesn’t justify the service or the performance. Every deployment push is a visit to the infuriating web control panel to set the permissions and the root directories. Had I developed it in python, this would have been deployed on Dreamhost.com with $10/year hosting fees (they have offers running all throughout) 50 GB space 5 MySQL Databases Shell / FTP Users POP / SMTP Access Unlimited Domains hosting Unlimited Sub domains hosting Unlimited Domains Forwarded/Mirrored Custom DNS (These are the only ones I could think of. More at the feature page) With a dream host shell, I even have a svn checked-out version of wordpress for my blog. Now, that’s control! To my question: Is there any asp.net (preferably .net 3.5. Dreamhost keeps on updating versions every fortnight) hosting company providing remotely similar feature-sets and pricing like Dreamhost. My requirements are: Less than $15-25/ year Typical WISP minus PHP .net 3.5 SP1 Full Trust mode(I can live with medium trust, if not for the IL emitting libraries) Isolated Application Pool 5 – 10 MySQL db’s Unlimited domain hosting MsSql 2005 or 2008 FTP support At Least 5 GB space SMTP IIS 7 Log files Accessibility Moderately good control panel Scripting, shell support Nominal bandwidth Another case in point: Recently I’ve been contemplating building a tool-website to find duplicates and weird characters in my Google contacts and fix them. With asp.net, the best part is that I can do this with LINQ to XML in less than 100 lines of code. What’s bad is the hosting part. I don’t think I stand to make any money out of this and therefore can’t afford to host it on GoGrid or DiscountAsp.net. Godaddy is not an option either. If I do this in python, I can push to this my existing $10 Dreamhost account with another domain pointed. No extra cost. Svn exported with scripts (capability) to change the connection string! Looking at the problem holistically, I think I represent a large breed of programmers playing it cheap and experimenting different things on a regular basis, one of which will become the next twitter/digg.

    Read the article

  • Is it worth moving from stored procedures to LINQ?

    - by Josef
    I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half LINQ. From what I've read there is still some debate on this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If, a few years back, Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now by introducing it into the application business layer.

    I have used MS for many years, but one gripe I have with them is that they change their direction a lot. By a lot I mean new releases of .NET, Silverlight etc. are more than 30% different from the previous version, so by the time you become productive a new release is on the way. As things stand now a web developer using .NET would need to know either VB.NET or C#, plus XML, XAML, JavaScript, HTML, SQL and now LINQ. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases and it will become an ever-changing landscape. I believe that LINQ to SQL has already been deprecated. At least with SQL we are dealing with a more stable and standardized language.

    Are we looking at a programming revolution or a marketing campaign? As far as I know other languages like COBOL have stayed the same for years: a COBOL programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .NET web app? Would these large changes have been needed if the underlying original foundation had been sound? I worry about following MS's shaky roadmap with its dead ends and double-backs. Are there any architects out there who feel the same? Regards, Josef

    Read the article

  • XML Return from an Oracle Stored Procedure

    - by Tequila Jinx
    Unfortunately most of my DB experience has been with MSSQL which tends to hold your hand a lot more than Oracle. What I'm trying to do is fairly trivial in tSQL, however, pl/sql is giving me a headache. I have the following procedure: CREATE OR REPLACE PROCEDURE USPX_GetUserbyID (USERID USERS.USERID%TYPE, USERRECORD OUT XMLTYPE) AS BEGIN SELECT XMLELEMENT("user" , XMLATTRIBUTES(u.USERID AS "userid", u.companyid as "companyid", u.usertype as "usertype", u.status as "status", u.personid as "personid") , XMLFOREST( p.FIRSTNAME AS "firstname" , p.LASTNAME AS "lastname" , p.EMAIL AS "email" , p.PHONE AS "phone" , p.PHONEEXTENSION AS "extension") , XMLELEMENT("roles", (SELECT XMLAGG(XMLELEMENT("role", r.ROLETYPE)) FROM USER_ROLES r WHERE r.USERID = USERID AND r.ISACTIVE = 1 ) ) , XMLELEMENT("watches", (SELECT XMLAGG( XMLELEMENT("watch", XMLATTRIBUTES(w.WATCHID AS "id", w.TICKETID AS "ticket") ) ) FROM USER_WATCHES w WHERE w.USERID = USERID AND w.ISACTIVE = 1 ) ) ) AS "RESULT" INTO USERRECORD FROM USERS u LEFT JOIN PEOPLE p ON p.PERSONID = u.PERSONID WHERE u.USERID = USERID; END USPX_GetUserbyID; When executed, it should return an XML document with the following structure: <user userid="" companyid="" usertype="" status="" personid=""> <firstname /> <lastname /> <email /> <phone /> <extension /> <roles> <role /> </roles> <watches> <watch id="" ticket="" /> </watches> </user> When I execute the query itself, replacing the USERID parameter with a string and removing the "into" clause, the query runs fine and returns the expected structure. However, when the procedure attempts to execute the query, passing the results of the XMLELEMENT function into the USERRECORD output parameter, I get the following exception: Error report: ORA-01422: exact fetch returns more than requested number of rows ORA-06512: at "USPX_GETUSERBYID", line 4 ORA-06512: at line 3 01422. 00000 - "exact fetch returns more than requested number of rows" *Cause: The number specified in exact fetch is less than the rows returned. *Action: Rewrite the query or change number of rows requested I'm baffled trying to nail this down, and unfortunately my google-fu hasn't helped. I've found plenty of Oracle SQL|XML examples, but none that deal with XML returns from a procedure. Note: I know that an alternate method of retrieving XML using DBMS methods exists, however, it's my understanding that that functionality is deprecated in favor of SQL|XML.

    Read the article

  • JDBC CommunicationsException with MySQL Database

    - by Dominik Siebel
    I'm having a little trouble with my MySQL- Connection- Pooling. This is the case: Different jobs are scheduled via Quartz. All jobs connect to different databases which works fine the whole day while the nightly scheduled jobs fail with a CommunicationsException... Quartz-Jobs: Job1 runs 0 0 6,10,14,18 * * ? Job2 runs 0 30 10,18 * * ? Job3 runs 0 0 5 * * ? As you can see the last job runs at 18 taking about 1 hour to run. The first job at 5am is the one that fails. I already tried all kinds of parameter-combinations in my resource config this is the one I am running right now: <!-- Database 1 (MySQL) --> <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver" maxActive="100" maxIdle="30" maxWait="10000" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" type="javax.sql.DataSource" name="jdbc/appDbProd" username="****" password="****" url="jdbc:mysql://127.0.0.1:3306/appDbProd?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8" testWhileIdle="true" testOnBorrow="true" testOnReturn="true" validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" /> <!-- Database 2 (MySQL) --> <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver" maxActive="100" maxIdle="30" maxWait="10000" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" type="javax.sql.DataSource" name="jdbc/prodDbCopy" username="****" password="****" url="jdbc:mysql://127.0.0.1:3306/prodDbCopy?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8" testWhileIdle="true" testOnBorrow="true" testOnReturn="true" validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" /> <!-- Database 3 (MSSQL)--> <Resource auth="Container" driverClassName="net.sourceforge.jtds.jdbc.Driver" maxActive="30" maxIdle="30" maxWait="100" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" name="jdbc/catalogDb" username="****" password="****" type="javax.sql.DataSource" url="jdbc:jtds:sqlserver://127.0.0.1:1433;databaseName=catalog;useNdTLMv2=false" testWhileIdle="true" testOnBorrow="true" testOnReturn="true" validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" /> For obvious reasons I changed IPs, Usernames and Passwords but they can be assumed to be correct, seeing that the application runs successfully the whole day. The most annoying thing is: The first job that runs first queries Database2 successfully but fails to query Database1 for some reason (CommunicationsException): Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 39,376,539 milliseconds ago. The last packet sent successfully to the server was 39,376,539 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. Any ideas? Thanks!

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MSSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected) 1 timestamp, 1 measurement and 8 vector values, which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
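    For comparison, a hedged sketch of the denormalized shape being considered: the eight vector values and the timestamp become dedicated columns on the measurement row, so one flush writes one row instead of ten. The property names here are invented for illustration:

        public class FlatMeasurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; set; }
            public virtual DateTime TakenAt { get; set; }     // replaces the separate Timestamp row
            public virtual int QuantityType { get; set; }
            // The 8 vector values, previously 8 rows in the vector-value table:
            public virtual double V1 { get; set; }
            public virtual double V2 { get; set; }
            public virtual double V3 { get; set; }
            public virtual double V4 { get; set; }
            public virtual double V5 { get; set; }
            public virtual double V6 { get; set; }
            public virtual double V7 { get; set; }
            public virtual double V8 { get; set; }
        }

    The obvious cost is that the vector count is frozen at eight; if it ever changes, the schema and the mapping change with it.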

    Read the article

  • SQL Invalid Object Name 'AddressType'

    - by salvationishere
    I am getting the above error in my VS 2008 C# method when I try to invoke the SQL getColumnNames stored procedure from VS. This SP accepts one input parameter, the table name, and works successfully from SSMS. Currently I am selecting the AdventureWorks AddressType table for it to pull the column names from this table. I can see teh AdventureWorks table available in VS from my Server Explorer / Data Connection. And I see both the AddressType table and getColumnNames SP showing in Server Explorer. But I am still getting this error listed above. Here is the C# code snippet I use to execute this: public static DataTable DisplayTableColumns(string tt) { SqlDataReader dr = null; string TableName = tt; string connString = "Data Source=.;AttachDbFilename=\"C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\AdventureWorks_Data.mdf\";Initial Catalog=AdventureWorks;Integrated Security=True;Connect Timeout=30;User Instance=False"; string errorMsg; SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = conn2.CreateCommand(); try { cmd.CommandText = "dbo.getColumnNames"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; SqlParameter parm = new SqlParameter("@TableName", SqlDbType.VarChar); parm.Value = TableName; parm.Direction = ParameterDirection.Input; cmd.Parameters.Add(parm); conn2.Open(); dr = cmd.ExecuteReader(); } catch (Exception ex) { errorMsg = ex.Message; } And when I examine the errorMsg it says the following: " at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)\r\n at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)\r\n at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)\r\n at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)\r\n at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()\r\n at System.Data.SqlClient.SqlDataReader.get_MetaData()\r\n at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)\r\n at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)\r\n at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)\r\n at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)\r\n at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)\r\n at System.Data.SqlClient.SqlCommand.ExecuteReader()\r\n at ADONET_namespace.ADONET_methods.DisplayTableColumns(String tt) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\ADONET methods.cs:line 35" Where line 35 is dr = cmd.ExecuteReader();
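    One low-tech debugging step (a sketch, not a guaranteed fix): list the tables the application's connection can actually see, because in AdventureWorks AddressType lives in the Person schema, and an AttachDbFilename/user-instance connection may not be pointing at the database you expect. connString below is assumed to be the same string the method uses:

        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES ORDER BY 1, 2", conn))
        {
            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    // If Person.AddressType is missing from this list, the stored procedure
                    // (or the attached MDF) is not looking at the database you think it is.
                    Console.WriteLine("{0}.{1}", rdr.GetString(0), rdr.GetString(1));
                }
            }
        }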

    Read the article

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and a custom type

    - by user213256
    I am creating an mssql database table, "Orders", that will contain a varchar(50) field, "Value" containing a string that represents a slightly complex data type, "OrderValue". I am using a linqtosql datacontext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the linqtosql classes like this: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // use implicit converstion to turn the string representation of the order // value into the complex data type. OrderValue value = order.Value; // adjust one of the fields in the complex data type value.Shipping += 10; // use implicit conversion to store the string representation of the complex // data type back in the linqtosql order object order.Value = value; // save changes db.SubmitChanges(); However, I would really like to be able to tell the linqtosql class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // The Value field is already typed as the "OrderValue" type rather than as string. // When a string value was read from the database table, it was implicity converted // to "OrderValue" type. order.Value.Shipping += 10; // save changes db.SubmitChanges(); In order to achieve this desired goal, I looked at the datacontext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL" The project built without errors. However, when reading from the database table, I was presented with the following error message: Could not convert from type 'System.String' to type 'MyApplication.OrderValue'. at System.Data.Linq.DBConvert.ChangeType(Object value, Type type) at Read_Order(ObjectMaterializer1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader2.MoveNext() at System.Linq.Buffer1..ctor(IEnumerable1 source) at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source) at Example.OrdersProvider.GetOrders() at ... etc From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben
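    A common fallback, sketched here with the question's own Order/OrderValue names: leave the designer-generated Value column typed as string and expose an unmapped convenience property in a partial class, so the implicit operators still do the work without DBConvert ever seeing the custom type.

        public partial class Order
        {
            // Not mapped by LINQ to SQL; it simply wraps the mapped string column.
            public OrderValue ValueObject
            {
                get { return this.Value; }     // implicit string -> OrderValue
                set { this.Value = value; }    // implicit OrderValue -> string
            }
        }

    Reads still hand back a detached OrderValue built from the string, so, as in the original code, a modified value has to be assigned back through the property before SubmitChanges.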

    Read the article

  • JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g

    - by John-Brown.Evans
    This example shows the steps to create a simple JMS queue in WebLogic Server 11g for testing purposes. For example, to use with the two sample programs QueueSend.java and QueueReceive.java which will be shown in later examples. Additional, detailed information on JMS can be found in the following Oracle documentation: Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6), Part Number E13738-06, http://docs.oracle.com/cd/E23943_01/web.1111/e13738/toc.htm

    1. Introduction and Definitions

    A JMS queue in WebLogic Server is associated with a number of additional resources:

    JMS Server - A JMS server acts as a management container for resources within JMS modules. Some of its responsibilities include the maintenance of persistence and state of messages and subscribers. A JMS server is required in order to create a JMS module.

    JMS Module - A JMS module is a definition which contains JMS resources such as queues and topics. A JMS module is required in order to create a JMS queue.

    Subdeployment - JMS modules are targeted to one or more WLS instances or a cluster. Resources within a JMS module, such as queues and topics, are also targeted to a JMS server or WLS server instances. A subdeployment is a grouping of targets. It is also known as advanced targeting.

    Connection Factory - A connection factory is a resource that enables JMS clients to create connections to JMS destinations.

    JMS Queue - A JMS queue (as opposed to a JMS topic) is a point-to-point destination type. A message is written to a specific queue or received from a specific queue.
The objects used in this example are: Object Name Type JNDI Name TestJMSServer JMS Server TestJMSModule JMS Module TestSubDeployment Subdeployment TestConnectionFactory Connection Factory jms/TestConnectionFactory TestJMSQueue JMS Queue jms/TestJMSQueue 2. Configuration Steps The following steps are done in the WebLogic Server Console, beginning with the left-hand navigation menu. 2.1 Create a JMS Server Services > Messaging > JMS Servers Select New Name: TestJMSServer Persistent Store: (none) Target: soa_server1  (or choose an available server) Finish The JMS server should now be visible in the list with Health OK. 2.2 Create a JMS Module Services > Messaging > JMS Modules Select New Name: TestJMSModule Leave the other options empty Targets: soa_server1  (or choose the same one as the JMS server)Press Next Leave “Would you like to add resources to this JMS system module” unchecked and  press Finish . 2.3 Create a SubDeployment A subdeployment is not necessary for the JMS queue to work, but it allows you to easily target subcomponents of the JMS module to a single target or group of targets. We will use the subdeployment in this example to target the following connection factory and JMS queue to the JMS server we created earlier. Services > Messaging > JMS Modules Select TestJMSModule Select the Subdeployments  tab and New Subdeployment Name: TestSubdeployment Press Next Here you can select the target(s) for the subdeployment. You can choose either Servers (i.e. WebLogic managed servers, such as the soa_server1) or JMS Servers such as the JMS Server created earlier. As the purpose of our subdeployment in this example is to target a specific JMS server, we will choose the JMS Server option. Select the TestJMSServer created earlier Press Finish 2.4  Create a Connection Factory Services > Messaging > JMS Modules Select TestJMSModule  and press New Select Connection Factory  and Next Name: TestConnectionFactory JNDI Name: jms/TestConnectionFactory Leave the other values at default On the Targets page, select the Advanced Targeting  button and select TestSubdeployment Press Finish The connection factory should be listed on the following page with TestSubdeployment and TestJMSServer as the target. 2.5 Create a JMS Queue Services > Messaging > JMS Modules Select TestJMSModule  and press New Select Queue and Next Name: TestJMSQueueJNDI Name: jms/TestJMSQueueTemplate: NonePress Next Subdeployments: TestSubdeployment Finish The TestJMSQueue should be listed on the following page with TestSubdeployment and TestJMSServer. Confirm the resources for the TestJMSModule. Using the Domain Structure tree, navigate to soa_domain > Services > Messaging > JMS Modules then select TestJMSModule You should see the following resources The JMS queue is now complete and can be accessed using the JNDI names jms/TestConnectionFactory andjms/TestJMSQueue. In the following blog post in this series, I will show you how to write a message to this queue, using the WebLogic sample Java program QueueSend.java.

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security related enhancements requested by customers and as a result of internal reviews that we have introduced. Here is a summary of some of the security enchancements we have added in this release: Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has beem made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected quicker than before. Business Level security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now required to further specify security on the Business Service definition itself. This will allow granular security and allow the same application service to be exposed as different Business Services with their own security. This is particularly useful when you base a Business Service on a query zone. User Propogation - As with other client server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunently, this means that tracability at the database level is that much harder. In Oracle Utilities Application Framework V4 the end userid is now propogated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This not only means that the common database userid is still used but the end user is indentifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers. Enhanced Security Definitions - Security Administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information. Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilties Application Framework V2.2 customers. Refer to My Oracle Support KBid 970785.1 - Oracle Identity Manager Integration Overview. Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework the Business Services and Service Scripts could be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones on the product (including base package ones). Time Zone Support - In some of the Oracle Utilities Application Framework based products, the timezone of the end user is a factor in the processing. 
The user object has been extended to allow the recording of time zone information for use in product functionality. JAAS Suport - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct on how security operates externally. JMX Based Cache Management - In the last bullet point, I mentioned extra security applied to cache management from the browser. Alternatively a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR120 compliant JMX console or JMX browser. I will be writing another more detailed blog entry on the JMX enhancements as it is quite a change and an exciting direction for the product line. Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. At some sites they wanted the ability for non-DBA's to execute the utilities in a controlled fashion. The framework now allows feature configuration to allow delegation for patch execution. User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue has arisen when the contractor left the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractors userid but the userid could NOT be removed from the authorization model becuase of audit requirements (if any user in the product updates financials or key data their userid is recorded for audit purposes). It is now possible to effectively diable the user from the security model to prevent any use of the useridwhilst retaining audit information. These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product is contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented a session, CON3870 -- “What’s New in JSF: A Complete Tour of JSF 2.2,” in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that, “JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent.” He pointed out that since JSF was introduced at the 2001 JavaOne Keynote, it has had a long and successful run and has found a home in applications where the UI logic resides entirely on the server where the model and UI logic is. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers, something that Project Avatar is focused on by letting developers author their applications so the UI logic is running on the client and communicating to the back end via RESTful web services. “Most importantly,” remarked Burns, “JSF 2.2 offers a really good migration path because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity – per page or per collection of pages. It all depends on what you want to do.” His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF and encouraged attendees to check out Roger Kitain’s session: CON5133 “Techniques for Responsive Real-Time Web UIs.” Burns explained that JSF has endured because, “We still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use.” It is used on every continent – the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This: Separates Component Semantics from Rendering, Allows components to “own” their little patch of the UI -- encode/decode, And offers a well-defined lifecycle: Inversion of Control. Burns reminded attendees that JSR-344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:    * Public java.net projects spec http://jsf-spec.java.net/ impl http://jsf.java.net/ Open Source: GPL+Classpath Exception    * Mailing Lists [email protected]                                Public readable archive, JSPA signed member read/write [email protected]                                     Public readable archive, any java.net member read/write                         All mail sent to jsr344-experts is sent to users. * Issue Tracker spec http://jsf-spec.java.net/issues/ impl http://jsf.java.net/issues/ JSF 2.2, which is JSR 344, has a Public Review Draft planned by December 2012 with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input. 
Six Big Ticket Features of JSF 2.2 Burns summarized the six big ticket features of JSF 2.2:* HTML5 Friendly Markup Support Pass through attributes and elements * Faces Flows* Cross Site Request Forgery Protection* Loading Facelets via ResourceHandler* File Upload Component* Multi-Templating He explained that he called it “HTML 5 friendly” because there is really nothing HTML 5 specific about it -- it could be 4. But it enables developers to use new elements that are present in HTML5 without having a JSF component library that is written to take advantage of those specifically. It gives the page author the ability to use plain HTML5 to write their page, but to still take advantage of the server-side available in JSF. He presented a demo showing JSF 2.2’s ability to leverage the expressiveness of HTML5. Burns then explained the significance of face flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2.’s cross-site request forgery protection (CSRF) and offered details about how it protects applications against attack. Then he talked about JSF 2.2’s File Upload Component and explained that the final specification will have Ajax and non-Ajax support. The current milestone has non-Ajax support implemented. He then went on to explain its capacity to add facelets through ResourceHandler. Previously, JSF 2.0 added Facelets and ResourceHandler as disparate units; now in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low maintenance way of staying in touch with JSF developments go to JSF’s Twitter page where every month or so, important updates are offered.

    Read the article

  • XNA Notes 011

    - by George Clingerman
    Even with a lot of the XNA community working on Dream Build Play entries ( I swear I’m going to finish mine this year!) people are still finding time to do side projects and be amazingly active in the XNA and XBLIG community. With my one eye on my code and one eye on the community, here’s what I noticed these over achievers doing this past week! Time Critical XNA News: Xbox LIVE Indie Games sales data will be delayed March 17-20th due to some schedule maintenance http://create.msdn.com/en-us/news/indie_games_data_delay_march2011 GameMarx is releasing a series of videos to help raise donations for victims of the earthquakes and tsunami in Japan. Help out if you can! http://www.gamemarx.com/video/special/29/help-japan-sushido.aspx XNA MVPs: Catalin Zima shares his thoughts on the MVP summit and my book! http://www.catalinzima.com/2011/03/mvp-summit-2011/ Glenn Wilson (@mykre) helps the XNA team announce some new educational content that you don’t want to miss if you’re porting your app or game to Windows Phone 7 http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/653/Porting-your-App-or-Game-to-Windows-Phone-7.aspx and Windows Phone 7 from scratch http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/654/Windows-Phone-from-Scratch.aspx and shares a link to some free architectural models and textures http://twitter.com/#!/Mykre/status/46410160784158720 George (that’s me!) shares his MVP Summit 2011 summary and XBLIG thoughts http://geekswithblogs.net/clingermangw/archive/2011/03/15/144366.aspx XNA Developers: @SmallCaveGames shares a Code of Ethics for Xbox LIVE Indie Game Developers http://smallcavegames.blogspot.com/2011/03/unofficial-xblig-developers-code-of.html Derek S adds more Xbox LIVE Indie Game studios to his master list of XBLIG links http://twitter.com/#!/Mr_Deeke/status/46140996056125440 http://xbl-indieverse.blogspot.com/p/xblig-links.html Making games and want to help kids? Then share your story with GameFace: America! http://gameitupinitiative.com/about-the-initiative/programs/gameface-america/ Xbox LIVE Indie Games (XBLIG): XonaGames shares some video footage of their booth from GDC 2011 Video 1: http://youtu.be/lxIV9nk3Gq4 Video 2: http://youtu.be/GgfrjqkxR_o Video 3: http://youtu.be/yVcpXrTX7SQ Joystiq on Mommy’s Best Games Serious Sam Double D http://www.joystiq.com/2011/03/16/the-most-important-thing-about-serious-sam-double-d/ And The Escapist recommends that gamers start learning to avoid cleavage now http://www.escapistmagazine.com/news/view/108543-Boobie-Bomber-Makes-First-Appearance-in-Serious-Sam-Double-D Magiko Gaming started a blog on the XBLIG dashboard daily Top 10 games in the US. Good way to go back in time and look at the history of which games were in the the Top 10. http://dailytop10indiegames.wordpress.com/ Where are they going now? XBLIG developers at a crossroads.. http://www.gamesetwatch.com/2011/03/where_are_they_going_now_xblig.php http://www.gamasutra.com/view/news/33527/InDepth_Where_Are_They_Going_Now_XBLIG_Developers_At_A_Crossroads_.php BinaryTweed’s Clover: A Curious Tail is Xbox LIVE’s Deal of the Week! http://www.armlessoctopus.com/2011/03/15/what-luck-clover-a-curious-tale-is-half-price-this-week/ Looking for an Xbox LIVE Indie Game to buy? Writings of Mass Deduction has over 125 suggestions at this point! 
http://writingsofmassdeduction.com/ SkaStudios shares Vampire Smile Achievements AND their PAX East 2011 Both Setup video http://www.ska-studios.com/2011/03/14/vampire-smile-achievement/ http://www.ska-studios.com/2011/03/15/pax-booth-setup-time-lapse/ MasterBlud and VVGTV starts a new community for XBLIG developers and gamers to join http://vvgtv.forumotion.com/ Raymond Matthews (@DrakstarMatryx) covers Mommy’s Best Games getting Serious http://www.darkstarmatryx.com/?p=286 XNA Development: Dave Henry (@mort8088) posts the 4th tutorial in his series XNA 4.0 SpriteBatch extended http://mort8088.com/2011/03/11/xna-4-0-tutorial-4-spritebatch-extended/ Tutorial 5 - Creating a manual blank texture http://mort8088.com/2011/03/13/xna-4-tutorial-5-manual-blank-texture/ XNA 4.0 Tutorial 6 - Spritesheet Object http://mort8088.com/2011/03/18/xna-4-0-tutorial-6-spritesheet-object/ Jason Mitchell shares a tutorial on setting the alpha value for spritebatch in XNA 4.0 http://www.jason-mitchell.com/index.php/2011/03/13/setting-alpha-value-for-spritebatch-draw-in-xna-4/ XNA for Silverlight Developers: Part 7 - Collision Detection http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-7-Collision-detection.aspx Markus Ewald (@Cygon4) shares the full Ninject 2.0 binding for XNA and Sunburn http://twitter.com/#!/Cygon4/status/48330203826622464 Michael B. McLaughlin shares an AccelerometerInput XNA GameComponent he created (which I’m probably going to snag for a game I’m working on...) http://geekswithblogs.net/mikebmcl/archive/2011/03/17/accelerometerinput-xna-gamecomponent.aspx Extra Credit tackles the building of a good tutorial. Must watch for all Indie game devs (thanks for pointing it out Evan Johnson!) http://twitter.com/#!/johnsonevan/status/48452115680604160 http://www.escapistmagazine.com/videos/view/extra-credits/2921-Tutorials-101 ExEn is fully funded at this point so definitely something for XBLIG developers to keep an eye on as they consider releasing their games on other platforms http://rockethub.com/projects/752-exen-xna-for-iphone-android-and-silverlight Channel 9 and Greg Duncan post Mixing the Game State Management and Platformer XNA Recipes http://channel9.msdn.com/coding4fun/blog/Mixing-the-Game-State-Management-and-Platformer-XNA-Recipes Sgt. Conker has noticed Mike McLaughlin has been crazy productive and has done a recap of his recent posts http://www.sgtconker.com/2011/03/recap-of-mikebmcls-posts/

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx I have spent some days working with the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little and see what happens deep inside BizTalk, and thus understand the reasons for this behavior. NOTE: in this article, I will focus on this High database size throttling condition. I will try and work on the other conditions in some not too distant future… Test conditions: The singleton orchestration. For the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message. Throttling settings: I have two distinct hosts: one that hosts the receive port (basic FILE port), Ports_ReceiveHost, and one that hosts the orchestration, ProcessingHost. In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value): [Throttling thresholds] Message count in database: 500 (default value: 50000). Evolution of performance counters when submitting messages: Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object): Database size, High database size, Message delivery throttling state, Message publishing throttling state, Message delivery delay (ms), Message publishing delay (ms), Message delivery throttling state duration, and Message publishing throttling state duration. (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.) Database size: It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: During this test I submitted 600 messages, one message every 10ms, to see the evolution of the counters we have previously selected. It might not show very well on this screenshot, but here is what happened: from 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504. At 15:47:50, the high database size alert fired. At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense. This counter measures the size of the database queue that is being filled by the host, not consumed.
    Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page. Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition was reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) began growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing. Digging further: So, what happens to the database queue then? Is it flushed at some point, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one. Digging even further: If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :) A last word: Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding!! So whatever you do with those settings, do a lot of testing and benchmarking!
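    If you want a quick look at the backlog that drives this counter without opening Perfmon, an ad-hoc read-only peek at the MessageBox can help. The sketch below is only that – an inspection query, not something to run routinely (querying BizTalkMsgBoxDb directly is unsupported for anything else) – and it assumes the host names used in this test; the per-host queue tables follow the <HostName>Q naming convention, so adjust the names to match your own hosts:

    -- Ad-hoc look at the backlog behind the "Message count in database" threshold.
    -- Assumes the ProcessingHost host used above; adjust queue table names to your hosts.
    USE BizTalkMsgBoxDb;
    GO
    SELECT COUNT(*) AS SpoolDepth
    FROM dbo.Spool WITH (NOLOCK);                      -- message bodies still referenced
    SELECT COUNT(*) AS ProcessingHostQueueDepth
    FROM dbo.ProcessingHostQ WITH (NOLOCK);            -- work queue of the orchestration host
    SELECT COUNT(*) AS ProcessingHostSuspendedDepth
    FROM dbo.ProcessingHostQ_Suspended WITH (NOLOCK);  -- suspended messages for that host
    GO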

    Read the article

  • Juniper Strategy, LLC is hiring SharePoint Developers&hellip;

    - by Mark Rackley
    Isn’t everybody these days? It seems as though there are definitely more jobs than qualified devs these days, but yes, we are looking for a few good devs to help round out our burgeoning SharePoint team. Juniper Strategy is located in the DC area, however we will consider remote devs for the right fit. This is your chance to get in on the ground floor of a bright company that truly “gets it” when it comes to SharePoint, Project Management, and Information Assurance. We need like-minded people who “get it”, enjoy it, and who are looking for more than just a job. We have government and commercial opportunities as well as our own internal product that has a bright future of its own. Our immediate needs are for SharePoint .NET developers, but feel free to submit your resume for us to keep on file as it looks as though we’ll need several people in the coming months. Please email us your resume and salary requirements to [email protected] Below are our official job postings. Thanks for stopping by, we look forward to  hearing from you. Senior SharePoint .NET Developer Senior developer will focus on design and coding of custom, end-to-end business process solutions within the SharePoint framework. Senior developer with the ability to serve as a senior developer/mentor and manage day-to-day development tasks. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. For selected development path, prepare technical specification and build the solution. Assist project manager with defining development task schedule and level-of-effort. Lead technical solution deployment. Job Requirements Minimum of 7 years experience in agile development, with at least 3 years of SharePoint-related development experience (SPS, SharePoint 2007/2010, WSS2-4). Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Experience with using multiple data sources/repositories for database CRUD activities, including relational databases, SAP, Oracle e-Business. Experience with designing and deploying performance-based solutions in SharePoint for business processes that involve a very large number of records. Experience designing dynamic dashboards and mashups with data from multiple sources (internal to SharePoint as well as from external sources). Experience designing custom forms to facilitate user data entry, both with and without leveraging Forms Services. Experience building custom web part solutions. Experience with designing custom solutions for processing underlying business logic requirements including, but not limited to, SQL stored procedures, C#/ASP.Net workflows/event handlers (including timer jobs) to support multi-tiered decision trees and associated computations. Ability to create complex solution packages for deployment (e.g., feature-stapled site definitions). Must have impeccable communication skills, both written and verbal. Seeking a "tinkerer"; proactive with a thirst for knowledge (and a sense of humor). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. Applicants must pass a skills assessment test. MCP/MCTS or comparable certification preferred. 
Salary & Travel Negotiable SharePoint Project Lead Define project task schedule, work breakdown structure and level-of-effort. Serve as principal liaison to the customer to manage deliverables and expectations. Day-to-day project and team management, including preparation and maintenance of project plans, budgets, and status reports. Prepare technical briefings and presentation decks, provide briefs to C-level stakeholders. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. The SharePoint Project Lead will be working with SharePoint architects and system owners to perform requirements/gap analysis and develop the underlying functional specifications. Once we have functional specifications as close to "final" as possible, the Project Lead will be responsible for preparation of the associated technical specification/development blueprint, along with assistance in preparing IV&V/test plan materials with support from other team members. This person will also be responsible for day-to-day management of "developers", but is also expected to engage in development directly as needed.  Job Requirements Minimum 8 years of technology project management across the software development life-cycle, with a minimum of 3 years of project management relating specifically to SharePoint (SPS 2003, SharePoint2007/2010) and/or Project Server. Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Ability to interact and collaborate effectively with team members and stakeholders of different skill sets, personalities and needs. General "development" skill set required is a fundamental understanding of MOSS 2007 Enterprise, SP1/SP2, from the top-level of skinning to the core of the SharePoint object model. Impeccable communication skills, both written and verbal, and a sense of humor are required. The projects will require being at a client site at least 50% of the time in Washington DC (NW DC) and Maryland (near Suitland). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. PMP certification, PgMP preferred. Salary & Travel Negotiable

    Read the article

  • Dynamic Data Connections

    - by Tim Dexter
    I have had a long-running email thread between Dan and David over at Valspar and myself. They have built some impressive connectivity between their in-house apps and BIP using web services. The crux of their problem has been that they have multiple databases that need the same report executed against them. Not such an unusual request, as I have spoken to two customers in the last month with the same situation. Of course, you could create a report against each data connection and just run or call the appropriate report. Not too bad if you have two or three data connections, but more than that and it becomes a maintenance nightmare having to update queries or layouts. Ideally you want to have just a single report definition on the BIP server and to dynamically set the connection to be used at runtime based on the user or system that the user is in. A quick bit of digging and help from Shinji on the development team and I had an answer. Rather embarrassingly, the solution has been around since the Oct 2010 rollup patch last year. Still, I grabbed the latest Jan 2011 patch - check out Note 797057.1 for the latest available patches. Once installed, I used the best web service testing tool I have come across yet - SoapUI. Just point it at the WSDL and you can check out the available services and their parameters and then test them too. The XML packet has a new dynamic data source entry. You can set your own custom JDBC connection or just specify an existing data source name that's defined on the server. <pub:runReport> <pub:reportRequest> <pub:attributeFormat>xml</pub:attributeFormat> <pub:attributeTemplate>0</pub:attributeTemplate> <pub:byPassCache>true</pub:byPassCache> <pub:dynamicDataSource> <pub:JDBCDataSource> <pub:JDBCDriverClass></pub:JDBCDriverClass> <pub:JDBCDriverType></pub:JDBCDriverType> <pub:JDBCPassword></pub:JDBCPassword> <pub:JDBCURL></pub:JDBCURL> <pub:JDBCUserName></pub:JDBCUserName> <pub:dataSourceName>Conn1</pub:dataSourceName> </pub:JDBCDataSource> </pub:dynamicDataSource> <pub:reportAbsolutePath>/Test/Employee Report/Employee Report.xdo</pub:reportAbsolutePath> </pub:reportRequest> <pub:userID>Administrator</pub:userID> <pub:password>Administrator</pub:password> </pub:runReport> So I have Conn1 and Conn2 defined that are connections to different databases. I can just flip the name, make the WS call and get the appropriate dataset in my report. Just as an example, here's my web service call java code. Just a case of bringing in the BIP java libs to my java project. publicReportServiceService = new PublicReportServiceService(); PublicReportService publicReportService = publicReportServiceService.getPublicReportService_v11(); String userID = "Administrator"; String password = "Administrator"; ReportRequest rr = new ReportRequest(); rr.setAttributeFormat("xml"); rr.setAttributeTemplate("1"); rr.setByPassCache(true); rr.setReportAbsolutePath("/Test/Employee Report/Employee Report.xdo"); rr.setReportOutputPath("c:\\temp\\output.xml"); BIPDataSource bipds = new BIPDataSource(); JDBCDataSource jds = new JDBCDataSource(); jds.setDataSourceName("Conn1"); bipds.setJDBCDataSource(jds); rr.setDynamicDataSource(bipds); try { publicReportService.runReport(rr, userID, password); } catch (InvalidParametersException e) { e.printStackTrace(); } catch (AccessDeniedException e) { e.printStackTrace(); } catch (OperationFailedException e) { e.printStackTrace(); } } Note, I'm no java whiz kid or whizzy old bloke, at least not unless I've had a coffee.
    JDeveloper has a nice feature where you point it at the WSDL and it creates everything to support your calling code for you. A couple of things to remember: 1. When you call the service, remember to set the bypass-the-cache option. Forget it and much scratching of your head and taking my name in vain will ensue. 2. My demo actually hit the same database but used two users: one accessed the base tables, the other views with the same name. For far too long I thought the connection swapping was not working. I was getting the same results for both users until I realized I was specifying the schema name for the table/view in my query, e.g. select * from EMP.EMPLOYEES. So remember to have a generic query that will depend entirely on the connection (see the small example below). It's a neat feature if you want to be able to switch connections and only define a single report and call it remotely. Now if you want the connection to be set dynamically based on the user and the report run via the user interface, that's going to be more tricky ... need to think about that one!
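    To make that second gotcha concrete, here is a tiny illustration using the names from the anecdote above (EMP and EMPLOYEES are just those example names): a schema-qualified query pins the data to one owner no matter which connection is used, while the generic form resolves against whatever the connecting user sees, which is what makes the connection swap visible.

    -- Schema-qualified: always reads EMP's table, whichever of Conn1/Conn2 is used.
    SELECT * FROM EMP.EMPLOYEES;
    -- Generic: resolves in the connecting user's own schema (base tables for one user,
    -- same-named views for the other), so switching the data source changes the result.
    SELECT * FROM EMPLOYEES;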

    Read the article

  • SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

    - by pinaldave
    Let us jump right into question-and-answer mode. What is Resource Governor? Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling it on the SQL Server. Why is Resource Governor required? If there are different workloads running on SQL Server and each of the workloads needs different resources, or when workloads are competing for resources with each other and affecting the performance of the whole server, Resource Governor becomes a very important tool. What would be a real-world example of the need for Resource Governor? Here are two simple scenarios where the Resource Governor can be very useful. Scenario 1: A server is running an OLTP workload and various resource-intensive reports at the same time. The ideal situation is where there are two servers which are data-synced with each other: one server runs the OLTP transactions and the second server runs all the resource-intensive reports. However, not everybody has the luxury to set up this kind of environment. When reports and OLTP transactions are running on the same server, limiting the resources available to the reporting workload ensures that the critical OLTP transactions are not throttled. Scenario 2: There are two DBAs in one organization. DBA A runs critical business queries and DBA B does maintenance of the database. At no point should DBA A's work be affected, but at the same time DBA B should be allowed to work as well. The ideal situation is that when DBA B starts working he gets some resources, but he can't get more than the defined resources. Does SQL Server have any default resource governor components? Yes, SQL Server has two resource governor components created by default. 1) Internal – this is used exclusively by the database engine and the user has no control over it. 2) Default – this is used by all the workloads which are not assigned to any other group. What are the major components of the resource governor? Resource pools, workload groups, and classification. In simple words, here is what the resource governor process is: create a resource pool, create a workload group, create a classification function based on the criteria specified, and enable Resource Governor with the classification function. Let me explain the same further with a graphical image. Is it possible to configure resource governor with T-SQL? Yes, here is the code for it with explanation in between. Step 0: Here we are assuming that there are separate login accounts for the reporting workload and the OLTP workload. /*----------------------------------------------- Step 0: (Optional and for Demo Purpose) Create Two User Logins 1) ReportUser, 2) PrimaryUser Use ReportUser login for Reports workload Use PrimaryUser login for OLTP workload -----------------------------------------------*/ Step 1: Creating the Resource Pools. We are creating two resource pools: 1) Report Server and 2) Primary OLTP Server. We are giving only a few resources to the Report Server Pool because, as described in scenario 1, it is the other (OLTP) workload that is mission critical, not the report server.
    ----------------------------------------------- -- Step 1: Create Resource Pool ----------------------------------------------- -- Creating Resource Pool for Report Server CREATE RESOURCE POOL ReportServerPool WITH ( MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30, MIN_MEMORY_PERCENT=0, MAX_MEMORY_PERCENT=30) GO -- Creating Resource Pool for OLTP Primary Server CREATE RESOURCE POOL PrimaryServerPool WITH ( MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100, MIN_MEMORY_PERCENT=50, MAX_MEMORY_PERCENT=100) GO Step 2: Creating Workload Groups. We are creating two workload groups, each mapping to one of the resource pools which we have just created. ----------------------------------------------- -- Step 2: Create Workload Group ----------------------------------------------- -- Creating Workload Group for Report Server CREATE WORKLOAD GROUP ReportServerGroup USING ReportServerPool ; GO -- Creating Workload Group for OLTP Primary Server CREATE WORKLOAD GROUP PrimaryServerGroup USING PrimaryServerPool ; GO Step 3: Creating a user-defined function which routes the workload to the appropriate workload group. In this example we are checking SUSER_NAME() and making the workload group selection decision. We can use other functions such as HOST_NAME(), APP_NAME(), IS_MEMBER() etc. ----------------------------------------------- -- Step 3: Create UDF to Route Workload Group ----------------------------------------------- CREATE FUNCTION dbo.UDFClassifier() RETURNS SYSNAME WITH SCHEMABINDING AS BEGIN DECLARE @WorkloadGroup AS SYSNAME IF(SUSER_NAME() = 'ReportUser') SET @WorkloadGroup = 'ReportServerGroup' ELSE IF (SUSER_NAME() = 'PrimaryUser') SET @WorkloadGroup = 'PrimaryServerGroup' ELSE SET @WorkloadGroup = 'default' RETURN @WorkloadGroup END GO Step 4: In this final step we enable the resource governor with the classifier function created in earlier step 3. ----------------------------------------------- -- Step 4: Enable Resource Governor -- with UDFClassifier ----------------------------------------------- ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.UDFClassifier); GO ALTER RESOURCE GOVERNOR RECONFIGURE GO Step 5: If you are following this demo and want to clean up your example, you should run the following script. Running it will disable your resource governor as well as delete all the objects created so far. ----------------------------------------------- -- Step 5: Clean Up -- Run only if you want to clean up everything ----------------------------------------------- ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL) GO ALTER RESOURCE GOVERNOR DISABLE GO DROP FUNCTION dbo.UDFClassifier GO DROP WORKLOAD GROUP ReportServerGroup GO DROP WORKLOAD GROUP PrimaryServerGroup GO DROP RESOURCE POOL ReportServerPool GO DROP RESOURCE POOL PrimaryServerPool GO ALTER RESOURCE GOVERNOR RECONFIGURE GO I hope this introductory example sheds enough light on the subject of Resource Governor. In future posts we will take this same example and learn a few more details. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Resource Governor
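    As a quick sanity check after enabling the governor, you can ask the DMVs which workload group the classifier actually routed each session to and which limits the underlying pool enforces. This is just a supplementary sketch, not part of the original script, and it assumes the pools and groups created above:

    -- Which group/pool did the classifier put each user session in,
    -- and what are the configured pool limits?
    SELECT s.session_id,
           s.login_name,
           wg.name AS workload_group,
           rp.name AS resource_pool,
           rp.max_cpu_percent,
           rp.max_memory_percent
    FROM sys.dm_exec_sessions AS s
    INNER JOIN sys.dm_resource_governor_workload_groups AS wg
            ON s.group_id = wg.group_id
    INNER JOIN sys.dm_resource_governor_resource_pools AS rp
            ON wg.pool_id = rp.pool_id
    WHERE s.is_user_process = 1;
    GO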

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controllers highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, then the enterprise services are immediately restarted on the other, standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in an HA environment and some of the new commands. What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node. What are the prerequisites? To install EC in an HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured - we will not discuss those in this post. However, best practices should be applied when installing and configuring; I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn: Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, and network cards, for example. Operating System: We can use Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC. Network: There are a number of requirements for network cards in clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public/private and Virtual IPs (VIPs). Storage: Shared storage will be required for the cluster voting disks, Oracle Cluster Registry (OCR) and the EC's libraries. Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html Remote Database: Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242 For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes.
    The standby node then becomes active and all Ops Center services are resumed. Connecting to the Enterprise Controller from your favourite browser: Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/; this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node. Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Read the article

  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I asked SSIS expert Andy Leonard to discuss one of the most interesting concepts of SSIS Catalog Messages. There is plenty of interesting and useful information captured in the SSIS catalog, and we will learn together how to explore it. The SSIS Catalog captures a lot of cool information by default. Here's a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It's one of my favorite messages in the SSIS log because it gives me excellent information when I'm tuning SSIS data flows. The message reads similar to: Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory. The query: USE SSISDB GO DECLARE @MessageSourceType INT = 60 DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed ' DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was ' DECLARE @CacheUsedString VARCHAR(100) = 'The cache used ' DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%' DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%' DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%' SELECT operation_id , SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount , SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime , CASE WHEN (CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) / CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1)))) END AS LookupRowsPerSecond , SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed , CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1)))) / CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) END AS LookupBytesPerRow FROM [catalog].[operation_messages] WHERE message_source_type = @MessageSourceType AND MESSAGE LIKE @StartOfIDSearchString GO
    Note that you have to set some parameter values: @MessageSourceType [int] – represents the message source type value from the following list: 10 – Entry APIs, such as T-SQL and CLR Stored procedures; 20 – External process used to run package (ISServerExec.exe); 30 – Package-level objects; 40 – Control Flow tasks; 50 – Control Flow containers; 60 – Data Flow task; 70 – Custom execution message. Note: Taken from Reza Rad's (excellent!) helper.MessageSourceType table found here. @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string 'The Lookup processed ' identifies all the Lookup Transformation messages I desire to parse. @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string 'The processing time was '. @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache Used value. It returns the memory used, in bytes. For this execution, I use the string 'The cache used '. The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for the ratio calculations: LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. Here are the results in an SSMS grid: This is not the only way to retrieve this information. And much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
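    While we wait for Andy's follow-up post on those functions, here is a rough, hypothetical sketch of what such a helper could look like – a scalar UDF (the name ExtractMessageToken is mine, not from the original post) that mirrors the PATINDEX/SUBSTRING logic above and returns the space-delimited token that follows a given marker string:

    CREATE FUNCTION dbo.ExtractMessageToken
    (
        @Message NVARCHAR(MAX),
        @Marker  VARCHAR(100)
    )
    RETURNS VARCHAR(100)
    AS
    BEGIN
        -- Find the marker, then return the token that follows it (up to the next space).
        DECLARE @Start INT = PATINDEX('%' + @Marker + '%', @Message);
        IF @Start = 0 RETURN NULL;
        SET @Start = @Start + LEN(@Marker) + 1;
        RETURN SUBSTRING(@Message, @Start, CHARINDEX(' ', @Message + ' ', @Start) - @Start);
    END
    GO
    -- Example use (same filters as the query above):
    -- SELECT operation_id,
    --        dbo.ExtractMessageToken(MESSAGE, 'The Lookup processed ') AS LookupRowsCount
    -- FROM [catalog].[operation_messages]
    -- WHERE message_source_type = 60 AND MESSAGE LIKE '%The Lookup processed %';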

    Read the article

  • Profit's COLLABORATE 10 Session Selections

    - by Aaron Lazenby
    COLLABORATE 2010 is a mere 11 days away (thanks for the reminder @ocp_advisor). Every year I publish a list of the sessions I think reflect some of the more interesting people/trends in enterprise IT. I should be at all of these sessions, so drop by for a chat--I'll be the guy tapping out emails on my iPad... Monday, April 19 9:15 a.m. - Keynote: Transforming Customer Value, Delivering Highest Customer Service Location: Keynote Hall I never miss Charles Phillips when he speaks--it's one of the best opportunities to get an update on Oracle product developments and strategy. And there's certainly occasion for an update: this will be Phillips' first big presentation since the Oracle + Sun Strategy Update in late January. Phillips is appearing with Oracle Executive Vice President of Development Thomas Kurian, which means there should be some excellent information about how customers are using Oracle's complete software and hardware stack to address enterprise IT challenges. The session should provide some excellent context for the rest of the week's sessions...don't miss it. 10:45 a.m. - Oracle Fusion Applications: Functional Overview Location: South Seas F. I met Basheer Khan at COLLABORATE 08 in Denver and have followed his work ever since. He's a former member of the OAUG Board of Directors, an Oracle ACE, and a charismatic enterprise IT expert. Having worked with the Oracle Usability Advisory Board, Basheer should have some fascinating insights to share about the features and interface of Oracle's Fusion Applications. This session, along with Nadia Bendjedou's "10 Things You Can Do Today to Prepare for the Next Generation Applications" (on Tuesday, April 20, 8:00 a.m. in room 3662), should give attendees the update they need about Oracle's next-generation applications. 1:15 p.m. - E-Business Suite in the Amazon Cloud Location: South Seas H. I did my first full-fledged cloud computing coverage at last year's COLLABORATE show (check out my interview with Oracle's Bill Hodak), where I first learned about Amazon's EC2 offering. I've since talked with several people who have provisioned server space on Amazon's cloud with great results. So I'm looking forward to watching the audience configure an instance of the Oracle E-Business Suite release 12 on the cloud while Chuck Edwards from Blue Gecko drives. This session should take some of the mist and vapor out of the cloud conversation. 2:30 p.m. - "Zero Sign-on" to EBS - Enabling 96000 Users to Login to EBS Without User Maintenance Location: South Seas H. I'll be sitting tight in South Seas H for the next session on Monday, where Doug Pepka, a ten-year veteran of communications giant Comcast, will be walking attendees through a massive single sign-on (SSO) project across the enterprise. I'm working on a story about SSO for the August issue of Profit, so this session has real practical value to me. Plus the proliferation of user account logins--both personal and professional--makes this a critical usability/change management issue for IT leaders planning for successful long-term IT implementations. Tuesday 8:00 a.m. - Information Architecture for Men in Kilts Location: SURF A. Getting to an 8:00 a.m. presentation is a tall order in Las Vegas, but presenter Billy Cripe will make it worth your effort. Not only is the title of this session great, but the content should appeal to any IT strategist looking to push the limits of Web 2.0 technologies in the enterprise.
    Cripe is a product management director of Enterprise 2.0 and Enterprise Content Management at Oracle, author of Reshaping Your Business with Web 2.0, and a prolific blogger--he knows how information architecture is critical to an Enterprise 2.0 implementation. 10:30 a.m. - Oracle Virtualization: From Desktop to Data Center Location: REEF F. Data center virtualization is still one of the best ways to reduce the cost of running enterprise IT. With the addition of Sun products, Oracle has the industry's most comprehensive virtualization portfolio. I must admit, I'm no expert in this subject. So I'm looking forward to Monica Kumar's presentation so I can get up to speed. Wednesday 8:00 a.m. - The Art of the Steal Location: Mandalay Bay Ballroom J. Many will know Frank Abagnale from Steven Spielberg's 2002 film "Catch Me if You Can." The one-time con man and international fugitive who swindled $2.5 million in forged checks went on to help U.S. federal officials investigate fraud cases. Now the CEO of Abagnale and Associates, he has become an invaluable source to the business world on the subject of fraud and fraud protection. With identity theft and digital fraud still on the rise, this session should be an entertaining, and sobering, education on the threats facing businesses and customers around the world. A great way to start Wednesday. 1:00 p.m. - Google Wave: Will it replace e-mail as we know it today? Location: SURF E. By many assessments (my own included), Google Wave is a bit of an open collaboration failure. It may seem like an odd reason for me to be excited about this session, but I'm looking forward to the chance to revisit the technology. Also, this is a great case study in connecting free, available Internet tools to existing enterprise computing environments--an issue that IT strategists must contend with as workers spread out and choose their own productivity tools.

    Read the article

  • What Will Happen to Real Estate Leases when Operating Leases are Gone?

    - by Theresa Hickman
    Many people are concerned about what will happen to real estate leases when FASB and IASB abolish operating leases. They plan to unveil the proposed standards on treating leases this summer as part of the convergence project, but no "finalized ruling" is expected for at least a year because it will need to get formal consensus from many players, such as the SEC, American Association of Investors, Congress, the Big Four, the American Association of Realtors, the international equivalents of these, etc. If your accounting is a bit rusty, an Operating Lease is where you lease equipment or some asset for a shorter period than the actual (expected) life of the asset and then give the asset back while it still has some useful life in it. (Think leasing a car). Because an Operating Lease does not contain any of the provisions that would qualify it as a Capital Lease, the lease is not treated as a sale or purchase and hits the lessee's rental expense and the lessor's revenue. So it all stays on the P&L (assuming no prepayments are made). Capital Leases, on the other hand, hit the lessee's and lessor's balance sheets because the asset is treated as a sale. (I'm ignoring interest and depreciation here to emphasize my point). Question: What will happen to real estate leases when Operating Leases go away and how will Oracle Financials address these changes? Before I attempt to address these questions, here's a real-life example to expound on some of the issues: Let's say a U.S. retailer leases a store in a mall for 15 years. Under U.S. GAAP, the lease is considered an operating or expense lease. Will that same lease be considered a capital lease under IFRS? Real estate leases are supposedly going to be capitalized under IFRS. If so, will everyone need to change all leases from operating to capital? Or, could we make some adjustments so we report the lease as an expense for operations reporting but capitalize it for SEC reporting? Would all aspects of the lease be capitalized, or would some line items still be expensed? For example, many retail store leases are defined to include (1) the agreed-to rent amount; (2) a negotiated increase in base rent, e.g., maybe a 5% increase in Year 5; (3) a sales rent component whereby the retailer pays a variable additional amount based on the sales generated in the prior month; (4) parking lot maintenance fees. Would the entire lease be capitalized, or would some portions still be expensed? To help answer these questions, I met up with our resident accounting expert and walking encyclopedia, Seamus Moran. Here's what he had to say: Oracle is aware of the potential changes specific to reporting/capitalization of real estate leases; i.e., we are aware that FASB and IASB have identified real estate leases as one of the areas for standards convergence. Oracle stays apprised of the on-going convergence through our domain expertise staff, our relationship with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to regulatory compliance worldwide. At this time, Oracle expects that the standards convergence committee will make a recommendation about reporting standards for real estate leases in about a year. Following typical procedures, we also expect that the recommendation will be up for review for a year, and customers will then need to start reporting to the new standard about a year after that. So that means we would expect the first customer to report under the new standard in maybe 3 years.
    Typically, after the new standard is finalized and distributed, we find that our customers then begin to evaluate how they plan to meet the new standard. And through groups like the Customer Advisory Boards (CABs), our customers tell us what kind of product changes are needed in order to satisfy their new reporting requirements. Of course, Oracle is also working with the Big 4 and Accenture and other implementers in order to ascertain that these recommended changes will indeed meet new reporting standards. So the best advice we can offer right now is: stay apprised of the standards convergence committee; know that Oracle is also staying abreast of developments; get involved with your CAB so your voice is heard; know that Oracle products continue to be GAAP compliant, and we will continue to maintain that as our standard. But exactly what is that "standard"--we need to wait on the standards convergence committee. In a nutshell, operating leases will become either capital leases or month-to-month rentals, but it is still too early, too political and too uncertain to call out at this point.

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 for the bargain early bird rate of $195. Register now to secure your spot! The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, which was Agile's PLM customer conference that was held on an annual basis prior to Oracle's acquisition of Agile in 2007. In February 2012, due to feedback we received from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well attended that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds--the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain. Top Five Reasons to Attend the PLM Summit 2013 In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit, for anyone out there who needs a little extra encouragement to register: 1. The Best Opportunities for Customer Networking The PLM Summit offers attendees numerous opportunities to learn and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customer's perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer "Understanding the scope of other companies' projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now! 2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 PLM-focused in-depth educational sessions led by Agile PLM experts, customers and partners that will cover a range of specific product and industry-focused topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for a view of all the sessions. 3.
    Visit with the PLM Partner Community Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors. 4. See Agile PLM in Action at our Dedicated PLM Demo Pods At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see specific Agile PLM functionality up close, if you have a question on how to extend the scope of your current implementation, or if you want a better understanding of how to leverage Agile PLM to address specific use-cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit. 5. Spend Some Time in Lovely San Francisco Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, which is a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

< Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >