Search Results

Search found 64472 results on 2579 pages for 'data context'.


  • SQL SERVER – Order By Numeric Values Formatted as String

    - by pinaldave
    When I was writing this blog post I had a hard time coming up with a title, so I did my best to come up with one. Here is the reason why. I wrote a blog post earlier, SQL SERVER – Find First Non-Numeric Character from String. One of the questions was how that post could be useful in a real-life scenario. This blog post is the answer to that question. Let us first see the problem. We have a table with a column containing alphanumeric data. The data always starts with an integer and ends with a string. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. The problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Prepare sample data: -- How to find first non-numeric character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO The above query gives the following result set. Now let us use ORDER BY Col1 and observe the result along with the original SELECT. -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO The result is not as expected; we need the result in the following format. Here is a good example of how we can use PATINDEX. -- Use of PATINDEX SELECT ID, LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) 'Numeric Character', Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO We can use PATINDEX to identify the length of the digit part in the alphanumeric string (remember: our string always has an integer as its first part; it will not work in any other scenario). Now you can use the LEFT function to extract the integer portion from the alphanumeric string and order the data according to it. You can easily clean up by dropping the table. DROP TABLE MyTable GO Here is the complete script so you can easily refer to it. -- How to find first non-numeric character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO -- Use of PATINDEX SELECT ID, Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO DROP TABLE MyTable GO Well, isn't it an interesting solution? Any suggestions for a better solution? Additionally, any suggestions for changing the title of this blog post? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology
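    A quick note on the ordering above: LEFT() returns a string, so the ORDER BY compares the prefixes as text ('1', '11', '111', '2', '22') rather than as numbers (1, 2, 11, 22, 111). If a true numeric sort is the goal, a minimal variation (a sketch against the same MyTable, assuming every Col1 value starts with at least one digit) is to cast the extracted prefix to INT first:

      -- Sketch: numeric ordering of the integer prefix
      -- (appending 'X' guards PATINDEX against values that are all digits)
      SELECT ID,
             CAST(LEFT(Col1, PATINDEX('%[^0-9]%', Col1 + 'X') - 1) AS INT) AS 'Numeric Value',
             Col1 AS 'Original Character'
      FROM MyTable
      ORDER BY CAST(LEFT(Col1, PATINDEX('%[^0-9]%', Col1 + 'X') - 1) AS INT)
      GO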


  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: working with huge data is very common in data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly. Suddenly, one day it starts returning the data very slowly. What are the three things you would do to diagnose this? After diagnosing, what would you do to resolve the performance issue? Participate in my question over here. I requested BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about complicated subjects in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they run. Some possible changes are listed below:  1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues until the data grows. How to find it out? (Answer: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries.)  2) Internal factors: Is some slow MDX query, or are multiple MDX queries, running at the same time that were not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Answer: Again, Profiler and Perfmon counters will help in finding this out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once.)  3) External factors: Is some other application competing for the same resources?  HINT: Read “Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services” (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx). Well, these are great tips. Now win big prizes by participating in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
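    One concrete way to test hint 2 (caches being cleared by processing) is to clear the cache yourself and compare cold-cache versus warm-cache timings in Profiler. A minimal XMLA sketch for this is shown below; "MyCubeDatabase" is a placeholder DatabaseID and must be replaced with the ID of your own Analysis Services database.

      <!-- Sketch: clear the SSAS cache for one database so MDX timings can be
           compared cold vs. warm. "MyCubeDatabase" is a placeholder DatabaseID. -->
      <ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
        <Object>
          <DatabaseID>MyCubeDatabase</DatabaseID>
        </Object>
      </ClearCache>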


  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud).  He paused, then said, "what is the cloud?"  I replied, "a bunch of servers connected to the internet."  Apparently he had visions of something much more magnificent.  Another similar term is "big data."  These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations.  At their core, those terms are shiny packages holding recycled ideas. I see many headlines declaring big data changes everything, but it doesn't.  Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented.  But there have been a few changes to the landscape that make big data a topic of conversation: 1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away.  CPUs are faster, solid state drives are more plentiful, and new ways to store and search data are available.  My iPhone has more power than the computer used in the Apollo missions to the moon. 2. Unstructured data is everywhere.  The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes."  The variety of information available to retailers is huge, and its meaning difficult to discern. 3. Everything is connected.  Looking at a report from my router, there are no less than 20 active devices on my home network.  We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away.  Not only is there more data, but it's arriving at higher velocity. Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, and different technologies to deal with them.  But at heart, the objectives are still the same: informed decisions, accurate forecasts, and improved optimizations. So don't let the term "big data" throw you off the scent.  Retailers still need to execute on the basics.  But do take a fresh look at the data that's available and the new technologies to process it.  The landscape will continue to change, and agile organizations will always be reevaluating their approaches.  You can just add some more weapons to the arsenal.


  • Architecture for a template-building, WYSIWYG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, users will choose an existing template. The template will have a defined layout. The user will then be taken to the designer, where he will be able to edit the text and style, and additionally change some layout options. I've been thinking about how best to go about structuring this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually, it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change, depending on the type of consumer viewing the campaign. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. However, the data that will be manipulated by the user in the designer are things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data. It seems, then, that the process will go something like this: I'll need to create templates for this designer that have two dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions. When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a handlebars template to the Ember app. The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template. When he saves, he'll send back the selected template, along with all the data and options he's made. When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data Is my thinking roughly accurate about this, or is there a simpler way?


  • C# send/receive object over network?

    - by Data-Base
    Hello, I'm working on a server/client project. The client will be asking the server for info and the server will send it back to the client. The info may be a string, number, array, list, ArrayList or any other object. I found a lot of examples but I faced issues. The solution I found so far is to serialize the object (data) and send it, then deserialize it to process. Here is the server code: public void RunServer(string SrvIP,int SrvPort) { try { var ipAd = IPAddress.Parse(SrvIP); /* Initializes the Listener */ if (ipAd != null) { var myList = new TcpListener(ipAd, SrvPort); /* Start Listening at the specified port */ myList.Start(); Console.WriteLine("The server is running at port "+SrvPort+"..."); Console.WriteLine("The local End point is :" + myList.LocalEndpoint); Console.WriteLine("Waiting for a connection....."); while (true) { Socket s = myList.AcceptSocket(); Console.WriteLine("Connection accepted from " + s.RemoteEndPoint); var b = new byte[100]; int k = s.Receive(b); Console.WriteLine("Received..."); for (int i = 0; i < k; i++) Console.Write(Convert.ToChar(b[i])); string cmd = Encoding.ASCII.GetString(b); if (cmd.Contains("CLOSE-CONNECTION")) break; var asen = new ASCIIEncoding(); // sending text s.Send(asen.GetBytes("The string was received by the server.")); // the line above to be modified to send serialized object? Console.WriteLine("\nSent Acknowledgement"); s.Close(); Console.ReadLine(); } /* clean up */ myList.Stop(); } } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } } Here is the client code that should return an object: public object runClient(string SrvIP, int SrvPort) { object obj = null; try { var tcpclnt = new TcpClient(); Console.WriteLine("Connecting....."); tcpclnt.Connect(SrvIP, SrvPort); // use the ipaddress as in the server program Console.WriteLine("Connected"); Console.Write("Enter the string to be transmitted : "); var str = Console.ReadLine(); Stream stm = tcpclnt.GetStream(); var asen = new ASCIIEncoding(); if (str != null) { var ba = asen.GetBytes(str); Console.WriteLine("Transmitting....."); stm.Write(ba, 0, ba.Length); } var bb = new byte[2000]; var k = stm.Read(bb, 0, bb.Length); string data = null; for (var i = 0; i < k; i++) Console.Write(Convert.ToChar(bb[i])); //convert to object code ?????? Console.ReadLine(); tcpclnt.Close(); } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } return obj; } I need to know a good serialize/deserialize approach and how to integrate it into the solution above :-( I would be really thankful for any help. Cheers
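    For the serialize/deserialize part the question asks about, one approach that fits the .NET-era code above is BinaryFormatter with a length prefix, provided the transferred type is marked [Serializable] and both sides share the assembly that defines it. The sketch below is illustrative rather than a drop-in patch: the Wire class and its method names are made up here, and BinaryFormatter has since been deprecated for security reasons, so a text format such as JSON is usually the safer modern choice.

      using System;
      using System.IO;
      using System.Runtime.Serialization.Formatters.Binary;

      // Sketch: turn any [Serializable] object into length-prefixed bytes and back.
      public static class Wire
      {
          public static void SendObject(Stream stream, object payload)
          {
              var formatter = new BinaryFormatter();
              using (var buffer = new MemoryStream())
              {
                  formatter.Serialize(buffer, payload);                     // object -> bytes
                  byte[] bytes = buffer.ToArray();
                  stream.Write(BitConverter.GetBytes(bytes.Length), 0, 4);  // 4-byte length prefix
                  stream.Write(bytes, 0, bytes.Length);                     // then the payload itself
              }
          }

          public static object ReceiveObject(Stream stream)
          {
              byte[] lengthPrefix = ReadExactly(stream, 4);
              int length = BitConverter.ToInt32(lengthPrefix, 0);
              byte[] bytes = ReadExactly(stream, length);
              using (var buffer = new MemoryStream(bytes))
              {
                  return new BinaryFormatter().Deserialize(buffer);         // bytes -> object
              }
          }

          // Stream.Read may return fewer bytes than requested, so loop until done.
          private static byte[] ReadExactly(Stream stream, int count)
          {
              var target = new byte[count];
              int offset = 0;
              while (offset < count)
              {
                  int read = stream.Read(target, offset, count - offset);
                  if (read == 0) throw new EndOfStreamException("Connection closed prematurely.");
                  offset += read;
              }
              return target;
          }
      }

    On the server side you would then call Wire.SendObject(new NetworkStream(s), yourResultObject) where the acknowledgement string is currently sent, and on the client obj = Wire.ReceiveObject(stm) where the byte loop currently prints characters.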


  • Apache server still running but users cannot connect to the website; after "sudo apachectl restart" they can connect again. What's wrong? [on hold]

    - by Tinyfool
    My website is http://ourcoders.com/, recently I found sometime user report can not connect to my website, but I ssh to server, I found Apache still running, like this: root@AY1401261057077842eaZ:~# ps aux|grep apache root 873 0.0 1.3 290496 13528 ? Ss Aug18 0:28 /usr/sbin/apache2 -k start www-data 3490 0.0 1.8 299004 18764 ? S Aug21 0:01 /usr/sbin/apache2 -k start www-data 3612 0.0 1.5 296008 15540 ? S Aug21 0:03 /usr/sbin/apache2 -k start www-data 3860 0.0 1.5 296636 16268 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 3913 0.0 1.2 295468 13084 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 3931 0.0 1.7 298488 18228 ? S 16:02 0:01 /usr/sbin/apache2 -k start www-data 3938 0.0 1.9 299128 19724 ? S 16:02 0:02 /usr/sbin/apache2 -k start www-data 4465 0.0 1.6 296688 16404 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 5075 0.0 1.2 295468 13044 ? S 16:16 0:00 /usr/sbin/apache2 -k start www-data 5153 0.0 1.5 295880 15612 ? S 16:17 0:00 /usr/sbin/apache2 -k start www-data 5770 0.0 1.5 296608 16016 ? S 16:30 0:00 /usr/sbin/apache2 -k start www-data 5773 0.0 1.6 296948 16640 ? S 16:30 0:00 /usr/sbin/apache2 -k start www-data 5816 0.0 1.6 297216 16976 ? S 16:31 0:01 /usr/sbin/apache2 -k start www-data 5918 0.0 1.7 298228 17820 ? S 16:33 0:01 /usr/sbin/apache2 -k start www-data 6023 0.0 1.9 299864 19840 ? S 16:35 0:13 /usr/sbin/apache2 -k start www-data 6073 0.0 1.7 298480 18120 ? S 16:36 0:02 /usr/sbin/apache2 -k start www-data 6088 0.0 2.0 300488 21008 ? S 16:36 0:12 /usr/sbin/apache2 -k start www-data 6114 0.0 1.7 298548 18268 ? S 16:37 0:12 /usr/sbin/apache2 -k start www-data 6134 0.0 1.6 296688 16532 ? S 16:37 0:04 /usr/sbin/apache2 -k start www-data 6193 0.0 1.7 297908 17420 ? S 16:38 0:08 /usr/sbin/apache2 -k start www-data 6821 0.0 1.8 299556 19072 ? S 16:43 0:11 /usr/sbin/apache2 -k start www-data 7058 0.0 1.7 298676 18204 ? S 16:48 0:10 /usr/sbin/apache2 -k start www-data 7065 0.0 1.8 299028 18868 ? S 16:48 0:11 /usr/sbin/apache2 -k start www-data 7084 0.0 1.8 299508 19020 ? S 16:48 0:11 /usr/sbin/apache2 -k start www-data 7221 0.0 1.8 299160 18768 ? S 16:51 0:09 /usr/sbin/apache2 -k start www-data 11453 0.0 1.7 298484 18256 ? S 09:39 0:02 /usr/sbin/apache2 -k start root 26324 0.0 0.0 8084 920 pts/0 S+ 22:52 0:00 grep --color=auto apache root 28517 0.0 0.0 4404 612 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28518 0.0 0.0 4404 616 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28519 0.0 0.0 4404 612 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28520 0.0 0.0 4404 616 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28521 0.0 0.0 4312 552 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28522 0.0 0.0 4308 548 ? S Aug21 0:07 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28523 0.0 0.0 4176 352 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28524 0.0 0.0 4180 356 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log Today's only error log is blow. [Sat Aug 23 22:52:47 2014] [notice] SIGHUP received. 
Attempting to restart [Sat Aug 23 22:52:47 2014] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.13 with Suhosin-Patch configured -- resuming normal operations traffic information: cat access-2014-08-23.log | cut -d " " -f4 |cut -d":" -f2 |sort|uniq -c |sort -nr 5692 14 5291 15 5083 16 4723 23 4463 12 4057 17 4011 11 3926 13 3852 10 3187 05 3176 09 3055 06 2790 07 2672 00 2608 02 2591 01 2577 04 2514 03 2497 08 707 22 88 18 After I use "sudo apachectl restart", user can connect my website. So I want to know? What is the problem? And if "sudo apachectl restart" is needed, can I automate run this command? Today this kind struts appear again, and I run netstat -a -n Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 115.28.146.116:80 125.39.208.120:50708 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:50278 SYN_RECV tcp 0 0 115.28.146.116:80 220.173.142.152:23320 SYN_RECV tcp 0 0 115.28.146.116:80 60.173.247.132:52851 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:39397 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:56894 SYN_RECV tcp 0 0 115.28.146.116:80 183.129.174.2:21291 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:44499 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:34017 SYN_RECV tcp 0 0 115.28.146.116:80 124.65.50.210:3774 SYN_RECV tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:15770 0.0.0.0:* LISTEN tcp 1 0 115.28.146.116:80 14.127.65.219:61633 CLOSE_WAIT tcp 305 0 115.28.146.116:80 125.39.208.120:37593 ESTABLISHED tcp 0 0 10.144.142.201:52866 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52873 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52868 10.146.6.61:3306 TIME_WAIT tcp 343 0 115.28.146.116:80 182.118.20.215:50709 ESTABLISHED tcp 0 0 115.28.146.116:54784 173.194.127.243:80 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:41253 CLOSE_WAIT tcp 0 0 10.144.142.201:52876 10.146.6.61:3306 ESTABLISHED tcp 559 0 115.28.146.116:80 218.241.144.114:54501 ESTABLISHED tcp 376 0 115.28.146.116:80 116.213.196.119:50604 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59339 CLOSE_WAIT tcp 214 0 115.28.146.116:80 142.4.215.40:34443 ESTABLISHED tcp 0 0 115.28.146.116:48635 115.28.146.116:80 ESTABLISHED tcp 187 0 115.28.146.116:80 115.28.146.116:48635 ESTABLISHED tcp 0 0 10.144.142.201:52853 10.146.6.61:3306 TIME_WAIT tcp 594 0 115.28.146.116:80 183.129.174.2:7090 CLOSE_WAIT tcp 0 0 10.144.142.201:52874 10.146.6.61:3306 TIME_WAIT tcp 0 0 115.28.146.116:80 182.118.20.166:44081 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59028 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61665 CLOSE_WAIT tcp 0 0 10.144.142.201:52860 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:46983 10.146.6.61:3306 ESTABLISHED tcp 0 2290 115.28.146.116:80 14.154.179.243:41049 FIN_WAIT1 tcp 0 0 10.144.142.201:42900 10.146.6.61:3306 ESTABLISHED tcp 571 0 115.28.146.116:80 220.173.142.152:23295 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59337 CLOSE_WAIT tcp 438 0 115.28.146.116:80 42.120.74.202:31567 CLOSE_WAIT tcp 0 0 115.28.146.116:80 113.36.238.28:59498 ESTABLISHED tcp 259 0 115.28.146.116:80 66.249.65.56:36739 ESTABLISHED tcp 0 0 115.28.146.116:80 113.36.238.28:59341 ESTABLISHED tcp 0 0 115.28.146.116:80 142.4.215.40:34267 FIN_WAIT2 tcp 799 0 115.28.146.116:80 180.173.88.1:52779 ESTABLISHED tcp 0 0 115.28.146.116:80 117.136.25.132:25207 FIN_WAIT2 tcp 0 0 115.28.146.116:80 220.181.108.186:42540 TIME_WAIT tcp 0 0 10.144.142.201:59902 10.242.174.13:80 TIME_WAIT tcp 0 1820 
115.28.146.116:80 218.22.140.90:39266 LAST_ACK tcp 0 0 115.28.146.116:80 66.249.65.64:56977 TIME_WAIT tcp 669 0 115.28.146.116:80 83.251.90.61:49664 ESTABLISHED tcp 0 0 10.144.142.201:52872 10.146.6.61:3306 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:43398 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:25739 ESTABLISHED tcp 378 0 115.28.146.116:80 148.251.124.173:39313 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61697 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52986 CLOSE_WAIT tcp 769 0 115.28.146.116:80 14.127.65.219:61537 ESTABLISHED tcp 0 0 10.144.142.201:52859 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:55734 10.164.2.163:9200 TIME_WAIT tcp 563 0 115.28.146.116:80 202.55.20.10:22577 CLOSE_WAIT tcp 194 0 115.28.146.116:80 37.58.100.165:50908 CLOSE_WAIT tcp 791 0 115.28.146.116:80 116.192.2.185:45628 ESTABLISHED tcp 709 0 115.28.146.116:80 113.116.61.178:65209 ESTABLISHED tcp 706 0 115.28.146.116:80 183.227.44.237:54519 ESTABLISHED tcp 301 0 115.28.146.116:80 118.198.243.127:31180 ESTABLISHED tcp 0 0 10.144.142.201:55721 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55726 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55723 10.164.2.163:9200 TIME_WAIT tcp 681 0 115.28.146.116:80 83.251.90.61:49662 ESTABLISHED tcp 0 0 115.28.146.116:80 83.251.90.61:65274 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59022 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52781 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59037 CLOSE_WAIT tcp 0 0 10.144.142.201:55728 10.164.2.163:9200 TIME_WAIT tcp 231 0 115.28.146.116:37596 110.75.102.62:80 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61569 CLOSE_WAIT tcp 0 0 10.144.142.201:51310 10.146.6.61:3306 ESTABLISHED tcp 299 0 115.28.146.116:80 123.125.71.16:36281 ESTABLISHED tcp 0 0 115.28.146.116:48620 115.28.146.116:80 ESTABLISHED tcp 1 0 115.28.146.116:80 183.227.44.237:54520 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59026 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:5490 ESTABLISHED tcp 665 0 115.28.146.116:80 83.251.90.61:49663 ESTABLISHED tcp 0 0 115.28.146.116:53744 173.194.127.147:80 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59023 CLOSE_WAIT tcp 0 0 115.28.146.116:22 116.192.2.185:34205 ESTABLISHED tcp 333 0 115.28.146.116:80 149.174.113.111:54338 CLOSE_WAIT tcp 0 0 10.144.142.201:52861 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52863 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:43272 CLOSE_WAIT tcp 767 0 115.28.146.116:80 49.4.158.2:52947 CLOSE_WAIT tcp 668 0 115.28.146.116:80 83.251.90.61:49665 ESTABLISHED tcp 642 0 115.28.146.116:80 222.78.185.50:55788 ESTABLISHED tcp 710 0 115.28.146.116:80 113.116.61.178:65264 ESTABLISHED tcp 284 0 115.28.146.116:80 157.55.39.243:65185 ESTABLISHED tcp 450 0 115.28.146.116:80 65.49.44.149:55496 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:36629 CLOSE_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42424 CLOSE_WAIT tcp 187 0 115.28.146.116:80 115.28.146.116:48620 ESTABLISHED tcp 1 0 115.28.146.116:80 14.127.65.219:61601 CLOSE_WAIT tcp 776 0 115.28.146.116:80 202.118.253.102:64883 CLOSE_WAIT tcp 841 0 115.28.146.116:80 37.228.105.28:49472 ESTABLISHED tcp 787 0 115.28.146.116:80 112.65.226.198:52192 ESTABLISHED tcp 0 0 10.144.142.201:55717 10.164.2.163:9200 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42855 CLOSE_WAIT tcp 379 0 115.28.146.116:80 101.226.166.219:2322 ESTABLISHED tcp 0 0 115.28.146.116:80 183.60.212.152:43063 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52780 CLOSE_WAIT tcp 784 0 
115.28.146.116:80 101.95.29.26:63094 ESTABLISHED tcp 463 0 115.28.146.116:80 65.49.44.149:53876 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:37946 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:41157 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59036 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52984 CLOSE_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:38100 CLOSE_WAIT tcp 0 0 10.144.142.201:52865 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59027 CLOSE_WAIT tcp 0 0 115.28.146.116:36508 173.194.127.81:80 ESTABLISHED tcp 210 0 115.28.146.116:80 188.143.232.123:47775 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59025 CLOSE_WAIT tcp 0 0 10.144.142.201:52857 10.146.6.61:3306 TIME_WAIT tcp 654 0 115.28.146.116:80 49.4.158.2:52985 ESTABLISHED tcp 0 0 115.28.146.116:58627 110.75.102.62:80 ESTABLISHED tcp 782 0 115.28.146.116:80 180.153.219.13:40293 ESTABLISHED tcp 792 0 115.28.146.116:80 116.192.2.185:48187 CLOSE_WAIT tcp6 0 0 :::22 :::* LISTEN udp 0 0 115.28.146.116:123 0.0.0.0:* udp 0 0 10.144.142.201:123 0.0.0.0:* udp 0 0 127.0.0.1:123 0.0.0.0:* udp 0 0 0.0.0.0:123 0.0.0.0:* udp6 0 0 :::123 :::* Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 8447 /var/run/mysqld/mysqld.sock unix 2 [ ACC ] SEQPACKET LISTENING 6678 /run/udev/control unix 2 [ ACC ] STREAM LISTENING 6482 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 7543 /var/run/dbus/system_bus_socket unix 7 [ ] DGRAM 7551 /dev/log unix 2 [ ACC ] STREAM LISTENING 7650 /var/run/nscd/socket unix 2 [ ] DGRAM 7156424 unix 3 [ ] STREAM CONNECTED 7156137 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7156136 unix 2 [ ] DGRAM 7156135 unix 2 [ ] DGRAM 7155834 unix 2 [ ] DGRAM 9734 unix 3 [ ] STREAM CONNECTED 9151 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9150 unix 3 [ ] STREAM CONNECTED 9136 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9135 unix 3 [ ] STREAM CONNECTED 9106 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9105 unix 2 [ ] DGRAM 9073 unix 3 [ ] STREAM CONNECTED 7575 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7574 unix 3 [ ] STREAM CONNECTED 7565 unix 3 [ ] STREAM CONNECTED 7564 unix 3 [ ] STREAM CONNECTED 7332 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 7330 unix 3 [ ] DGRAM 6712 unix 3 [ ] DGRAM 6711 unix 3 [ ] STREAM CONNECTED 6662 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 6635
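    As for "can I automate run this command?": a small watchdog script run from cron can do it, though it only papers over the symptom; the many SYN_RECV and CLOSE_WAIT sockets above suggest Apache's workers are being exhausted, so the MaxClients/KeepAlive settings and possible SYN-flood traffic are worth investigating separately. A hedged sketch (the URL, paths and schedule are assumptions to adjust for your server):

      #!/bin/sh
      # Watchdog sketch: if the site stops answering HTTP even though apache2
      # processes are still running, restart Apache and log the event.
      URL="http://127.0.0.1/"
      LOG="/var/log/apache-watchdog.log"
      if ! curl --silent --max-time 15 --output /dev/null "$URL"; then
          echo "$(date '+%F %T') site not responding, restarting apache2" >> "$LOG"
          /usr/sbin/apachectl restart >> "$LOG" 2>&1
      fi

    Saved, for example, as /usr/local/bin/apache-watchdog.sh and scheduled every five minutes with an /etc/cron.d entry such as: */5 * * * * root /usr/local/bin/apache-watchdog.sh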


  • I want to change DPI with ImageMagick without changing the actual byte-size of the image data

    - by user1694803
    I feel horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In Gimp there is a very simple way to do what I want. I only have the German dialog installed but I'll try to translate it. I'm talking about going to "Picture-PrintingSize" and then adjusting the values "X-Resolution" and "Y-Resolution", which are known to me as so-called DPI values. You can also choose the unit, which by default is "Pixel/Inch". (In German the dialog is "Bild-Druckgröße" and there "X-Auflösung" and "Y-Auflösung".) OK, the values there are often "72" by default. When I change them to e.g. "300", the image stays the same on the computer, but if I print it, it comes out smaller, yet all the details are still there - it has a higher resolution on the printed paper (but a smaller size, which is fine for me). I often do that when I am working with LaTeX, or to be exact with the command "pdflatex" on a recent Ubuntu machine. When I do the above process with Gimp manually, everything works just fine: the images appear smaller in the resulting PDF but with high printing quality. What I am trying to do is to automate the process of going into Gimp and adjusting the DPI values. Since ImageMagick is known to be superb and I have used it for many other tasks, I tried to achieve my goal with this tool. But it just does not do what I want. After trying a lot of things I think this should actually be the command that is my friend: convert input.png -density 300 output.png This should set the DPI to 300, as I read everywhere on the web. It seems to work. When I check the file it stays the same. file input.png output.png input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced When I use this command, it seems like it did what I wanted: identify -verbose output.png | grep 300 Resolution: 300x300 PNG:pHYs : x_res=300, y_res=300, units=0 (Funnily enough, the same output comes for input.png, which confuses me... so these might be the wrong parameters to watch?) But when I now render my TeX with "pdflatex" the image is still big and blurry. Also, when I open the image with Gimp again the DPI values are set to "72" instead of "300". So there actually was no effect at all. Now what is the problem here? Am I getting something completely wrong? I can't be that wrong, since everything works just fine with Gimp... Thanks for any help on this. I am also open to other automated solutions which are easily done on a Linux system...
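    One detail worth noting in the identify output above is units=0: the 300x300 resolution is stored without a unit, which is typically why Gimp falls back to showing 72 DPI. Declaring the unit explicitly may be all that is missing; a hedged sketch, using the same file names as the question:

      # Sketch: set 300 DPI and declare the unit as pixels per inch, so the
      # PNG pHYs chunk carries a real unit instead of "undefined" (units=0).
      convert -units PixelsPerInch -density 300 input.png output.png

      # Verify: Units should now be reported as PixelsPerInch.
      identify -verbose output.png | grep -E 'Resolution|Units'

    Whether pdflatex then scales the image as expected still depends on how it is included (for example, whether \includegraphics is given an explicit width), so treat this as a likely fix rather than a guaranteed one.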


  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there has been several reports on TFS databases growing too fast and growing too big.  Notable this has been observed when one has started to use more features of the Testing system.  Also, the TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this there has been released some tools to remove unneeded data in the database, and also some fixes to correct for bugs which has been found and corrected during this process.  Further some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are: Anu’s very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP) Grant Holidays blogpost here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.   The problem can be divided into the following areas: Publishing of test results from builds Publishing of manual test results and their attachments in particular Publishing of deployment binaries for use during a test run Bugs in SQL server preventing total cleanup of data (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.   Also the pushing of binaries which happen for automated test runs, including tests run during a build using code coverage which will include all the files in the deployment folder, contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: Find out if you have a database space issue Set up your TFS server to minimize potential database issues If you have the “problem”, clean up the database and otherwise keep it clean   Analyze the data Are your database( s) growing ?  Are unused test results growing out of proportion ? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some  more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries . I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure have been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS Databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.  
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.
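    As a companion to the maintenance-plan suggestion above, the same two steps can also be scripted directly in T-SQL; a sketch, with the collection database name as a placeholder:

      -- Sketch: reclaim space after a large TAC cleanup, then rebuild indexes,
      -- because shrinking leaves the database heavily fragmented.
      USE [Tfs_DefaultCollection];
      GO
      DBCC SHRINKDATABASE (N'Tfs_DefaultCollection');
      GO
      -- Rebuild at least the attachment table's indexes (a full maintenance
      -- plan would rebuild indexes on all affected tables).
      ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD;
      GO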


  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 – Table per Hierarchy (TPH) Part 2 – Table per Type (TPT)In today’s blog post I am going to discuss Table per Concrete Type (TPC) which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines to choose an inheritance strategy mainly based on what we've learned in this series. TPC and Entity Framework in the Past Table per Concrete type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far and I've seen in some resources that it was even discouraged. The reason for that is just because Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means if you are following EF's Database-First or Model-First approaches then configuring TPC requires manually writing XML in the EDMX file which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with fluent API just like other strategies and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches. Table per Concrete Type (TPC)In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure: As you can see, the SQL schema is not aware of the inheritance; effectively, we’ve mapped two unrelated tables to a more expressive class structure. If the base class was concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns. TPC Implementation in Code First Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on EntityMappingConfiguration class called MapInheritedProperties that exactly does this for us. 
Here is the complete object model as well as the fluent API to create a TPC mapping: public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } }          public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } }          public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } }      public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; }              protected override void OnModelCreating(ModelBuilder modelBuilder)     {         modelBuilder.Entity<BankAccount>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("BankAccounts");         });         modelBuilder.Entity<CreditCard>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("CreditCards");         });                 } } The Importance of EntityMappingConfiguration ClassAs a side note, it worth mentioning that EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is an snapshot of this class: namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping {     public class EntityMappingConfiguration<TEntityType> where TEntityType : class     {         public ValueConditionConfiguration Requires(string discriminator);         public void ToTable(string tableName);         public void MapInheritedProperties();     } } As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT and now we are using its MapInheritedProperties along with ToTable method to create our TPC mapping. TPC Configuration is Not Done Yet!We are not quite done with our TPC configuration and there is more into this story even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of BankAccount and CreditCard types and tries to add them to the database: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount();     CreditCard creditCard = new CreditCard() { CardType = 1 };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Running this code throws an InvalidOperationException with this message: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges. The reason we got this exception is because DbContext.SaveChanges() internally invokes SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method on its turn by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges method merely iterates over all entries in ObjectStateManager and invokes AcceptChanges on each of them. 
Since the entities are in Added state, AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database and that's where the problem occurs since both the entities have been assigned the same value for their primary key by the database (i.e. on both BillingDetailId = 1) and the problem is that ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value hence it throws. If you take a closer look at the TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both BankAccounts and CreditCards table has been marked as identity. How to Solve The Identity Problem in TPC As you saw, using SQL Server’s int identity columns doesn't work very well together with TPC since there will be duplicate entity keys when inserting in subclasses tables with all having the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server’s int identity should be used. Some other RDBMSes have other mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds will solve the problem but yet another solution would be to completely switch off identity on the primary key property. As a result, we need to take the responsibility of providing unique keys when inserting records to the database. We will go with this solution since it works regardless of which database engine is used. Switching Off Identity in Code First We can switch off identity simply by placing DatabaseGenerated attribute on the primary key property and pass DatabaseGenerationOption.None to its constructor. DatabaseGenerated attribute is a new data annotation which has been added to System.ComponentModel.DataAnnotations namespace in CTP5: public abstract class BillingDetail {     [DatabaseGenerated(DatabaseGenerationOption.None)]     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } } As always, we can achieve the same result by using fluent API, if you prefer that: modelBuilder.Entity<BillingDetail>()             .Property(p => p.BillingDetailId)             .HasDatabaseGenerationOption(DatabaseGenerationOption.None); Working With The Object Model Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount()      {          BillingDetailId = 1                          };     CreditCard creditCard = new CreditCard()      {          BillingDetailId = 2,         CardType = 1     };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Polymorphic Associations with TPC is Problematic The main problem with this approach is that it doesn’t support Polymorphic Associations very well. 
After all, in the database, associations are represented as foreign key relationships and in TPC, the subclasses are all mapped to different tables so a polymorphic association to their base class (abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the the domain model we introduced here where User has a polymorphic association with BillingDetail. This would be problematic in our TPC Schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column, which would have to refer both concrete subclass tables. This isn’t possible with regular foreign key constraints. Schema Evolution with TPC is Complex A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses. Generated SQLLet's examine SQL output for polymorphic queries in TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that being executed in the database: var query = from b in context.BillingDetails select b; Just like the SQL query generated by TPT mapping, the CASE statements that you see in the beginning of the query is merely to ensure columns that are irrelevant for a particular row have NULL values in the returning flattened table. (e.g. BankName for a row that represents a CreditCard type). TPC's SQL Queries are Union Based As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (which is selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; (look at the lines highlighted in yellow.) EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined, project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well since here we can let the database optimizer find the best execution plan to combine rows from several tables. There is also no Joins involved so it has a better performance than the SQL queries generated by TPT where a Join is required between the base and subclasses tables. Choosing Strategy GuidelinesBefore we get into this discussion, I want to emphasize that there is no one single "best strategy fits all scenarios" exists. As you saw, each of the approaches have their own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario: If you don’t require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn’t usually required, and when modification of the base class in the future is unlikely. If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. 
Choosing Strategy Guidelines
Before we get into this discussion, I want to emphasize that there is no single "best strategy for all scenarios". As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:

- If you don't require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.
- If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.
- If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

Summary
In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, namely inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully this gives you better insight into the mapping of inheritance hierarchies, as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
Logical SQL:

SELECT "D0 Time"."T02 Per Name Month" saw_0,
       "D4 Product"."P01  Product" saw_1,
       "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
FROM "Sample Sales"
ORDER BY saw_0, saw_1

Physical SQL:

WITH SAWITH0 AS (
  select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
         sum(T835.Units) as c3, T879.Prod_Key as c4
  from Product T879 /* A05 Product */ ,
       Time_Mth T986 /* A08 Time Mth */ ,
       FactsRev T835 /* A11 Revenue (Billed Time Join) */
  where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
  group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
)
select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
from SAWITH0
order by c1, c2

Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

The Structure of the Common Enterprise Information Model (CEIM)
The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

Business Model
Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.
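Not part of the original post, but as a small practical footnote to the query cycle described at the beginning: clients hand Logical SQL to the BI Server over plain ODBC or JDBC. The following is only a hedged C# sketch; the DSN name, credentials and column names are assumptions based on the Sample Sales example above and must be replaced with your own BI Server data source.

using System;
using System.Data.Odbc;

class LogicalSqlDemo
{
    static void Main()
    {
        // Assumed DSN and credentials for an Oracle BI Server ODBC data source.
        const string connectionString = "DSN=AnalyticsWeb;UID=weblogic;PWD=welcome1";

        // Logical SQL, written against the presentation layer (Sample Sales).
        const string logicalSql =
            "SELECT \"D0 Time\".\"T02 Per Name Month\" saw_0, " +
            "\"F2 Units\".\"2-01  Billed Qty  (Sum All)\" saw_1 " +
            "FROM \"Sample Sales\" ORDER BY saw_0";

        using (var connection = new OdbcConnection(connectionString))
        using (var command = new OdbcCommand(logicalSql, connection))
        {
            connection.Open();
            using (OdbcDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                }
            }
        }
    }
}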

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the here discussed EF/MSSQL combination. Lately, I wondered how you would ‘mock’ the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post, this post now will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit namely). In short, it helps you in flexibly managing the state of a database in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll)  against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs.  So the here described testing scenario requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. 
With this prerequisite in place then, the test fixture's setup code could look something like this: [TestFixture, TestsOn(typeof(PersonRepository))] [Metadata("NDbUnit Quickstart URL",           "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")] [Description("Uses the NDbUnit library to provide test data to a local database.")] public class PersonRepositoryFixture {     #region Constants     private const string XmlSchema = @"..\..\TestData\School.xsd";     #endregion // Constants     #region Fields     private SchoolEntities _schoolContext;     private PersonRepository _personRepository;     private INDbUnitTest _database;     #endregion // Fields     #region Setup/TearDown     [FixtureSetUp]     public void FixtureSetUp()     {         var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;         _database = new SqlDbUnitTest(connectionString);         _database.ReadXmlSchema(XmlSchema);         var entityConnectionStringBuilder = new EntityConnectionStringBuilder         {             Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",             Provider = "System.Data.SqlClient",             ProviderConnectionString = connectionString         };         _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);         _personRepository = new PersonRepository(this._schoolContext);     }     [FixtureTearDown]     public void FixtureTearDown()     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         _schoolContext.Dispose();     }     ...  As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assemblies' App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INdUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this: private void InsertTestData(params string[] dataFileNames) {     _database.PerformDbOperation(DbOperationFlag.DeleteAll);     if (dataFileNames == null)     {         return;     }     try     {         foreach (string fileName in dataFileNames)         {             if (!File.Exists(fileName))             {                 throw new FileNotFoundException(Path.GetFullPath(fileName));             }             _database.ReadXml(fileName);             _database.PerformDbOperation(DbOperationFlag.InsertIdentity);         }     }     catch     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         throw;     } } This lets us easily insert test data from xml files, in any number and in a  controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. - Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller Open Source Projects. 
Last not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example: <?xml version="1.0" encoding="utf-8" ?> <School xmlns="http://tempuri.org/School.xsd">   <Person>     <PersonID>1</PersonID>     <LastName>Abercrombie</LastName>     <FirstName>Kim</FirstName>     <HireDate>1995-03-11T00:00:00</HireDate>   </Person>   <Person>     <PersonID>2</PersonID>     <LastName>Barzdukas</LastName>     <FirstName>Gytis</FirstName>     <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>   </Person>   <Person>     ... You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable. Executing some basic tests Here are some of the possible tests that can be written with the above preparations in place: private const string People = @"..\..\TestData\School.People.xml"; ... [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.List);     Assert.Count(34, names);     Assert.AreEqual("Abercrombie, Kim", names.First());     Assert.AreEqual("Zheng, Roger", names.Last()); } [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")] public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.Normal);     Assert.Count(34, names);     Assert.AreEqual("Alexandra Walker", names.First());     Assert.AreEqual("Yan Li", names.Last()); } [Test, TestsOn("PersonRepository.AddPerson")] public void AddPerson_CalledOnce_IncreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });     Assert.AreEqual(count + 1, _personRepository.Count); } [Test, TestsOn("PersonRepository.RemovePerson")] public void RemovePerson_CalledOnce_DecreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.RemovePerson(new Person { PersonID = 33 });     Assert.AreEqual(count - 1, _personRepository.Count); } Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases: Ok, and what if things are becoming somewhat more complex? Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimick a scenario which represents a more complex or exceptional case. 
The following test, for example, deals with the case that there is some sort of invalid input from the caller: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] [Row(null, typeof(ArgumentNullException))] [Row("", typeof(ArgumentException))] [Row("NotExistingCourse", typeof(ArgumentException))] public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType) {     var exception = Assert.Throws<RepositoryException>(() =>                                 _personRepository.GetCourseMembers(courseTitle));     Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException); } Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files namely, where you sometimes have to spend a lot of effort for carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents() {     InsertTestData(People, Course, Department, StudentGrade);     List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");     Assert.Count(4, persons);     Assert.ForAll(         persons,         @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),         "Person has none of the expected IDs."); } If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the used xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database. Conclusion The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically,  it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how much tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel. 
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD-style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment... The sample solution A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30day-trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects App.config files accordingly.
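As a small closing aside (this is not part of the original article, just a hedged sketch): because the dataset xsd must fully match the database schema, it can be handy to have a quick sanity check that loads the xsd together with one of the xml test-data files into a plain ADO.NET DataSet, so that constraint violations surface as exceptions before NDbUnit ever touches the database. The file paths and the idea of running this as a separate check are my own assumptions.

using System;
using System.Data;

public static class TestDataSanityCheck
{
    // Loads the schema and a data file into an ordinary DataSet; constraint
    // violations (bad types, broken unique or FK constraints) throw here, and
    // the row counts give a quick eyeball check of what the file contains.
    public static void Verify(string schemaPath, string dataPath)
    {
        var dataSet = new DataSet();
        dataSet.ReadXmlSchema(schemaPath);   // e.g. @"..\..\TestData\School.xsd"
        dataSet.ReadXml(dataPath, XmlReadMode.IgnoreSchema);

        foreach (DataTable table in dataSet.Tables)
        {
            Console.WriteLine("{0}: {1} row(s)", table.TableName, table.Rows.Count);
        }
    }
}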

    Read the article

  • jqGrid (Delete row) - How to send additional POST data???

    - by ronanray
Hi experts, I'm having a problem with the jqGrid delete mechanism, as it only sends the "oper" and "id" parameters in the POST data (id is the primary key of the table). The problem is that I need to delete a row based on the id and another column value, let's say user_id. How do I add this user_id to the POST data? I can summarize the issue as follows: How do I get the cell value (user_id) of the selected row? And how do I add that user_id to the POST data so it can be retrieved in the code behind, where the actual delete takes place? Sample code:

jQuery("#tags").jqGrid({
    url: "subgrid.process.php",
    editurl: "subgrid.process.php?",
    datatype: "json",
    mtype: "POST",
    colNames:['id','user_id','status_type_id'],
    colModel:[{name:'id', index:'id', width:100, editable:true},
              {name:'user_id', index:'user_id', width:200, editable:true},
              {name:'status_type_id', index:'status_type_id', width:200} ],
    pager: '#pagernav2',
    rowNum:10,
    rowList:[10,20,30,40,50,100],
    sortname: 'id',
    sortorder: "asc",
    caption: "Test",
    height: 200
});
jQuery("#tags").jqGrid('navGrid','#pagernav2',
    {add:true,edit:false,del:true,search:false},
    {},
    {mtype:"POST",closeAfterAdd:true,reloadAfterSubmit:true}, // add options
    {mtype:"POST",reloadAfterSubmit:true}, // del options
    {} // search options
);

Help....

    Read the article

  • Implementing the double-click event on Silverlight 4 Datagrid

    - by Mohammed Mudassir Azeemi
Does any good soul have an example of implementing the "Command Pattern" introduced by Prism on the double-click event of the Silverlight 4.0 DataGrid? I tried the following:

<data:DataGrid x:Name="dgUserRoles" AutoGenerateColumns="False" Margin="0" Grid.Row="0"
               ItemsSource="{Binding Path=SelectedUser.UserRoles}" IsReadOnly="False" >
    <data:DataGrid.Columns>
        <data:DataGridTemplateColumn Header=" ">
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <Button Width="20" Height="20" Click="Button_Click"
                            Command="{Binding EditRoleClickedCommand}"
                            CommandParameter="{Binding SelectedRole}" >
                    </Button>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
        </data:DataGridTemplateColumn>
        <data:DataGridTextColumn Header="Role Name" Binding="{Binding RoleName}" />
        <data:DataGridTextColumn Header="Role Code" Binding="{Binding UserroleCode}" IsReadOnly="True"/>
        <data:DataGridCheckBoxColumn Header="UDFM Managed" Binding="{Binding RoleIsManaged}" IsReadOnly="True" />
        <data:DataGridCheckBoxColumn Header="UDFM Role Assigned" Binding="{Binding UserroleIsUdfmRoleAssignment}" IsReadOnly="True" />
        <data:DataGridTextColumn Header="Source User" Binding="{Binding SourceUser}" IsReadOnly="True" />
    </data:DataGrid.Columns>
</data:DataGrid>

As you can see, I tried to hook up the Command there, but it is not firing the event in my View Model. I'm looking for a good alternative.
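Not from the original question, but as a hedged note on one common cause: inside a DataGridTemplateColumn cell template the button's DataContext is the row item (the user role), so a binding like {Binding EditRoleClickedCommand} is resolved against the row object rather than the page's view model and silently does nothing. A minimal view-model sketch using Prism's DelegateCommand might look like this (class, property and namespace names are assumptions):

// Assumes Prism's DelegateCommand<T> (Microsoft.Practices.Prism.Commands in Prism 4).
public class UserRolesViewModel
{
    public DelegateCommand<object> EditRoleClickedCommand { get; private set; }

    public UserRolesViewModel()
    {
        EditRoleClickedCommand = new DelegateCommand<object>(OnEditRoleClicked);
    }

    private void OnEditRoleClicked(object selectedRole)
    {
        // Handle the (double-)click for the role passed in as CommandParameter.
    }
}

With that in place, the binding in the cell template would still need to reach the view model explicitly, for example by naming the root element and binding through its DataContext with ElementName.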

    Read the article

  • jqGrid local data manipulation; problem with row ids when deleting and adding new rows

    - by Sam
I'm using jqGrid as a client-side grid input, allowing the user to input multiple records before POSTing all the data back at once. I'm having a problem where, if the user has added a few records (say 3), the ids for the records will be 1, 2, 3. If the user deletes record 2, you're left with 1 and 3 as the ids of the records. When the user now adds a new record, jqGrid assigns that record the id 3 again, since it just seems to count the total records and increment by one for the new record. This causes problems when selecting rows, as now the row ids are 1, 3 and 3. Does anyone know how to access the row ids of the records? I could probably use the afterSubmit event and reassign the row ids increasing from 1 (so after I delete row id 2, this will set the other row ids to 1 and 2). Any other suggestions to solve this problem? Thanks

Edit: I've solved this with the following code for the delete navGrid button

}).navGrid('#pager',
    {add:true, del:true, refresh:false, search:false},
    { ... },   // edit parameters
    { ... },   // add parameters
    {reloadAfterSubmit:false, clearAfterAdd:false,
     afterComplete: function () {
         // clear and re-add the row data so the row ids are sequential
         var savedData = $("#inputgrid").jqGrid('getRowData');
         $("#inputgrid").jqGrid('clearGridData');
         $("#inputgrid").jqGrid('addRowData', 'rn', savedData);
     }
    }          // delete parameters
);

Basically I am just saving the grid data and then re-adding it so that the row ids are sequential again. For some reason it causes the row numbers down the left side to start from 2 instead of 1.

Edit: this was solved by using the latest jqGrid code in GitHub (27th April 2010)

    Read the article

  • How do I get SSIS Data Flow to put '0.00' in a flat file?

    - by theog
    I have an SSIS package with a Data Flow that takes an ADO.NET data source (just a small table), executes a select * query, and outputs the query results to a flat file (I've also tried just pulling the whole table and not using a SQL select). The problem is that the data source pulls a column that is a Money datatype, and if the value is not zero, it comes into the text flat file just fine (like '123.45'), but when the value is zero, it shows up in the destination flat file as '.00'. I need to know how to get the leading zero back into the flat file. I've tried various datatypes for the output (in the Flat File Connection Manager), including currency and string, but this seems to have no effect. I've tried a case statement in my select, like this: CASE WHEN columnValue = 0 THEN '0.00' ELSE columnValue END (still results in '.00') I've tried variations on that like this: CASE WHEN columnValue = 0 THEN convert(decimal(12,2), '0.00') ELSE convert(decimal(12,2), columnValue) END (Still results in '.00') and: CASE WHEN columnValue = 0 THEN convert(money, '0.00') ELSE convert(money, columnValue) END (results in '.0000000000000000000') This silly little issue is killin' me. Can anybody tell me how to get a zero Money datatype database value into a flat file as '0.00'?

    Read the article

  • jstree will not fire onchange event

    - by vasion
I have been really stuck on this. This is the code:

js:

var treeoptions = {"data":{"type":"json","opts":{"url":"\/surveytags\/treejson"}}};
$('#treecontainer').tree(treeoptions);
$("#treecontainer").tree({
    callback : {
        ondblclk : function (node, tree) { alert(node.id); },
        onmove : function (node, ref, type) {
            data = new Object();
            data.node = new Object();
            data.node.id = node.id;
            data.ref = new Object();
            data.ref.id = ref.id;
            data.type = type;
            moveitem(data);
        },
        onchange : function () { alert('focused'); },
        oncreate : function (node) {
            alert('create');
            alert(node.data);
        }
    }
});

This is the JSON:

{"attributes":{"id":"1"},"data":{"title":"root"},"children":[{"attributes":{"id":"2"},"data":{"title":"blah"},"children":[{"attributes":{"id":"3"},"data":{"title":"tworows down"}},{"attributes":{"id":"4"},"data":{"title":"tooope"}}]}]}

It loads. Other events fire. But onchange will not...

    Read the article

  • Unit Testing.... a data provider ?

    - by TomTom
Given problem: I like unit tests. I develop connectivity software to external systems that pretty much always use a C++ library. The output of these systems is nondeterministic. Data is received while running, but making sure it is all correctly interpreted is hard. How can I test this properly? I can run a unit test that does a connect. Sadly, it will then process a live data stream. I can say I run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible - I simply don't even come close to covering all code paths, even once per day (error code paths are rarely run). I also can not really assert every result. Depending on the time of day we are talking about 20,000 data callbacks per second - none of which are really deterministic enough to be validated individually for consistency. Mocking? Well, that would leave me testing an empty shell of myself, because the code handling the events basically is the to-be-tested case, and in many cases we are talking about a COMPLEX C-level structure - it is hard to find mocking frameworks that integrate from C# to C++. Anyone have any idea? I am close to giving up on using unit tests for this part of the application.
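This is not from the original post, but one direction that sometimes helps (a hedged sketch; all names are illustrative): hide the live feed behind a small interface so the event-handling code can be exercised against recorded data deterministically, and keep the long-running live-connection run as a separate integration test.

using System;
using System.Collections.Generic;

// Assumed DTO shape for a single callback payload.
public sealed class DataTick
{
    public DateTime Timestamp { get; set; }
    public double Value { get; set; }
}

// Abstraction over the connectivity layer; the real implementation wraps the C++ library.
public interface IDataSource
{
    event Action<DataTick> TickReceived;
    void Connect();
    void Disconnect();
}

// Test double that replays captured ticks synchronously and deterministically.
public sealed class RecordedDataSource : IDataSource
{
    private readonly IEnumerable<DataTick> recordedTicks;

    public RecordedDataSource(IEnumerable<DataTick> recordedTicks)
    {
        this.recordedTicks = recordedTicks;
    }

    public event Action<DataTick> TickReceived;

    public void Connect()
    {
        var handler = TickReceived;
        if (handler == null) return;
        foreach (var tick in recordedTicks)
        {
            handler(tick);   // replay in original order, no timing involved
        }
    }

    public void Disconnect() { }
}

The interpretation code under test subscribes to TickReceived exactly as it would in production; the recorded streams can be captured once from the live system, including the rare error sequences, so the error code paths become repeatable.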

    Read the article

  • Request for the permission of type 'System.Data.Odbc.OdbcPermission.. help needed

    - by Matt
I'm getting the following error when trying to connect to a remote MySQL server: Request for the permission of type 'System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. I've installed the ODBC 5.1 driver and can connect to the database using the Data Sources (ODBC) tool in Control Panel. However, when I try to run my C# script to connect, I get the above error. I've read it's something to do with trust levels or something? I don't quite understand what people were talking about, though. I went to C:... Framework/v2.0.50727/CONFIG and added to the medium and high trust.config files, but that didn't help. Can someone help me out here please? My connection string is MyConString = "DRIVER={MySQL ODBC 5.1 Driver};" + "SERVER=" + m_strHost + ";" + "PORT=3306;" + "DATABASE=" + m_strDatabase + ";" + "UID=" + m_strUserName + ";" + "PWD=" + m_strPassword + ";" + "OPTION=3;";

    Read the article

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation... I will have a Windows Application and a Web Application (presentation layers), these will both access a common business layer. The business layer will look at a configuration file to find the name of the dll (data layer) which it will create a reference to at runtime (is this the best approach?). The reason for creating the reference at runtime to the data access layer is because the application will interface with a different 3rd party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects, each client would use one or the other, they wouldn't need to switch between the two. Projects: MyCompany.Common.dll - Contains interfaces, all other projects have a reference to this one. MyCompany.Windows.dll - Windows Forms Project, references MyCompany.Business.dll MyCompany.Web.dll - Website project, references MyCompany.Business.dll MyCompany.Busniess.dll - Business Layer, references MyCompany.Data.* (at runtime) MyCompany.Data.AccountingSys1.dll - Data layer for accounting system 1 MyCompany.Data.AccountingSys2.dll - Data layer for accounting system 2 The project MyCompany.Common.dll would contain all the interfaces, each other project would have a reference to this one. Public Interface ICompany ReadOnly Property Id() as Integer Property Name() as String Sub Save() End Interface Public Interface ICompanyFactory Function CreateCompany() as ICompany End Interface The project MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain the classes like the following: Public Class Company Implements ICompany Protected _id As Integer Protected _name As String Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id Get Return _id End Get End Property Public Property Name As String Implements MyCompany.Common.ICompany.Name Get Return _name End Get Set(ByVal value as String) _name = value End Set End Property Public Sub Save() Implements MyCompany.Common.ICompany.Save Throw New NotImplementedException() End Sub End Class Public Class CompanyFactory Implements ICompanyFactory Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany Return New Company() End Function End Class The project MyCompany.Business.dll would provide the business rules and retrieve data form the data layer: Public Class Companies Public Shared Function CreateCompany() As ICompany Dim factory as New MyCompany.Data.CompanyFactory Return factory.CreateCompany() End Function End Class Any opinions/suggestions would be greatly appreciated.
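Not part of the question, but since the business layer is described as reading the data-layer dll name from a configuration file and creating the reference at runtime, here is a minimal sketch of that lookup using reflection (shown in C# for brevity; the equivalent calls exist in VB.NET, and the appSettings key name is an assumption):

using System;
using System.Configuration;   // add a reference to System.Configuration.dll

public static class DataLayerLoader
{
    // Expects an assembly-qualified type name in App.config, e.g.
    // <add key="CompanyFactoryType"
    //      value="MyCompany.Data.AccountingSys1.CompanyFactory, MyCompany.Data.AccountingSys1" />
    public static ICompanyFactory CreateFactory()
    {
        string typeName = ConfigurationManager.AppSettings["CompanyFactoryType"];
        Type factoryType = Type.GetType(typeName, true);
        return (ICompanyFactory)Activator.CreateInstance(factoryType);
    }
}

The returned ICompanyFactory comes from MyCompany.Common, so the business layer only ever compiles against the interfaces, and switching accounting systems becomes a configuration change plus deploying the matching MyCompany.Data.* assembly.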

    Read the article

  • Copied Netbeans IDE Maven Web project failes to compile

    - by Quaternion
    If I am going to make serious changes to my project I like to make a copy (right click project and select copy). I have created copies of my projects many times (maven web applications included) without issue. Now all I do is copy the project making no changes, leaving the copy settings on default (project name etc) and then try to run it and spring decides to take issue with it. Again I try running the original and it works. I deleted the faulty copy, restarted and tried again and still it does not work. Here are my maven/system details: Apache Maven 2.2.1 (rdebian-4) Java version: 1.6.0_20 Java home: /usr/lib/jvm/java-6-openjdk/jre Default locale: en_CA, platform encoding: UTF-8 OS name: "linux" version: "2.6.35-23-generic" arch: "i386" Family: "unix" Here is the exception: INFO: 2010-12-21 16:08:23,715 ERROR org.springframework.web.context.ContextLoader.initWebApplicationContext:215 - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'personService': Injection of persistence methods failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:324) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:998) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4664) at 
com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5266) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1947) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1619) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:636) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:883) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:839) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getSingletonFactoryBeanForTypeCheck(AbstractAutowireCapableBeanFactory.java:682) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryBean(AbstractAutowireCapableBeanFactory.java:614) at org.springframework.beans.factory.support.AbstractBeanFactory.isTypeMatch(AbstractBeanFactory.java:450) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:223) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:202) at org.springframework.beans.factory.BeanFactoryUtils.beanNamesForTypeIncludingAncestors(BeanFactoryUtils.java:143) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findDefaultEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:503) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:473) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.resolveEntityManager(PersistenceAnnotationBeanPostProcessor.java:599) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.getResourceToInject(PersistenceAnnotationBeanPostProcessor.java:570) at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:192) at org.springframework.beans.factory.annotation.InjectionMetadata.injectMethods(InjectionMetadata.java:117) INFO: at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:321) ... 57 more Caused by: java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2406) at java.lang.Class.getConstructor0(Class.java:2716) at java.lang.Class.getDeclaredConstructor(Class.java:2002) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:54) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:877) ... 71 more Caused by: java.lang.ClassNotFoundException: org.springframework.jdbc.datasource.lookup.DataSourceLookup at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.glassfish.web.loader.WebappClassLoader.findClass(WebappClassLoader.java:959) at org.glassfish.web.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1430) ... 
77 more SEVERE: PWC1306: Startup of context /quickstart_upgrade-0.1-SNAPSHOT failed due to previous errors SEVERE: PWC1305: Exception during cleanup after start failed org.apache.catalina.LifecycleException: PWC2769: Manager has not yet been started at org.apache.catalina.session.StandardManager.stop(StandardManager.java:892) at org.apache.catalina.core.StandardContext.stop(StandardContext.java:5456) at com.sun.enterprise.web.WebModule.stop(WebModule.java:530) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5284) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1947) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1619) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:636) 
SEVERE: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'personService': Injection of persistence methods failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.apache.catalina.core.StandardContext.start(StandardContext.java:5289) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1947) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1619) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at 
com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:636) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'personService': Injection of persistence methods failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:324) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:998) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4664) at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5266) ... 
38 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [applicationContext.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:883) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:839) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getSingletonFactoryBeanForTypeCheck(AbstractAutowireCapableBeanFactory.java:682) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryBean(AbstractAutowireCapableBeanFactory.java:614) at org.springframework.beans.factory.support.AbstractBeanFactory.isTypeMatch(AbstractBeanFactory.java:450) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:223) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:202) at org.springframework.beans.factory.BeanFactoryUtils.beanNamesForTypeIncludingAncestors(BeanFactoryUtils.java:143) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findDefaultEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:503) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findEntityManagerFactory(PersistenceAnnotationBeanPostProcessor.java:473) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.resolveEntityManager(PersistenceAnnotationBeanPostProcessor.java:599) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.getResourceToInject(PersistenceAnnotationBeanPostProcessor.java:570) at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:192) at org.springframework.beans.factory.annotation.InjectionMetadata.injectMethods(InjectionMetadata.java:117) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessPropertyValues(PersistenceAnnotationBeanPostProcessor.java:321) ... 57 more Caused by: java.lang.NoClassDefFoundError: org/springframework/jdbc/datasource/lookup/DataSourceLookup at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2406) at java.lang.Class.getConstructor0(Class.java:2716) at java.lang.Class.getDeclaredConstructor(Class.java:2002) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:54) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:877) ... 71 more Caused by: java.lang.ClassNotFoundException: org.springframework.jdbc.datasource.lookup.DataSourceLookup at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.glassfish.web.loader.WebappClassLoader.findClass(WebappClassLoader.java:959) at org.glassfish.web.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1430) ... 77 more

    Read the article

  • ASP.NET SQL Timeout

    - by Petoj
    We have an ASP.NET application that we installed at a customer site, and now we sometimes get a SqlException that says "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." The weird thing is that the exception comes instantly when I press the button, and it does not happen every time I press the button, so it seems random. Any idea what I could try to pinpoint the problem? We are using the Enterprise Library Database block, if that matters. Stack trace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) at Microsoft.Practices.EnterpriseLibrary.Data.Database.DoExecuteReader(DbCommand command, CommandBehavior cmdBehavior) at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteReader(DbCommand command)
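
    Since the question asks how to pinpoint rather than how to fix the problem, one low-risk first step is to set an explicit CommandTimeout and measure how long the call really runs before it throws. The C# sketch below is a diagnostic illustration only: it assumes the Enterprise Library Data block is configured with a default database, and the procedure name is a stand-in for whatever the button actually executes.

    using System;
    using System.Data;
    using System.Data.Common;
    using System.Data.SqlClient;
    using System.Diagnostics;
    using Microsoft.Practices.EnterpriseLibrary.Data;

    public static class TimeoutDiagnostics
    {
        // Runs a stored procedure through the Enterprise Library Database block with an
        // explicit CommandTimeout and logs how long the call actually ran before failing.
        public static void RunWithTiming(string procedureName)
        {
            Database db = DatabaseFactory.CreateDatabase();          // default database from config
            DbCommand cmd = db.GetStoredProcCommand(procedureName);  // e.g. "GetOrders" (hypothetical)
            cmd.CommandTimeout = 120;                                 // ADO.NET default is 30 seconds

            Stopwatch watch = Stopwatch.StartNew();
            try
            {
                using (IDataReader reader = db.ExecuteReader(cmd))
                {
                    while (reader.Read()) { /* consume rows */ }
                }
            }
            catch (SqlException ex)
            {
                // A failure long before CommandTimeout elapses points at blocking, a dropped
                // connection or pool trouble rather than a genuinely slow query.
                Trace.WriteLine(string.Format("Query failed after {0} ms: {1}",
                    watch.ElapsedMilliseconds, ex.Message));
                throw;
            }
            finally
            {
                watch.Stop();
            }
        }
    }

    If the exception really does fire the instant the button is pressed, comparing the measured elapsed time against the configured timeout, and checking the server for blocking at that moment (for example with sp_who2 or sys.dm_exec_requests), usually narrows it down to a blocked query versus a connection-level problem.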

    Read the article

  • Why does ObservableCollection throw an exception when being modified?

    - by Oliver Hanappi
    Hi! My application uses a WPF DataGrid. One of the columns is a template column that contains a ComboBox bound to an ObservableCollection of the entity which feeds the row. When I add a value to the ObservableCollection, a NullReferenceException is thrown. Does anybody have an idea why this happens? Here is the stack trace of the exception: at MS.Internal.Data.PropertyPathWorker.DetermineWhetherDBNullIsValid() at MS.Internal.Data.PropertyPathWorker.get_IsDBNullValidForUpdate() at MS.Internal.Data.ClrBindingWorker.get_IsDBNullValidForUpdate() at System.Windows.Data.BindingExpression.ConvertProposedValue(Object value) at System.Windows.Data.BindingExpressionBase.UpdateValue() at System.Windows.Data.BindingExpression.Update(Boolean synchronous) at System.Windows.Data.BindingExpressionBase.Dirty() at System.Windows.Data.BindingExpression.SetValue(DependencyObject d, DependencyProperty dp, Object value) at System.Windows.DependencyObject.SetValueCommon(DependencyProperty dp, Object value, PropertyMetadata metadata, Boolean coerceWithDeferredReference, OperationType operationType, Boolean isInternal) at System.Windows.DependencyObject.SetValue(DependencyProperty dp, Object value) at System.Windows.Controls.Primitives.Selector.UpdatePublicSelectionProperties() at System.Windows.Controls.Primitives.Selector.SelectionChanger.End() at System.Windows.Controls.Primitives.Selector.OnItemsChanged(NotifyCollectionChangedEventArgs e) at System.Windows.Controls.ItemsControl.OnItemCollectionChanged(Object sender, NotifyCollectionChangedEventArgs e) at System.Collections.Specialized.NotifyCollectionChangedEventHandler.Invoke(Object sender, NotifyCollectionChangedEventArgs e) at System.Windows.Data.CollectionView.OnCollectionChanged(NotifyCollectionChangedEventArgs args) at System.Windows.Controls.ItemCollection.System.Windows.IWeakEventListener.ReceiveWeakEvent(Type managerType, Object sender, EventArgs e) at System.Windows.WeakEventManager.DeliverEventToList(Object sender, EventArgs args, ListenerList list) at System.Windows.WeakEventManager.DeliverEvent(Object sender, EventArgs args) at System.Collections.Specialized.CollectionChangedEventManager.OnCollectionChanged(Object sender, NotifyCollectionChangedEventArgs args) at System.Windows.Data.CollectionView.OnCollectionChanged(NotifyCollectionChangedEventArgs args) at System.Windows.Data.ListCollectionView.ProcessCollectionChangedWithAdjustedIndex(NotifyCollectionChangedEventArgs args, Int32 adjustedOldIndex, Int32 adjustedNewIndex) at System.Windows.Data.ListCollectionView.ProcessCollectionChanged(NotifyCollectionChangedEventArgs args) at System.Windows.Data.CollectionView.OnCollectionChanged(Object sender, NotifyCollectionChangedEventArgs args) at System.Collections.ObjectModel.ObservableCollection`1.OnCollectionChanged(NotifyCollectionChangedEventArgs e) at System.Collections.ObjectModel.ObservableCollection`1.InsertItem(Int32 index, T item) at System.Collections.ObjectModel.Collection`1.Add(T item) at ORF.PersonBook.IdentityModule.Model.SubsidiaryModel.AddRoom(RoomModel room) in C:\Project\Phoenix\Development\src\ORF.PersonBook.IdentityModule\Model\SubsidiaryModel.cs:line 127
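
    A workaround that is often suggested for this kind of re-entrant binding failure is to defer the Add until the current selection/binding update has finished, so the ComboBox's SelectedItem binding is not asked to push a value while the collection view is still processing the change. The C# sketch below only illustrates that idea and is not a confirmed fix; RoomModel is stubbed, and the dispatcher deferral is an assumption, not part of the original code.

    using System;
    using System.Collections.ObjectModel;
    using System.Windows;
    using System.Windows.Threading;

    public class RoomModel { /* stub for the entity used in the question */ }

    public class SubsidiaryModel
    {
        public ObservableCollection<RoomModel> Rooms { get; private set; }

        public SubsidiaryModel()
        {
            Rooms = new ObservableCollection<RoomModel>();
        }

        // Defers the collection change until the dispatcher is idle, so the pending
        // ComboBox selection/binding update completes before the collection changes again.
        public void AddRoom(RoomModel room)
        {
            Application.Current.Dispatcher.BeginInvoke(
                DispatcherPriority.Background,
                new Action(() => Rooms.Add(room)));
        }
    }

    It is also worth verifying that the collection is only ever modified on the UI thread, since ObservableCollection raises its change notifications synchronously on the calling thread and cross-thread changes destabilize WPF's binding and collection-view machinery.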

    Read the article

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3.

    - by Deepak Konidena
    Hi, I have set up a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when I log in to the master node and submit the following command bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3> it throws the following errors (not at the same time). The first error is thrown when I don't replace the slashes with '%2F' and the second is thrown when I replace them with '%2F': 1) Java.lang.IllegalArgumentException: Invalid hostname in URI S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile> 2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method. Note: 1) When I submitted jps to see what tasks were running on the master, it just showed 1116 NameNode, 1699 Jps, 1180 JobTracker, leaving out the DataNode and TaskTracker. 2) My secret key contains two '/' (forward slashes), and I replace them with '%2F' in the S3 URI. PS: The program runs fine on EC2 when run on a single node. It's only when I launch a cluster that I run into issues related to copying data to/from S3 from/to HDFS. Also, what does distcp do? Do I need to distribute the data even after I copy it from S3 to HDFS? (I thought HDFS took care of that internally.) If you could direct me to a link that explains running MapReduce programs on a Hadoop cluster using Amazon EC2/S3, that would be great. Regards, Deepak.

    Read the article

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations. Specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or its progress. Basically, all I know is when Update returns and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0% and then a pause and then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could even estimate, per DataRow, its size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or a wait cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?
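
    ADO.NET does not expose bytes-on-the-wire while an update is in flight, but the SqlDataAdapter underneath the TableAdapter raises a RowUpdated event for each row it pushes, which is usually enough for a meaningful progress bar. The C# sketch below is an illustrative helper under the assumption that you can reach that adapter (the designer-generated TableAdapters typically expose it to partial classes through their Adapter property); the other names are hypothetical.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class AdapterProgress
    {
        // Reports per-row progress while a SqlDataAdapter pushes pending changes to the server.
        // With the default UpdateBatchSize of 1, RowUpdated fires once per Added/Modified/Deleted row.
        public static void UpdateWithProgress(SqlDataAdapter adapter, DataTable table,
            Action<int, int> reportProgress)
        {
            int total = 0;
            foreach (DataRow row in table.Rows)
            {
                if (row.RowState != DataRowState.Unchanged)
                {
                    total++;
                }
            }

            int completed = 0;
            SqlRowUpdatedEventHandler onRowUpdated = (sender, e) =>
            {
                completed++;
                reportProgress(completed, total);   // e.g. progressBar.Value = (100 * completed) / total
            };

            adapter.RowUpdated += onRowUpdated;
            try
            {
                adapter.Update(table);
            }
            finally
            {
                adapter.RowUpdated -= onRowUpdated;
            }
        }
    }

    For a rough size check, enabling SqlConnection.StatisticsEnabled and calling RetrieveStatistics after the update reports BytesSent and BytesReceived, which can at least validate the per-column estimates described above.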

    Read the article

  • .NET Framework execution aborted while executing CLR sproc?

    - by Sean Ochoa
    I constructed a sproc that does the equivalent of FOR XML AUTO in SQL 2008. Now that I'm testing it, it gives me a really unhelpful error msg. Any idea what this error means? Msg 10329, Level 16, State 49, Procedure ForXML, Line 0 .Net Framework execution was aborted. System.Threading.ThreadAbortException: Thread was being aborted. System.Threading.ThreadAbortException: at System.Runtime.InteropServices.Marshal.PtrToStringUni(IntPtr ptr, Int32 len) at System.Data.SqlServer.Internal.CXVariantBase.WSTRToString() at System.Data.SqlServer.Internal.SqlWSTRLimitedBuffer.GetString(SmiEventSink sink) at System.Data.SqlServer.Internal.RowData.GetString(SmiEventSink sink, Int32 i) at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue(SmiEventSink_Default sink, ITypedGettersV3 getters, Int32 ordinal, SmiMetaData metaData, SmiContext context) at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue200(SmiEventSink_Default sink, SmiTypedGetterSetter getters, Int32 ordinal, SmiMetaData metaData, SmiContext context) at System.Data.SqlClient.SqlDataReaderSmi.GetValue(Int32 ordinal) at System.Data.SqlClient.SqlDataReaderSmi.GetValues(Object[] values) at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values) at System.Data.ProviderBase.SchemaMapping.LoadDataRow() at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping) at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue) at System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataTable dataTable) at ForXML.GetXML...
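
    Msg 10329 together with a ThreadAbortException typically indicates that the CLR execution was cancelled from outside the routine, for example by the calling client's CommandTimeout expiring, or by SQL Server tearing down the AppDomain under memory pressure, while the procedure was still buffering rows into a DataTable. One way to reduce that exposure is to stream the result over the context connection instead of filling a DataTable first; the C# sketch below is an illustrative restructuring rather than the original ForXML procedure, and the table name and XML shape are assumptions.

    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public class StoredProcedures
    {
        // Streams the XML result straight back to the caller over the context connection,
        // avoiding the intermediate DataTable fill visible in the stack trace above.
        [SqlProcedure]
        public static void ForXmlStreaming()
        {
            using (SqlConnection connection = new SqlConnection("context connection=true"))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "SELECT * FROM dbo.SomeTable FOR XML AUTO, ROOT('rows')",   // hypothetical table
                    connection);
                SqlContext.Pipe.ExecuteAndSend(command);   // rows go to the client as they are produced
            }
        }
    }

    If the XML genuinely has to be assembled in managed code, the same streaming idea applies with a SqlDataReader plus SqlContext.Pipe.SendResultsStart, SendResultsRow and SendResultsEnd; either way, checking the CommandTimeout of whatever client calls the procedure is worthwhile, since a client-side cancel surfaces inside the CLR routine as a ThreadAbortException like the one shown above.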

    Read the article
