Search Results

Search found 2467 results on 99 pages for 'pinal dave'.


  • SQLAuthority News – Social Media Series – Twitter and Myself

    - by pinaldave
    Pinal Dave on Twitter! Frequent readers of my blog might know that I am trying to get more involved in all social media sites, both professionally and personally. Readers might also know that I have often struggled with finding the purpose of some social media sites – Twitter especially. One of the great uses of social media is to stay connected and updated with followers. Twitter’s 140-character limit means that Twitter is a great place to get quick updates from the world, but not a lot of deep information. In fact, I have the feeling that Twitter’s form might actually limit its usefulness – especially for complex subjects like SQL Server. However, the #sqlhelp hashtag has certainly overcome that belief: you can instantly talk about SQL and get help with your SQL problems on Twitter.

    I believe in keeping up with the changing times, and it didn’t feel right to give up on Twitter. So I have worked out a good way to use Twitter and set rules for myself. The problem I was facing was that if I followed everyone who interested me and let anyone follow me, I was completely overwhelmed by the amount of information Twitter could give me every day. It didn’t seem like 140 characters should be able to take up so much of my time, but it took hours to sort through all the updates to find things that were of interest to me and to SQL Server.

    First, I was forced to unfollow anyone who made too many updates every day. This was not an easy decision, but just for my own sake I had to limit the amount of information I could take in every day. I still let anyone follow me who wants to, because I didn’t want to limit my readership, and I hope that they do not feel the way I did – that there are too many updates!

    Next, I made sure that the information I put on Twitter is useful and to the point. I try to announce new blog posts on Twitter at least once a day, and I also try to find five posts from other people every day that are worth re-Tweeting. This forces me to stay active in the community. But it is not all business on Twitter. It is also a place for me to post updates about my family and home life, for anyone who is interested. In simple words, I talk about anything and everything on Twitter.

    If you’d like to follow me, my Twitter handle is www.twitter.com/pinaldave. It is a good place to start if you’d like to keep updated with my blog and find out who I follow and who my influences are. Twitter is perfect for getting little “tastes” of things you’re interested in. If you are interested in my blog, SQL Server, or both, I hope that my Twitter updates will be interesting and helpful.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • SQLAuthority News – Social Media Series – LinkedIn and Professional Profile

    - by pinaldave
    Pinal Dave on LinkedIn! It seems like a few years ago there was a big “boom” in social media websites. All of a sudden there were so many sites to choose from. MySpace or Orkut? A blogging website for your business, or a LinkedIn account? The nature of the internet is to always be changing, but I believe that out of this huge growth of websites, a few have come to stay. Facebook is obviously the leader in social media networking, especially for your personal life. Blogging is great, but it can be more of a way to get your ideas out there than a place for people to connect to you professionally. If you want to have a professional “face” on the internet, LinkedIn is the way to go.

    LinkedIn is best explained as “professional Facebook.” This is simplifying things a little too much, but it is certainly a website where you link up with professional contacts, so that others can see where you have worked, who you have worked with, and what projects you have done. This is a much better place for professional contacts to find you than someplace like Facebook, where all they will see is your face and maybe a picture of you at a birthday party or something like that!

    Because so much of my SQL Server life is conducted on the internet, especially on my blog, I felt that it would be a good idea to have a well-maintained LinkedIn page as well, so that if anyone is curious about me and my credentials they can quickly and easily find me and see that I am for real, and not someone pretending to know a lot about SQL Server.

    My LinkedIn profile is www.linkedin.com/in/pinaldave. I keep all my professional information here, and I update it as often as possible. Feel free to come find me, especially if you would like to “link up” and share professional information. The technology world is becoming more and more interconnected, and more and more international. I feel that it is very important to stay linked up virtually, because so many of us are so far apart physically.

    I try to stay very connected with my LinkedIn profile. I let anyone connect with me, and I read updates from the professional world very often. I keep this profile updated, but I do not post things about my personal life or anything that I might put on Twitter, for example. I also include my e-mail address here, if you would like to contact me professionally. This is the best place for me to conduct business.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore

    Sorry for the delayed post on day 3 – I had to travel from Bangalore to Chennai, so I couldn’t write for the past two days. On day 3, as usual, we had a lot of simultaneous tracks with various sessions. This day I chose the Your Data, Our Platform track. It had sessions on the following five topics:

    1. Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
    2. SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
    3. SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
    4. Data Recovery / Consistency with CheckDB - by Vinod Kumar M
    5. Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

    Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam

    This was one of the superb sessions I have attended. He explained all the concepts in detail with a demo. The important thing here is the Data-tier Application project, newly introduced in VS 2010, with which we can manage all our data along with our application inside VS itself. We can create the DB, tables, procs, views, etc. right there, and once we deploy, it creates a compressed file called .dacpac which stores all the changes in table schemas, created procs, etc. in that single file – which reduces our (the developers') effort in preparing deployment scripts and handing them to the DBA. It also has policy configurations which can be managed easily by checking some rules, like in Outlook. For example: IF the SQL Server version > 10 THEN deploy, ELSE don't. This rule specifies that even if we try to deploy to a SQL Server DB with a version less than 10, it will not do it. And if we deploy a .dacpac to a SQL Server production DB with the "upgrade DB with this dacpac" option, it reports success once everything completes successfully; otherwise it rolls back to the prior version. Even if it deploys successfully and at a later point you wish to revert to the prior version, you can go ahead and delete the existing dacpac version so that it reverts to the older version of the DB changes. And for the good questions that were asked in the session, T-shirts were given.

    SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M

    This one too was among the best sessions. The speaker, Vinod, explained everything very clearly. It was a really useful session and – believe it or not – to my knowledge, in the entire three days of TechEd, apart from the keynote, this was the only session with a full house: people were even standing outside to attend it. Such a great one it was. The speaker did a deep dive into the query plan and showed what actually causes the problems. It is all about understanding how SQL Server executes queries: we think one way, and SQL Server never executes that way – we need to understand that first. He also noted that two plans might be generated for a single query at a point in time because of parallel processors in the system. The key is in every query: there is an Estimated Row Count and an Actual Row Count in the query plan. If SQL Server's estimated row count tallies with the actual row count, your performance will be awesome. He shared some tweaks to achieve exactly that. After this, as usual, we had lunch.

    SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan

    This was more of a DBA's session. I am really sorry – I was totally blank, was not interested in attending this session, and walked out to attend Migrating to the Cloud by Harish Ranganathan (my favorite speaker), but unfortunately that slot was some other person's session. There the speaker was explaining how to configure connection strings so that we can connect to the SQL Azure platform from VS, and also showed us how to deploy the same to Windows Azure. In between there were a lot of technical problems – a laptop hang, a locked user account, switching between systems – and since I came in halfway, I wasn't able to follow it fully.

    In between, since I had got an MCTS certification, they gave me a T-shirt with the line "I am Certified. Are you?" and asked me to wear it. If we wore it we might get spotted and be given some goodies, so on the 3rd day I was wearing that T-shirt. I got spotted by Tarun, who was coordinating things about the certification; he was accompanied by a cameraman, and they interviewed me about the certification. I was shown live at TechEd and seen by 60,000 live viewers. I was really happy about that.

    Data Recovery / Consistency with CheckDB - by Vinod Kumar M

    This was one of the best sessions of TechEd too. This guy is really amazing. In front of us he crashed a DB and showed how to recover it in 6 different ways for different kinds of failures. He covered the different types of error messages, like 823, 824 and 825, the msdb..suspect_pages table, and DBCC CHECKDB (with its different parameters). I am really waiting for his session to be uploaded to the TechEd website. Here is his contact info if you wish to connect with him – Twitter: @vinodk_sql; Website: www.ExtremeExperts.com; Blog: http://blogs.sqlxml.org/vinodkumar

    Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

    Pinal Dave is a king of SQL – he is a SQL MVP and the owner of SQLAuthority.com. He took the session on spatial databases from the start and explained the two different spatial types, geometric and geographic. Geometric: x and y axes – a planar surface. Geographic: a spherical surface, with 360° as the maximum, used to represent geographic points on the earth, which makes it easy to draw maps of different kinds. He had a lot of obstacles during his session – rain coming inside the hall, mic wires shorting out due to the rain, video going off on the display screens. In spite of that, he asked the audience to come to the front rows and managed to deliver a good session without PPTs; finally we got the displays back on, and he showed demos of exactly what he had explained orally. That was a really fun-filled, informative session. He gave books to the people who asked good questions and answered his questions well, and I got one too (it was a book on Data Mining – Wrox publishers).

    And finally, after all these things, there was the closing keynote of TechEd. We all assembled in a big hall where Mr. Ashok Soota – a man of around 70, co-founder of MindTree – was invited to give a lecture on his successes. He talked about his past, which companies he moved between and for what reasons, his successes and his failures, what he learned from his past failures, and his successes and failures in his partnerships. There were some questions for him, like: What is your suggestion for young entrepreneurs? How did you learn from past failures? What keeps your success going? What is your suggestion on partnerships? How do you choose partners? And they said at 7.30 PM there would be a party night, but unfortunately I was not able to attend because I had to catch my train, and before that I had to pack, so I left at 7 itself.

    That's it about TechEd!!! Stay tuned for further technology updates.

    Read the article

  • SQLAuthority News – Social Media Series – Facebook and Google+

    - by pinaldave
    Pinal on Facebook and Google+! Unless you have been living under a rock for the last few years, you know that Facebook is the first and last word in social networking. Everyone has a Facebook account – from your local store to the 10-year-old school child. Because of this ability to be completely connected to everyone in your entire life, keeping a Facebook page for a professional business can be tricky.

    For the most part, I use Facebook strictly for personal matters. I am friends only with friends I know in the “real” world (as opposed to my “virtual” online friends) and with family, of course. I chat with friends on Facebook and upload personal photos to share with family who are far away. I hope this doesn’t make readers from my professional life feel left out. You can follow me on Facebook at www.facebook.com/SQLAuth, but you should know that Twitter is probably the better place to find updates about SQL Server and my blog (you can follow me on Twitter at www.twitter.com/pinaldave).

    There are definitely businesses that keep in touch with their clients using Facebook, but I felt the need to keep my personal and professional life separate. That’s why I was so excited to find out Google was coming out with their own social media site, Google+. On Google+ I post some personal things as well, and there is a lot of overlap between what I put on Facebook and what I put on Google+. But since Google+ has become so popular amongst the “techie” crowd, I have found that it’s a good place to follow some of the stars of the Microsoft world, like Scott Hanselman and Buck Woody.

    If you are also a member of Google+, I am looking to expand my circle there. You can find me at https://plus.google.com/104990425207662620918/posts. Google+ is the newest face in the social media world, and it still hasn’t found a good footing between personal and professional yet. That’s why I felt it would be a good idea to jump on the site early and help them determine which way to go. Maybe someday it will be a place where business and personal can mix.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Social Media

    Read the article

  • SQLAuthority News – Social Media Series – YouTube and Movies

    - by pinaldave
    Pinal Dave on YouTube! Some people might not know it, but YouTube is actually more than a place to watch funny cat videos and people singing their favorite pop songs – it is actually a social media site. When you are a member of YouTube you can follow people who regularly post videos, post video responses of your own, and even gain a following for your own videos. I myself was not aware of YouTube’s potential until recently, when I started to make SQL Server in Sixty Seconds videos.

    YouTube is very different from other types of social media, and a big factor is that anyone can look at videos without being a member. On other social media sites, like Twitter and Facebook, you have to have an account in order to participate, but on YouTube you are even more anonymous. To make and post videos you need an account, but anyone who comes to the site can look at what you’ve made without signing in or leaving any trace of having seen your material. This makes YouTube very anonymous and hard to track.

    However, we should not overlook the power of video on the internet. Over the past few months I have been making SQL Server in Sixty Seconds videos and have come to love it. It is very exciting to be able to talk about a subject that I mostly write about, and for many people video is far more accessible and easy to understand. I have really enjoyed diving into something new, and would love to have more people check out these videos and give me feedback. You can find me at www.youtube.com/user/pinaldave.

    I am very excited about all the possibilities on YouTube, and it might just be the technology evangelist in me, but I would love for other people to discover how fun and exciting this site can be, too. Don’t think of it as just a place to find funny videos and waste a few minutes of your time; think of it as a place to learn and interact with interesting people. Come watch a few of my videos while you’re there. Remember, everything is free and there are no contracts to sign, but I hope that you get as excited as I am and join up. We need more people creating good content on this site!

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before putting any of the suggestions in this article into practice, consult your IT infrastructure admin; applying them without proper testing can damage your system.

    Question: “Pinal, we have 80 GB of data, including all the database files, on an NTFS file system, and we have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise.”

    When I receive questions like the one above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things that I believe can be built from the community knowledge base. Today I need your help to complete this list: I will start it, and you help me finish it.

    NTFS File System Performance Best Practices for SQL Server

    - Disable indexing on disk volumes
    - Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
    - Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
    - Keep some space empty (let us say 15%, for reference) on the drive if possible (only on the FILESTREAM data storage volume)
    - Defragment the volume
    - Add your suggestions here…

    The one I often get a pretty big debate about is NTFS allocation unit size. I have seen that on a disk volume which stores FILESTREAM data, increasing the allocation unit size from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server; every system is different, and the files stored are different. This is also where I would like to request that you share your experience with NTFS allocation size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify the list. Please note that the above list was prepared assuming SQL Server is the only application running on the computer system. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
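    Before flipping any of these switches, it is worth checking where a volume currently stands. A minimal sketch using the query form of the same FSUTIL commands listed above (run from an elevated command prompt):

        REM Show whether 8.3 name generation is currently enabled on this system
        fsutil behavior query disable8dot3

        REM Show whether last-access time tracking is currently enabled
        fsutil behavior query disablelastaccess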

    Read the article

  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently.

    “Hi Pinal,

    In our system, when we create new databases, by default they are all created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model by default. We are considering the following methods; please suggest the most efficient one among them.

    1) Creating a Policy; when it is violated, the database recovery model can be fixed
    2) Triggers at the server level
    3) An automated job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, the job will list the databases and change their recovery model

    Also, we have a situation where we need a database in the Simple Recovery Model as well – how do we white-list them? Please suggest the best method.”

    Indeed, an interesting email! The answer to their question – i.e., which method best fits their needs (white list, default, etc.)? – is NONE of the above. Here is the solution in one line, and also the easiest way: just go to your model database. Path in SSMS >> Databases >> System Databases >> model >> Right Click >> Properties >> Options >> Recovery Model – select Full from the dropdown.

    Every newly created database takes its base template from the model database. If you create a custom SP in the model database, it will automatically exist in every database you create afterwards. Any database that was created before making changes to the model database will not be affected at all. Creating a Policy is also a good method, and I will blog about it in a separate post, but looking at the current specifications of the reader, I think the model database should be modified to have the Full Recovery option. While writing this blog post, I remembered another blog post of mine where the model database log file was growing drastically even though there were no transactions: SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big.

    NOTE: Please do not touch the model database unnecessarily. It is a strict “No.” If you want to create an object that you need in all databases, then instead of creating it in the model database, I suggest you create a new database called maintenance and create the object there.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
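    The same change can also be scripted instead of clicked through; a minimal T-SQL sketch (standard ALTER DATABASE syntax, run on the instance where the new databases will be created):

        -- Set the model database to Full recovery; databases created afterwards inherit it
        ALTER DATABASE model SET RECOVERY FULL;
        GO

        -- Verify the change
        SELECT name, recovery_model_desc
        FROM sys.databases
        WHERE name = 'model';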

    Read the article

  • SQL SERVER – Fix: Error: 10920 Cannot drop user-defined function. It is being used as a resource governor classifier

    - by pinaldave
    If you have not read yesterday’s detailed primer on Resource Governor, SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor, I suggest you read it before continuing with this article. After the article went up, the very first email I received was as follows:

    “Pinal, I configured Resource Governor on my development server and it worked fine with the tests I ran. After doing some tests, I decided to remove Resource Governor, and as a first step I disabled it; however, I was not able to drop the classification function during the clean-up. It kept giving me the following error:

    Msg 10920, Level 16, State 1, Line 1
    Cannot drop user-defined function myudfname. It is being used as a resource governor classifier.

    Would you please give me a solution?”

    The original email was really this short, with no other information. I am glad he ran his experiments on a development server and not on the production server – production servers must not be the playground for experiments. I think I have covered the answer to this error in an earlier blog post. Even if the user disables Resource Governor, it is still not possible to drop the function, because Resource Governor can be enabled again, and when enabled it can still use the same function. Here is the simple resolution for dropping the classifier function (do this only if you are not going to use the function). The classifier function cannot be dropped because it is associated with Resource Governor. Create a new classifier function for your Resource Governor, or just assign NULL as described in the following T-SQL script, and you will be able to drop the function without error.

        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL)
        GO
        ALTER RESOURCE GOVERNOR DISABLE
        GO
        DROP FUNCTION dbo.UDFClassifier
        GO

    I am glad that the user asked me the question instead of doing something radically different which could leave the server in an unusable state. This is the only method I am aware of to avoid this error. Is there any better way to achieve the same?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
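    If you want to confirm which function is currently registered as the classifier before dropping anything, a quick check against the Resource Governor catalog view (a small sketch; the view ships with the feature):

        -- Returns the classifier function currently configured (NULL if none is set)
        SELECT classifier_function_id,
               OBJECT_NAME(classifier_function_id) AS classifier_function
        FROM sys.resource_governor_configuration;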

    Read the article

  • SQLAuthority News – Win MS Office License – Last 2 days

    Just a note for everybody who is from India and wants to win a FREE Office license: participate in the very easy contest here. SQLAuthority News – Virtual Launch Event for Office 2010 – Contest – Win MS Office License. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, [...]

    Read the article

  • SQL SERVER – Disable Clustered Index and Data Insert

    Earlier today I received the following email. “Dear Pinal, [Removed unrelated content] We looked at your script and found that in your script for disabling indexes, you have only included the non-clustered indexes during the bulk insert and missed to disable the clustered index. Our DBA [name removed] has changed your script a bit and included [...]

    Read the article

  • SQL SERVER – FIX : ERROR : 4214 BACKUP LOG cannot be performed because there is no current database backup

    I recently got the following email from one of my readers.

    Hi Pinal,

    Even though my database is in full recovery mode, when I try to take a log backup I am getting the following error:

    BACKUP LOG cannot be performed because there is no current database backup. (Microsoft.SqlServer.Smo)

    How to fix it?

    Thanks,
    [name and email removed as requested]

    Solution / Fix: This error can [...]

    Read the article

  • The backup set holds a backup of a database other than the existing database (Microsoft SQL Server, Error: 3154)

    - by Shravan
    A few days back I took a backup of my development server, and now I am trying to restore it on my own system. I am getting the following error:

    The backup set holds a backup of a database other than the existing 'sample' database. RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3154)

    I found the following solution on Pinal Dave's blog. It's very simple. Ex:

        RESTORE DATABASE Sample
        FROM DISK = 'C:\Sample.bak'
        WITH REPLACE

    Read the article

  • How do I resolve "Unable to resolve attribute [organizationType.id] against path" exception?

    - by Dave
    I'm using Spring 3.1.1.RELEASE, Hibernate 4.1.0.Final, JUnit 4.8, and JPA 2.0 (hibernate-jpa-2.0-api). I'm trying to write a query and search based on fields of member fields. What I mean is I have this entity …

        @GenericGenerator(name = "uuid-strategy", strategy = "uuid.hex")
        @Entity
        @Table(name = "cb_organization", uniqueConstraints = {@UniqueConstraint(columnNames={"organization_id"})})
        public class Organization implements Serializable {

            @Id
            @NotNull
            @GeneratedValue(generator = "uuid-strategy")
            @Column(name = "id")
            /* the database id of the Organization */
            private String id;

            @ManyToOne
            @JoinColumn(name = "state_id", nullable = true, updatable = false)
            /* the State for the organization */
            private State state;

            @ManyToOne
            @JoinColumn(name = "country_id", nullable = false, updatable = false)
            /* The country the Organization is in */
            private Country country;

            @ManyToOne(optional = false)
            @JoinColumn(name = "organization_type_id", nullable = false, updatable = false)
            /* The type of the Organization */
            private OrganizationType organizationType;

    Notice the members "organizationType," "state," and "country," which are all objects. I wish to build a query based on their id fields. This code

        @Override
        public List<Organization> findByOrgTypesCountryAndState(Set<String> organizationTypes, String countryId, String stateId) {
            CriteriaBuilder builder = entityManager.getCriteriaBuilder();
            CriteriaQuery<Organization> criteria = builder.createQuery(Organization.class);
            Root<Organization> org = criteria.from(Organization.class);
            criteria.select(org).where(builder.and(org.get("organizationType.id").in(organizationTypes),
                                                   builder.equal(org.get("state.id"), stateId),
                                                   builder.equal(org.get("country.id"), countryId)));
            return entityManager.createQuery(criteria).getResultList();
        }

    is throwing the exception below. How do I heal the pain?

        java.lang.IllegalArgumentException: Unable to resolve attribute [organizationType.id] against path
            at org.hibernate.ejb.criteria.path.AbstractPathImpl.unknownAttribute(AbstractPathImpl.java:116)
            at org.hibernate.ejb.criteria.path.AbstractPathImpl.locateAttribute(AbstractPathImpl.java:221)
            at org.hibernate.ejb.criteria.path.AbstractPathImpl.get(AbstractPathImpl.java:192)
            at org.mainco.subco.organization.repo.OrganizationDaoImpl.findByOrgTypesCountryAndState(OrganizationDaoImpl.java:248)
            at org.mainco.subco.organization.repo.OrganizationDaoTest.testFindByOrgTypesCountryAndState(OrganizationDaoTest.java:55)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
            at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
            at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
            at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
            at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74)
            at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83)
            at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
            at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
            at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
            at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
            at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
            at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
            at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
            at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)
            at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
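    For reference, the JPA Criteria API does not accept dotted strings as attribute paths; nested attributes are navigated one step at a time. A sketch of the usual fix (it reuses the names from the question; it is not from the original post):

        // Navigate from Organization to the related entity, then to its id,
        // instead of passing a dotted path string like "organizationType.id"
        criteria.select(org).where(builder.and(
                org.get("organizationType").get("id").in(organizationTypes),
                builder.equal(org.get("state").get("id"), stateId),
                builder.equal(org.get("country").get("id"), countryId)));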

    Read the article

  • WCF service dataContractSerializer maxItemsInObjectGraph in web.config

    - by Dave
    I am having issues specifying the dataContractSerializer maxItemsInObjectGraph in the host's web.config:

        <behaviors>
          <serviceBehaviors>
            <behavior name="beSetting">
              <serviceMetadata httpGetEnabled="True"/>
              <serviceDebug includeExceptionDetailInFaults="True" />
              <dataContractSerializer maxItemsInObjectGraph="2147483646"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <services>
          <service name="MyNamespace.MyService" behaviorConfiguration="beSetting">
            <endpoint address="http://localhost/myservice/"
                      binding="webHttpBinding"
                      bindingConfiguration="webHttpBinding1"
                      contract="MyNamespace.IMyService"
                      bindingNamespace="MyNamespace">
            </endpoint>
          </service>
        </services>

    The above has no effect on my data pull. The server times out because of the large volume of data. I can, however, specify the max limit in code, and that works:

        [ServiceBehavior(MaxItemsInObjectGraph=2147483646, IncludeExceptionDetailInFaults = true)]
        public abstract class MyService : MyService
        {
            blah...
        }

    Does anyone know why I can't make this work through a web.config setting? I would like to keep it in the web.config so it is easier for future updates.
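    One thing worth checking (an assumption, not from the original question): maxItemsInObjectGraph has to be raised on the consuming side as well, since the quota is enforced by each serializer independently. A sketch of the matching endpoint behavior on the client:

        <behaviors>
          <endpointBehaviors>
            <behavior name="largeObjectGraph">
              <!-- same quota, applied to the endpoint's DataContractSerializer -->
              <dataContractSerializer maxItemsInObjectGraph="2147483646"/>
            </behavior>
          </endpointBehaviors>
        </behaviors>

    with the endpoint then referencing it via behaviorConfiguration="largeObjectGraph".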

    Read the article

  • MVC Html.ActionLink not passing querystring properly

    - by Dave Hanna
    This seems like it should be pretty straightforward, but I'm apparently confused. I have a List view that displays a paged list. At the bottom I have a set of action links:

        <%= Html.ActionLink("First Page", "List", new { page = 1 }) %>
        <%= Html.ActionLink("Prev Page", "List", new { page = Model.PageNumber - 1 }) %>
        <%= Html.ActionLink("Next Page", "List", new { page = Model.PageNumber + 1 }) %>
        <%= Html.ActionLink("Last Page", "List", new { page = Model.LastPage }) %>

    I'm using the basic default route setup, except with "List" substituted for "Index":

        routes.MapRoute(
            "Default",                                                                // Route name
            "{controller}/{action}/{id}",                                             // URL with parameters
            new { controller = "Home", action = "List", id = UrlParameter.Optional }  // Parameter defaults
        );

    The problem is that the ActionLink helpers are generating links of the form

        http://localhost:2083/Retrofit?page=2

    rather than

        http://localhost:2083/Retrofit/?page=2

    (with the trailing slash after the controller name and before the query string). When the first URL is routed, it completely loses the query string – if I look at Request.QueryString by the time it gets to the controller, it's null. If I enter the second URL (with the trailing slash), it comes in properly (i.e., a QueryString of "page=2").

    So how can I either get the ActionLink helper to generate the right URL, or get the routing to properly parse what ActionLink is generating? Thanks.
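    One workaround worth sketching (not from the original thread): make page a route segment so the helper never has to emit a query string at all. This assumes the standard MapRoute API, and the route would be registered before the "Default" route:

        routes.MapRoute(
            "PagedList",                        // hypothetical route name
            "{controller}/{action}/{page}",     // page travels as a path segment: /Retrofit/List/2
            new { controller = "Home", action = "List" },
            new { page = @"\d+" }               // numeric constraint so non-paging URLs fall through
        );

    With this in place, Html.ActionLink("Next Page", "List", new { page = Model.PageNumber + 1 }) generates /Retrofit/List/3 instead of a query-string URL.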

    Read the article

  • Rounded Rect UIButton without the border

    - by Dave DeLong
    I'm trying to draw a normal rounded rect UIButton, but without the border. Ideally I'd like to be able to change some setting on the UIButton to disable the border. My problem is that if I change the button type to "custom", I don't get the nice blue selection gradient (which I want to keep), and I have no idea how to draw it manually.

    Read the article

  • WCF set bindings on service at runtime

    - by Dave
    My app has to be installed on my clients' web servers. Some clients want to use SSL and some do not. My app has a WCF service, and I currently have to go into the web.config for each install and switch the security mode (e.g., between Transport and None) depending on the client's SSL situation. I am able to set the client bindings at runtime. However, I would like to know if there is a way to set the service bindings at runtime (on the server side).
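    For a self-hosted service, the binding can be chosen entirely in code at startup; a minimal sketch (the type names MyService/IMyService and the ReadSslFlagFromConfig helper are placeholders, and an IIS-hosted service would need the same logic inside a custom ServiceHostFactory instead):

        using System;
        using System.ServiceModel;

        // Pick the security mode from your own configuration flag at startup
        bool useSsl = ReadSslFlagFromConfig();   // hypothetical helper

        var binding = new WSHttpBinding();
        binding.Security.Mode = useSsl ? SecurityMode.Transport : SecurityMode.None;

        var baseAddress = new Uri(useSsl ? "https://localhost/myservice" : "http://localhost/myservice");
        var host = new ServiceHost(typeof(MyService), baseAddress);
        host.AddServiceEndpoint(typeof(IMyService), binding, "");
        host.Open();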

    Read the article

  • export data from WCF Service to excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code to load the data list is as below:

        List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); // call to WCF service
        myDataList.DataSource = r;
        myDataList.DataBind();

    I am using the Response object to do the job:

        Response.Clear();
        Response.Buffer = true;
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter tw = new HtmlTextWriter(sw);
        myDataList.RenderControl(tw);
        Response.Write(sb.ToString());
        Response.End();

    The problem is that the WCF service times out for a large amount of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, and hence the Excel sheet is always empty. Please help me figure this out.
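    Since the symptom is a timeout on the service call, one place to start (an assumption, not from the original post) is the WCF timeout and message-size quotas on the client binding, whose defaults are small. A sketch for a basicHttpBinding endpoint:

        <bindings>
          <basicHttpBinding>
            <binding name="largeResult"
                     sendTimeout="00:10:00"
                     receiveTimeout="00:10:00"
                     maxReceivedMessageSize="2147483647">
              <!-- raise reader quotas so large payloads can be deserialized -->
              <readerQuotas maxArrayLength="2147483647"
                            maxStringContentLength="2147483647" />
            </binding>
          </basicHttpBinding>
        </bindings>

    referenced from the endpoint with bindingConfiguration="largeResult".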

    Read the article

  • IE8 Developer Tools won't open

    - by Dave
    Just installed IE8 on WinXP. I'm trying to use the Dev Tools, but clicking on the menu item or hitting F12 doesn't do anything. I can see the option in the Tools menu; I just can't use it. I've checked to make sure it wasn't opening minimized or off-screen, and I've checked the registry settings used to disable the tools – those registry keys don't even exist. Suggestions? I was going to reinstall but thought I'd check here first.

    Read the article

  • WebView and Cookies on Android

    - by dave smith
    I have an application on appspot that works fine through a regular browser; however, when used through an Android WebView, it cannot set and read cookies. I am not trying to get cookies "outside" this web application, BTW – once the URL is visited by the WebView, all processing, ids, etc. can stay there; all I need is session management inside that application. The first screen also loads fine, so I know WebView + server interactivity is not broken. I looked at the WebSettings class; there was no call like setEnableCookies. I load the URL like this:

        public class MyActivity extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                WebView webview = new WebView(this);
                setContentView(webview);
                webview.loadUrl([MY URL]);
            }
            ..
        }

    Any ideas?
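    For reference, cookie handling on Android of that era lives in CookieManager rather than WebSettings; a sketch of the usual setup (the only assumption is the placement – before loadUrl):

        // Initialize cookie support before creating/loading the WebView;
        // CookieSyncManager.createInstance() was required on older API levels
        CookieSyncManager.createInstance(this);
        CookieManager.getInstance().setAcceptCookie(true);

        WebView webview = new WebView(this);
        setContentView(webview);
        webview.loadUrl(myUrl);   // myUrl stands in for the elided [MY URL]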

    Read the article

  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view in grid-results mode is 65535 (it is even less in text-results mode). Sometimes I need to see something beyond that range. Using SQL Server 2005 databases, I often used the trick of converting the column to XML, because SSMS lets you view much larger amounts of text that way:

        SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ...

    But now I am using SQL CE, and there is no xml data type. There is still a "Maximum Characters Retrieved XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value?

    [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best to worst:

    1. A solution that works just as easily as the XML trick in SQL CE – that is, a single function (convert, cast, etc.) that does the job.
    2. A not-too-invasive way to hack SSMS to get it to display more text in the results.
    3. An equivalent SQL query (perhaps something that creatively uses SUBSTRING and generates multiple ad-hoc columns??) to see the results.

    The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?
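    On option 3, a sketch of the SUBSTRING idea floated in the question itself – splitting the value into 65535-character slices that the grid will display in full. This assumes your SQL CE version allows SUBSTRING over the column (string-function support for ntext is limited, so a CAST may be needed; MyTable/MyCol are the question's placeholder names):

        -- Each slice stays under the 65535-character grid limit
        SELECT SUBSTRING(MyCol, 1,      65535) AS Part1,
               SUBSTRING(MyCol, 65536,  65535) AS Part2,
               SUBSTRING(MyCol, 131071, 65535) AS Part3
        FROM MyTable
        WHERE ...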

    Read the article

  • Creating a semi-transparent blurred background WPF

    - by Dave Colwell
    Hi guys, I have a border, and I want the background of this border to be partially transparent (opacity 0.8), but I do not want the image behind it to be well defined. The effect I am after is similar to the Windows Vista window border effect, where you can see that something is behind it but you can't tell what it is. A few clarifications: I am working on Windows XP, so I can't use Vista Glass, and I need this solution to be portable across any Windows platform. Any help would be appreciated :)

    Read the article

  • Problem with jqGrid in Internet Explorer 8

    - by Dave Swersky
    I have developed an ASP.NET MVC (version 2 RC) application with a ton of jqGrids. It's working like a champ in Firefox, but I've discovered a problem in IE8. The "Main View" grids can be filtered by a search box or one of a few dropdowns above the grid. I use some javascript to reset the url for the grid, then trigger a reload, thusly:

        function filterByName(filter) {
            if (filter == 'All') {
                $('#list').setGridParam({ url: 'Application/GetApplications' });
                $('#list').trigger("reloadGrid");
            }
            else {
                $('#list').setGridParam({ url: 'Application/GetAppByName/' + filter + '/' });
                $('#list').trigger("reloadGrid");
            }
        }

    This works like magic in Firefox, but I'm getting an HTTP 400 Bad Request when I do this in IE8. The IE8 client-side debugger is like flint and tinder compared to Firebug's flamethrower, so I'm not having much luck figuring out why it breaks in IE8. Has anyone seen this? Also, the jqGrid "trigger" method here is swallowing the AJAX exception. Is there a way to get it to bubble up so I can get to the exception details?

    UPDATE: The problem was with the syntax in my "onchange" event for the dropdowns. I was using:

        onchange="filterByMnemonic($('#drpMnemonic')[0].value);

    which Firefox apparently doesn't mind but IE sees as nuthin'. This, however, works in both browsers:

        onchange = "filterByMnemonic($('#drpMnemonic > option:selected').attr('value'));"

    Read the article

  • Can plugins loaded with MEF resolve their own internal dependencies with the same MEF container for

    - by Dave
    From my experimentation, I think the answer is "kind of", but I could have made a mistake. I have an application that loads appliance plugins with MEF, and that part is working fine. Now let's say that my BlenderAppliance wants to resolve several of its dependencies with MEF, each of which implements IApplianceFeature. I've just applied the ImportMany attribute in my plugin, and I made sure to create the plugin using MEF so that the imports work properly. I said "kind of" because some of the plugin's internals (i.e. the model) are loading with MEF just fine, but the IApplianceFeatures aren't. The difference here is that the IApplianceFeatures are themselves assemblies, and at the moment they are one folder above that of the plugin itself, i.e.

        + application folder
        |     IApplianceFeature1.dll
        |     IApplianceFeature2.dll
        +---+ plugin folder
            |     BlenderAppliance.dll

    Now if my application uses an AggregateCatalog to load the "." and ".\plugins" folders, why doesn't it ever load the IApplianceFeature assemblies for me? Is it possible / advisable to have the plugin create its own MEF container to resolve its dependencies, or does really nasty stuff happen? If you have any stories about this scenario, please share. :)
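    For reference, the host-side setup being described – a single container whose catalog spans both folders, so plugin-level [Import]/[ImportMany] members can be satisfied from either location – would look roughly like this (a sketch of standard MEF API usage; only the folder names come from the question):

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        // One catalog over the application folder and the plugin folder
        var catalog = new AggregateCatalog(
            new DirectoryCatalog("."),
            new DirectoryCatalog(@".\plugins"));

        // Parts created from this container have their imports resolved
        // against everything the catalog can see, including IApplianceFeature1/2.dll
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);   // 'this' being the host object with [Import] members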

    Read the article

  • VS 2010 debugger not loading symbols when attaching to NUnit

    - by Dave Hanna
    (I just posted this in the NUnit discussion group on groups.google.com)

    Under VS 2008, I would run my tests under NUnit and, if I needed to debug, I would attach the VS 2008 debugger to the running NUnit process (Debug - Attach to Process) and set any breakpoints on code I wanted to examine. When I hit the Run button in NUnit, it would hit the breakpoint. (BTW, if it matters, this was running NUnit 2.5.2.)

    I just upgraded to NUnit 2.5.4 and VS 2010. When I set a breakpoint, then attach to NUnit, I get a little warning symbol on the breakpoint dot, and hovering over it gives the tooltip "Breakpoint will not be hit. No symbols are currently loaded". Going to the Debug - Windows - Modules window shows a whole bunch of Windows and NUnit modules loaded, with a symbol status of "Skipped loading symbols", and then one module with a funny name that changes each time (r1euhmh5 right now) with a symbol status of "No symbols loaded". (There is no trace of a module with a name remotely like my DLL under test.)

    Right-clicking the funny filename (assuming that to be some mapping of my DLL under test) and clicking Load Symbols From - Symbol Path, then navigating to the bin\debug folder and clicking the .pdb file of my DLL under test, I get the message "A matching symbol was not found in this folder". (The top of the Open dialog box has a line that says "Original location: r1euhmh5.pdb".)

    So what's changed? And how do I go about debugging/breakpointing under VS 2010 / NUnit 2.5.4? (Or is it possible I screwed something up when I decided to go through my VS 2010 options and set some of them to more advanced levels than I knew what I was doing with?) I appreciate any help.
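    Two things commonly checked in this situation (assumptions on my part, not from the post): NUnit 2.5.x may run .NET 4.0 test assemblies in a separate nunit-agent.exe process, in which case the debugger must be attached to nunit-agent.exe rather than nunit.exe; and NUnit itself launches under the .NET 2.0 CLR by default, so an attach in "Managed (v4.0)" mode sees no matching modules. The usual workaround for the latter is forcing NUnit onto the 4.0 runtime in nunit.exe.config:

        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <!-- run NUnit itself on the .NET 4.0 CLR so VS 2010 can attach normally -->
            <supportedRuntime version="v4.0.30319" />
          </startup>
        </configuration>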

    Read the article
