Search Results

Search found 31328 results on 1254 pages for 'sql join'.


  • Windows Azure End to End Examples

    - by BuckWoody
    I’m fascinated by the way people learn. I’m told there are several methods people use to understand new information, from reading to watching, from experiencing to exploring. Personally, I use multiple methods of learning when I encounter a new topic, usually starting with reading a bit about the concepts. I quickly want to put those into practice, however, especially in the technical realm. I immediately look for examples where I can start trying out the concepts. But I often want a “real” example – not just something that represents the concept, but something that is real-world, showing some feature I could actually use. And it’s no different with the Windows Azure platform – I like finding things I can do now, and actually use. So when I started learning Windows Azure, I of course began with the Windows Azure Training Kit – which has lots of examples and labs, presentations and so on. But from there, I wanted more examples I could learn from, and eventually teach others with. I was asked if I would write a few of those up, so here are the ones I use.

    CodePlex

    CodePlex is Microsoft’s version of an “Open Source” repository. Anyone can start a project, add code, documentation and more to it and make it available to the world, free of charge, using various licenses as they wish. Microsoft also uses this location for most of the examples we publish, and sample databases for SQL Server. If you search in CodePlex for “Azure”, you’ll come back with a list of projects that folks have posted, including those of us at Microsoft. The source code and documentation are there, so you can learn using actual examples of code that will do what you need. There’s everything from a simple table query to a full project that is sort of a “Corporate Dropbox” that uses Windows Azure Storage. The advantage is that this code is immediately usable. It’s searchable, and you can often find a complete solution to meet your needs. The disadvantage is that the code is pretty specific – it may not cover a huge project like you’re looking for. Also, depending on the author(s), you might not find the documentation level you want.

    Link: http://azureexamples.codeplex.com/site/search?query=Azure&ac=8

    Tailspin

    Microsoft Patterns and Practices is a group here that does an amazing job at sharing standard ways of doing IT – from operations to coding. If you’re not familiar with this resource, make sure you read up on it. Long before I joined Microsoft I used their work in my daily job – saved a ton of time. It has resources not only for Windows Azure but other Microsoft software as well. The Patterns and Practices group also publishes full books – you can buy these, but many are also online for free. There’s an end-to-end example for Windows Azure using a company called “Tailspin”, and the work covers not only the code but the design of the full solution. If you really want to understand the thought that goes into a Platform-as-a-Service solution, this is an excellent resource. The advantages are that this is a book, it’s complete, and it includes a discussion of design decisions. The disadvantage is that it’s a little over a year old – and in “Cloud” years that’s a lot. So many things have changed, improved, and have been added that you need to treat this as a resource, but not the only one. Still, highly recommended.

    Link: http://msdn.microsoft.com/en-us/library/ff728592.aspx

    Azure Stock Trader

    Sometimes you need a mix of a CodePlex-style application, and a little more detail on how it was put together. And it would be great if you could actually play with the completed application, to see how it really functions on the actual platform. That’s the Azure Stock Trader application. There’s a place where you can read about the application, and then it’s been published to Windows Azure – the production platform – and you can use it, explore, and see how it performs. I use this application all the time to demonstrate Windows Azure, or a particular part of Windows Azure. The advantage is that this is an end-to-end application, and online as well. The disadvantage is that it takes a bit of self-learning to work through.

    Links:
    Learn it: http://msdn.microsoft.com/en-us/netframework/bb499684
    Use it: https://azurestocktrader.cloudapp.net/

    Read the article

  • SQL Saturday 43 in Redmond

    - by AjarnMark
    I attended my first SQLSaturday a couple of days ago, SQLSaturday #43 in Redmond (at Microsoft).  I got there really early, primarily because I forgot how fast I can get there from my home when nobody else is on the road.  On a weekday in rush hour traffic, it would have taken two hours to get there.  I gave myself 90 minutes, and actually got there in about 45.  Crazy!

    I made the mistake of going to the main Microsoft campus, but that’s not where the event was being held.  Instead it was in a big Microsoft conference center on the other side of the highway.  Fortunately, I had the address with me and quickly realized my mistake.  When I got back on track, I noticed that there were bright yellow signs out on the street corner that looked like they said they were for SOL Saturday, which actually was appropriate since it was the sunniest day around here in a long time.  Since I was there so early, the registration was just getting set up, so I found Greg Larsen, who was coordinating things, and offered to help.  He put me to work with a group of people organizing the pre-printed raffle tickets and stuffing swag bags.

    I had never been to a SQLSaturday before this one, so I wasn’t exactly sure what to expect, even though I have read about a few on some blogs.  It makes sense that each one will be a little bit different since they are almost completely volunteer driven, and the whole concept is still in its early stages.  I have been to the PASS Summit for the last several years, and was hoping for a smaller version of that.  Now, it’s not really fair to compare one free day of training run entirely by volunteers with a multi-day, $1,000+ event put on under the direction of a professional event management company.  But there are some parallels.

    At this SQLSaturday, there was no opening general session, just coffee and pastries in the common area / expo hallway and straight into the first group of sessions.  I don’t know if that was because there was no single room large enough to hold everyone, or for other reasons.  This worked out okay, but the organization guy in me would have preferred even a 15-minute welcome message from the organizers with a little overview of the day.  Even something as simple as, “Thanks to persons X, Y, and Z for helping put this together… Sessions will start in 20 minutes and are all in rooms down this hallway… the bathrooms are on the other side of the conference center… lunch today is pizza and we would like to thank sponsor Q for providing it.”  It doesn’t need to be much, certainly not a full-blown keynote like at the PASS Summit, but something to use as a rallying point to pull everyone together and get the day off to an official start would be nice.  Again, there may have been logistical reasons why that was not feasible here.  I’m just putting out my thoughts for other SQLSaturday coordinators to consider.

    The event overall was great.  I believe that there were over 300 in attendance, and everything seemed to run smoothly.  At least from an attendee’s point of view, there were plenty of muffins in the morning and pizza in the afternoon, with plenty of pop to drink.  And hey, if you’ve got the food and drink covered, a lot of other stuff could go wrong and people will be very forgiving.  But as I said, everything appeared to run pretty smoothly, at least until Buck Woody showed up in his Oracle shirt.  Other than that, the volunteers did a great job!

    I was a little surprised by how few people in my own backyard I know.  It makes sense if you really think about it, given how many companies must be using SQL Server around here.  I guess I just got spoiled coming into the PASS Summit with a few contacts that I already knew would be there.  Perhaps I have been spending too much time with too few people at the Summits and I need to step out and meet more folks.  Of course, it also is different since the Summit is the big national event and a number of the folks I know are spread out across the country, so the Summit is the only time we’re all in the same place at the same time.  I did make a few new contacts at SQLSaturday, and bumped into a couple of people that I knew (and a couple others that I only knew from Twitter, and didn’t even realize that they were here in the area).

    Other than the sheer entertainment value of Buck Woody’s session, the one that was probably the greatest value for me was a quick introduction to PowerShell.  I have not done anything with it yet, but I think it will be a good tool to use to implement my plans for automated database recovery testing.  I saw just enough at the session to take away some of the intimidation factor, and I am getting ready to jump in and see what I can put together in the next few weeks.  And that right there made the investment worthwhile.  So I encourage you, if you have the opportunity to go to a SQLSaturday event near you, go for it!

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones (Blog | Twitter) – “What the Business Says Is Not What the Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?”

    My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants, because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technical barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope?

    The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: when we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right.

    When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh.

    I prefer to take a right-to-left approach, preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right-to-left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explain the decision-making process that relates to these reports. Sometimes I have them explain their business processes to me, or better yet show me their business processes in action, because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above.

    My next step is to start moving leftward, by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!)

    We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine-tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build.

    So right-to-left design is not truly moving from the right to the left. It starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.

    Read the article

  • SQL Server MVP Deep Dives 2. The Awesome Returns.

    - by Mladen Prajdic
    Two years ago, 59 SQL Server MVPs came together and helped make one of the best books on SQL Server out there. Each chapter was written by an MVP about a part of SQL Server they loved working with. This resulted in superb quality content and excellent ratings from the readers. To top it off, all earnings went to a good cause, the War Child International organization. That book was SQL Server MVP Deep Dives. This year 63 SQL Server MVPs, me included, decided it was time to repeat the success of the first book. Let me introduce you to: SQL Server MVP Deep Dives 2.

    The topics in 60 chapters are grouped into 5 sections: Architecture, Database Administration, Database Development, Performance Tuning and Optimization, and Business Intelligence. They represent over 1000 years of daily experience in various areas of SQL Server.

    I contributed chapter 28 in the Database Development section, titled "Getting asynchronous with Service Broker". In it I show you the Service Broker template you can use for secure communication between two or more SQL Server instances, for whatever purpose you may have. If you haven't heard of Service Broker, it's a part of the database engine that enables you to do completely asynchronous operations in the database itself, or between databases and instances.

    The official release of the book will be next week at PASS, where there will be two slots at which most of the authors will be signing the books you bring. This is also a great opportunity to meet everyone and ask about any problems you may have. So definitely come say hi.

    Again we decided on a charity that will be supported by this book. It's called Operation Smile. They provide free surgeries to repair cleft lips, cleft palates and other facial deformities for children around the globe. You can also help them by donating. You can preorder the book at the Manning Publications website or on Amazon. By having it you not only get to learn a lot, improve your skills and have fun, but you also help a child have a normal life. If that's not a good cause, then I don't know what is.
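
    To give a flavor of what the chapter covers: Service Broker conversations are built from message types, contracts, queues, and services. Here is a minimal T-SQL sketch of those moving parts; all object names are hypothetical, invented for illustration rather than taken from the chapter:

        -- A minimal Service Broker setup; names are made up for this sketch.
        CREATE MESSAGE TYPE [//Example/Request] VALIDATION = WELL_FORMED_XML;
        CREATE MESSAGE TYPE [//Example/Reply]   VALIDATION = WELL_FORMED_XML;
        CREATE CONTRACT [//Example/Contract]
            ([//Example/Request] SENT BY INITIATOR,
             [//Example/Reply]   SENT BY TARGET);
        CREATE QUEUE InitiatorQueue;
        CREATE QUEUE TargetQueue;
        CREATE SERVICE [//Example/InitiatorService] ON QUEUE InitiatorQueue;
        CREATE SERVICE [//Example/TargetService]   ON QUEUE TargetQueue ([//Example/Contract]);

        -- Sending is asynchronous: the message sits in TargetQueue until a
        -- reader RECEIVEs it, which is what makes the pattern fully async.
        DECLARE @dialog UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @dialog
            FROM SERVICE [//Example/InitiatorService]
            TO SERVICE   '//Example/TargetService'
            ON CONTRACT  [//Example/Contract]
            WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @dialog
            MESSAGE TYPE [//Example/Request] (N'<ping/>');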

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here’s a very simple example table to illustrate the issue:

        Customers
          CustomerId    INT           NOT NULL  Primary Key
          CustomerName  NVARCHAR(100) NOT NULL
          SalesRegionId INT           NULL

    The ‘SalesRegionId’ column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

        CustomerId | CustomerName | SalesRegionId
        -----------|--------------|--------------
        1          | Customer A   | 1
        2          | Customer B   | NULL
        3          | Customer C   | 4
        4          | Customer D   | 2
        5          | Customer E   | 3

    How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE SalesRegionId NOT IN (2,4)

    Will this work? In short, no; at least not in the way that you might expect. Here’s what this query will return given the example data we’re working with:

        CustomerId | CustomerName | SalesRegionId
        -----------|--------------|--------------
        1          | Customer A   | 1
        5          | Customer E   | 3

    I was expecting that this query would also return ‘Customer B’, since that customer has a NULL SalesRegionId. In my mind, a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn’t suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should.

    So why doesn’t this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator: “If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression.” The key phrase out of that quote is “… is equal to any expression from the comma-separated list…”. The NULL SalesRegionId isn’t included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS: “The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name.” In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator: “Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values together with IN or NOT IN can produce unexpected results.”

    If I were to include a ‘SET ANSI_NULLS OFF’ command right above my SELECT statement I would get ‘Customer B’ returned in the results, but that’s definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

    This query works and properly includes ‘Customer B’ in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we’d end up with:

        DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
        INSERT @regionsToIgnore VALUES (2),(4)

        SELECT
            c.CustomerId,
            c.CustomerName,
            c.SalesRegionId
        FROM Customers c
        LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
        WHERE r.IgnoredRegionId IS NULL

    By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also will correctly include any customers that do not yet have a sales region.
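
    As an aside, a NOT EXISTS predicate gives the same NULL-safe result without a table variable. The following is only a sketch against the example table above, and it assumes SQL Server 2008 or later for the VALUES row constructor:

        -- Sketch: NOT EXISTS returns TRUE for the NULL SalesRegionId row
        -- (no match is possible), so 'Customer B' is correctly included.
        SELECT c.CustomerId, c.CustomerName, c.SalesRegionId
        FROM Customers c
        WHERE NOT EXISTS (SELECT 1
                          FROM (VALUES (2),(4)) AS ignored(RegionId)
                          WHERE ignored.RegionId = c.SalesRegionId)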

    Read the article

  • FREE! SQL Scripts Manager – a Christmas gift from Red Gate

    Red Gate has released SQL Scripts Manager, a free tool containing over 25 scripts written by SQL Server experts, to help you automate common troubleshooting, diagnostic, and maintenance tasks.

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job.

    I once took over as the system administrator (of which the Oracle and SQL Server servers were a part) at a mid-sized firm. The outgoing administrator had about a two-week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it.

    Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of our feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know?"

    The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality, I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big items I need to know to get started with the systems:

    1. Where is everything/everybody? The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the U number in the rack, the SAN LUNs, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phone numbers.

    2. What do they do? For both the servers and the people, tell them what they do. If it's a database server, detail what each database does and what application goes with it, and who "owns" that application. In my mind, this is one of the most important things a Data Professional needs to know. In the case of the other administrators or co-owners, document each person's responsibilities.

    3. What are the credentials? Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!

    4. What is out of the ordinary? This is the most tricky, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X? Is the person that co-owns an application with you mentally unstable (like me) or have special needs, like "don't talk to Buck before he's had coffee. Nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here.

    This is my short list - anything you care to add?

    Read the article

  • KB2494088 Always Fails : How to get it to install (or stop trying to)?

    - by Peter K.
    I have a problem where KB2494088 (a SQL Server 2008 R2 patch) continually fails to install when I reboot my system. Downloading the patch manually and trying to apply it produces this message: "The SQL Server patch package is part of a general distribution release (GDR). This package cannot be applied since this SQL Server feature has already been patched. To continue, you must install a higher version of the SQL Server patch." Most other posts (this, this) suggest it's either a failed (or cancelled) past installation or requires a SQL Server repair. I've looked at registry settings and tried the repair option, but nothing seems to work.

    Read the article

  • MSDTC Port 135 open bi-directionally

    - by Stephen Lacy
    I have two servers: a web application server and a SQL Server database running on its own server, with a firewall between them. Do I have to open port 135 on both the SQL Server and the web application server? Does SQL Server open its own connection to the web application server on port 135 or any other port? Do I have to point the web application server's MSDTC at the SQL database server in Component Services? And if the firewall is completely open and the Component Services settings allow remote connections, remote administration, etc., are there any other settings that need to be changed in order to allow remote connections to the SQL Server MSDTC?

    Read the article

  • Linq to SQL NullReferenceExceptions: A random needle in a haystack!

    - by Shane
    I'm getting NullReferenceExceptions at seemingly random times in my application and can't track down what could be causing the error. I'll do my best to describe the scenario and setup. Any and all suggestions greatly appreciated!

    It's a C# .NET 3.5 Web Forms application, but I use the WebFormRouting library built by Phil Haack (http://haacked.com/archive/2008/03/11/using-routing-with-webforms.aspx) to leverage the routing libraries of .NET (usually used in conjunction with MVC) instead of using URL rewriting for my URLs.

    My database has 60 tables, all normalized. It's just a massive application. (SQL Server 2008.) All queries are built with Linq to SQL in code (no stored procedures). Each time, a new instance of my data context is created. I use only one data context, with all relationships defined in 4 relationship diagrams in SQL Server. The data context gets created a lot. I've heard arguments on both sides about whether you should leave the data context to be closed automatically or do it yourself; in this case I do it myself. It doesn't seem to matter if I'm creating a lot of instances of the data context or just one.

    For example, I've got a vote-up button with the following code, and it errors probably 1 in 10-20 times.

        protected void VoteUpLinkButton_Click(object sender, EventArgs e)
        {
            DatabaseDataContext db = new DatabaseDataContext();
            StoryVote storyVote = new StoryVote();
            storyVote.StoryId = storyId;
            storyVote.UserId = Utility.GetUserId(Context);
            storyVote.IPAddress = Utility.GetUserIPAddress();
            storyVote.CreatedDate = DateTime.Now;
            storyVote.IsDeleted = false;
            db.StoryVotes.InsertOnSubmit(storyVote);
            db.SubmitChanges();

            // If this story is not yet published, check to see if we should
            // publish it. Make sure that it is already approved.
            if (story.PublishedDate == null && story.ApprovedDate != null)
            {
                Utility.MakeUpcommingNewsPopular(storyId);
            }

            // Refresh our page.
            Response.Redirect("/news/" + category.UniqueName + "/"
                + RouteData.Values["year"].ToString() + "/"
                + RouteData.Values["month"].ToString() + "/"
                + RouteData.Values["day"].ToString() + "/"
                + RouteData.Values["uniquename"].ToString());
        }

    The last thing I tried was the "Auto Close" flag setting on SQL Server. This was set to true and I changed it to false. That doesn't seem to have done the trick, although it has had a good overall effect. Here's a detailed stack trace from an exception that wasn't caught. I also get slightly different errors when caught by my try/catches.

    System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.NullReferenceException: Object reference not set to an instance of an object.
at System.Web.Util.StringUtil.GetStringHashCode(String s) at System.Web.UI.ClientScriptManager.EnsureEventValidationFieldLoaded() at System.Web.UI.ClientScriptManager.ValidateEvent(String uniqueId, String argument) at System.Web.UI.WebControls.TextBox.LoadPostData(String postDataKey, NameValueCollection postCollection) at System.Web.UI.Page.ProcessPostData(NameValueCollection postData, Boolean fBeforeLoad) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.forms_news_detail_aspx.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) HELP!!!

    Read the article

  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic. I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks to every record they modified during their transaction; since different users were editing different records, everyone gets their own locks and everything works. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to get a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back.

    I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table?

    Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to start the program and tell it to update ID 1; let it get its transaction and modify the record; start a second copy of the program and tell it to update ID 2; and have it able to update (and commit) while the first app's transaction is still open. Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed.

        public static void Main()
        {
            Console.Write("ID to update: ");
            var id = int.Parse(Console.ReadLine());
            Console.WriteLine("Starting transaction");
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
            {
                connection.Open();
                var command = connection.CreateCommand();
                command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id;
                Console.WriteLine("Updating record");
                command.ExecuteNonQuery();
                Console.Write("Press ENTER to end transaction: ");
                Console.ReadLine();
                scope.Complete();
            }
        }

    Here are some things I've already tried, with no change in behavior:

    - Changing the transaction isolation level to "read uncommitted"
    - Specifying a "WITH (ROWLOCK)" hint on the UPDATE statement
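
    One detail worth checking, offered as an assumption rather than a diagnosis: if the Values table has no index on ID, each UPDATE has to scan the whole table to find its row, and that scan stalls on the other transaction's exclusive row lock regardless of hints. With a key on ID, each UPDATE can seek straight to its own row:

        -- A sketch, assuming the test table was created without a primary key
        -- and that ID is NOT NULL. With a key in place, each session's UPDATE
        -- touches only its own row, so updates to different IDs no longer collide.
        ALTER TABLE [Values]
        ADD CONSTRAINT PK_Values PRIMARY KEY (ID);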

    Read the article

  • How to select chosen columns from two different entities into one DTO using NHibernate?

    - by Pawel Krakowiak
    I have two classes (just recreating the problem):

        public class User
        {
            public virtual int Id { get; set; }
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual IList<OrgUnitMembership> OrgUnitMemberships { get; set; }
        }

        public class OrgUnitMembership
        {
            public virtual int UserId { get; set; }
            public virtual int OrgUnitId { get; set; }
            public virtual DateTime JoinDate { get; set; }
            public virtual DateTime LeaveDate { get; set; }
        }

    There's a Fluent NHibernate map for both, of course:

        public class UserMapping : ClassMap<User>
        {
            public UserMapping()
            {
                Table("Users");
                Id(e => e.Id).GeneratedBy.Identity();
                Map(e => e.FirstName);
                Map(e => e.LastName);
                HasMany(x => x.OrgUnitMemberships)
                    .KeyColumn(TypeReflector<OrgUnitMembership>
                        .GetPropertyName(p => p.UserId)).ReadOnly().Inverse();
            }
        }

        public class OrgUnitMembershipMapping : ClassMap<OrgUnitMembership>
        {
            public OrgUnitMembershipMapping()
            {
                Table("OrgUnitMembership");
                CompositeId()
                    .KeyProperty(x => x.UserId)
                    .KeyProperty(x => x.OrgUnitId);
                Map(x => x.JoinDate);
                Map(x => x.LeaveDate);
                References(oum => oum.OrgUnit)
                    .Column(TypeReflector<OrgUnitMembership>
                        .GetPropertyName(oum => oum.OrgUnitId)).ReadOnly();
                References(oum => oum.User)
                    .Column(TypeReflector<OrgUnitMembership>
                        .GetPropertyName(oum => oum.UserId)).ReadOnly();
            }
        }

    What I want to do is retrieve some users based on criteria, but I would like to combine all columns from the Users table with some columns from the OrgUnitMemberships table, analogous to a SQL query:

        select u.*, m.JoinDate, m.LeaveDate
        from Users u
        inner join OrgUnitMemberships m on u.Id = m.UserId
        where m.OrgUnitId = :ouid

    I am totally lost; I tried many different options. Using a plain SQL query almost works, but because there are some nullable enums in the User class, AliasToBean fails to transform. Otherwise wrapping a SQL query would work like this:

        return Session
            .CreateSQLQuery(sql)
            .SetParameter("ouid", orgUnitId)
            .SetResultTransformer(Transformers.AliasToBean<UserDTO>())
            .List<UserDTO>();

    I tried the code below as a test (a few different variants), but I'm not sure what I'm doing. It works partially: I get instances of UserDTO back, and the properties coming from OrgUnitMembership (the dates) are filled, but all properties from User are null:

        User user = null;
        OrgUnitMembership membership = null;
        UserDTO dto = null;
        var users = Session.QueryOver(() => user)
            .SelectList(list => list
                .Select(() => user.Id)
                .Select(() => user.FirstName)
                .Select(() => user.LastName))
            .JoinAlias(u => u.OrgUnitMemberships, () => membership)
            //.JoinQueryOver<OrgUnitMembership>(u => u.OrgUnitMemberships)
            .SelectList(list => list
                .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate)
                .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate))
            .TransformUsing(Transformers.AliasToBean<UserDTO>())
            .List<UserDTO>();

    Read the article

  • How to use SQL Expression Fields of Crystal Report 11.5 from VB.NET 2008

    - by Tareq
    I have the following SQL Expression Field in my Crystal Reports 11.5 report:

        {fn CONCAT({fn CONCAT("SPR_PRODUCT"."PRODUCT_ID","SPR_PRODUCT_SUB_ITEM"."P_SUB_ITEM_ID")},{fn CONCAT("SPR_PRODUCT_ITEM"."P_ITEM_ID","SPR_PRODUCT_GROUP"."P_GROUP_ID")})}

    It works well in Preview mode, but when I use the report in my VB.NET 2008 project it says the following:

        Error in compiling SQL Expression : SQL Expressions can not be used in this report.. Error in File <...>.rpt: SQL Expression error: Error in compiling SQL Expression : SQL Expressions can not be used in this report...

    Please tell me how I can use the SQL Expression field from VB.NET. Thanks in advance.

    Read the article

  • ibatis isNotEmpty with multiple variables

    - by hibernate
    Suppose I have a massive table called inactiveUsers and a search form. I want to conditionally join the inactiveUsers table if any user-related characteristic is chosen (address, name, phoneNumber, etc...). Is there any way to do this without the following:

        <isNotEmpty property="address">JOIN inactiveUsers</isNotEmpty>
        <isNotEmpty property="phoneNumber">JOIN inactiveUsers</isNotEmpty>
        <isNotEmpty property="name">JOIN inactiveUsers</isNotEmpty>

    and so on for another 10-20 isNotEmpty clauses? I would like to do something like:

        <isAnyNotEmpty properties="address, phoneNumber, name, ....">JOIN inactiveUsers</isAnyNotEmpty>

    Is this possible with ibatis? If so, how?

    Read the article

  • Querying a self-referencing join with NHibernate Linq

    - by Ben
    In my application I have a Category domain object. Category has a property Parent (of type Category). So in my NHibernate mapping I have:

        <many-to-one name="Parent" column="ParentID"/>

    Before I switched to NHibernate I had the ParentId property on my domain model (mapped to the corresponding database column). This made it easy to query for, say, all top-level categories (ParentID = 0):

        where(c => c.ParentId == 0)

    However, I have since removed the ParentId property from my domain model (because of NHibernate), so I now have to do the same query (using NHibernate.Linq) like so:

        public IList<Category> GetCategories(int parentId)
        {
            if (parentId == 0)
                return _catalogRepository.Categories.Where(x => x.Parent == null).ToList();
            else
                return _catalogRepository.Categories.Where(x => x.Parent.Id == parentId).ToList();
        }

    The real impact that I can see is the SQL generated. Instead of a simple 'select x,y,z from categories where parentid = 0', NHibernate generates a left outer join:

        SELECT this_.CategoryId as CategoryId4_1_, this_.ParentID as ParentID4_1_,
               this_.Name as Name4_1_, this_.Slug as Slug4_1_,
               parent1_.CategoryId as CategoryId4_0_, parent1_.ParentID as ParentID4_0_,
               parent1_.Name as Name4_0_, parent1_.Slug as Slug4_0_
        FROM Categories this_
        left outer join Categories parent1_ on this_.ParentID = parent1_.CategoryId
        WHERE this_.ParentID is null

    which doesn't seem much less efficient than what I had before. Is there a better way of querying these self-referencing joins? It's very tempting to drop the ParentID back onto my domain model for this reason. Thanks, Ben

    Read the article

  • Add comma-separated value of grouped rows to existing query

    - by Peter Lang
    I've got a view for reports, that looks something like this:

        SELECT a.id, a.value1, a.value2, b.value1, /* (+50 more such columns) */
        FROM a
        JOIN b ON (b.id = a.b_id)
        JOIN c ON (c.id = b.c_id)
        LEFT JOIN d ON (d.id = b.d_id)
        LEFT JOIN e ON (e.id = d.e_id)
        /* (+10 more inner/left joins) */

    It joins quite a few tables and returns lots of columns, but indexes are in place and performance is fine. Now I want to add another column to the result, showing a comma-separated value of grouped rows from table y (outer joined via intersection table x), ordered by value, if a.value3 IS NULL; otherwise take a.value3. To comma-separate the grouped values I use Tom Kyte's stragg, but could use COLLECT later. Pseudo-code for the SELECT would look like this:

        SELECT xx.id,
               COALESCE(a.value3, stragg(xx.val)) value3
        FROM (SELECT x.id, y.val
              FROM x
              JOIN y ON (y.id = x.y_id)
              WHERE x.a_id = a.id
              ORDER BY y.val ASC) xx
        GROUP BY xx.id

    What is the best way to do it? Any tips?
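
    In case it helps anyone on a newer version: on Oracle 11gR2 and later, LISTAGG can replace the custom stragg aggregate. This is only a sketch against the table and column names from the question (the x.a_id/x.y_id linkage is assumed):

        -- Sketch: correlated LISTAGG, falling back to a.value3 when present.
        SELECT a.id,
               COALESCE(a.value3,
                        (SELECT LISTAGG(y.val, ',') WITHIN GROUP (ORDER BY y.val)
                           FROM x
                           JOIN y ON y.id = x.y_id
                          WHERE x.a_id = a.id)) AS value3
          FROM a;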

    Read the article

  • Retrieve data from two tables in ASP.NET MVC using ADO.NET Entity Framework

    - by user192972
    Please read my question carefully and reply to me. I have two tables, table1 and table2. In table1 I have columns AddressID (primary key), Address1, Address2, and City. In table2 I have columns ContactID (primary key), AddressID (foreign key), LastName, and FirstName. By using a join operation I can retrieve data from both tables.

    I created a model in my MVC application, and I can see both tables in the entity editor. In the ViewData folder of my solution explorer I created two classes, ContactViewData.cs and SLXRepository.cs.

    In ContactViewData.cs I have the following code:

        public IEnumerable<CONTACT> contacts { get; set; }

    In SLXRepository.cs I have the following code:

        public IEnumerable<CONTACT> GetContacts()
        {
            var contact =
            (
                from c in context.CONTACT
                join a in context.ADDRESS on c.ADDRESSID equals a.ADDRESSID
                select new
                {
                    a.ADDRESS1,
                    a.ADDRESS2,
                    a.CITY,
                    c.FIRSTNAME,
                    c.LASTNAME
                }
            );
            return contact;
        }

    I am getting an error on the return type: Cannot implicitly convert type 'System.Linq.IQueryable' to 'System.Collections.Generic.IEnumerable'. An explicit conversion exists (are you missing a cast?)

    Read the article

  • MySQL: fetch 10 posts, each w/ vote count, sorted by vote count, limited by WHERE clause on posts

    - by nibblebot
    I want to fetch a set of posts with vote count listed, sorted by vote count, e.g.:

        Post 1 - Post Body blah blah - Votes: 500
        Post 2 - Post Body blah blah - Votes: 400
        Post 3 - Post Body blah blah - Votes: 300
        Post 4 - Post Body blah blah - Votes: 200

    I have 2 tables:

        Posts - columns: id, body, is_hidden
        Votes - columns: id, post_id, vote_type_id

    Here is the query I've tried:

        SELECT p.*, v.yes_count
        FROM posts p
        LEFT JOIN (SELECT post_id, vote_type_id, COUNT(1) AS yes_count
                   FROM votes
                   WHERE (vote_type_id = 1)
                   GROUP BY post_id
                   ORDER BY yes_count DESC
                   LIMIT 0, 10) v ON v.post_id = p.id
        WHERE (p.is_hidden = 0)
        ORDER BY yes_count DESC
        LIMIT 0, 10

    Correctness: The above query almost works. The subselect is including votes for posts that have is_hidden = 1, so when I left join it to posts, if a hidden post is in the top 10 (ranked by votes), I can end up with records with NULL in the yes_count field.

    Performance: I have ~50k posts and ~500k votes. On my dev machine, the above query is running in .4 sec. I'd like to stay at or below this execution time.

    Indexes: I have an index on the Votes table that covers the fields vote_type_id and post_id.

    EXPLAIN output:

        id | select_type | table      | type | possible_keys | key        | key_len | ref    | rows   | Extra
        1  | PRIMARY     | p          | ALL  | NULL          | NULL       | NULL    | NULL   | 45985  | Using where; Using temporary; Using filesort
        1  | PRIMARY     | <derived2> | ALL  | NULL          | NULL       | NULL    | NULL   | 10     |
        2  | DERIVED     | votes      | ref  | VotingPost    | VotingPost | 4       |        | 319881 | Using where; Using index; Using temporary; Using filesort
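
    For comparison, here is one rewrite that sidesteps the hidden-post problem by filtering posts first and letting the join supply the votes. It is a sketch against the schema above, untested against the real data, and it gives up the pre-limited subselect, so the performance trade-off would need measuring:

        -- Sketch: hidden posts never enter the aggregation, so no NULL slots.
        SELECT p.id, p.body, COUNT(v.id) AS yes_count
        FROM posts p
        LEFT JOIN votes v ON v.post_id = p.id
                         AND v.vote_type_id = 1
        WHERE p.is_hidden = 0
        GROUP BY p.id, p.body
        ORDER BY yes_count DESC
        LIMIT 0, 10;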

    Read the article

  • GET variable after doing a Join Query

    - by John
    Hello, for the code below, on the second link (http://www...com/sandbox/comments/index.php?submission='.$row["title"].'), I would like to pass $row["submissionid"] as a GET variable instead. I tried this and it caused all of the code below to produce a blank result. Is there a way that I can do what I want? Thanks in advance, John

        $sqlStr = "SELECT s.loginid, s.title, s.url, s.displayurl, l.username,
                          COUNT(c.commentid) countComments
                   FROM submission s
                   INNER JOIN login l ON s.loginid = l.loginid
                   LEFT OUTER JOIN comment c ON s.submissionid = c.submissionid
                   GROUP BY s.submissionid
                   ORDER BY s.datesubmitted DESC
                   LIMIT 10";

        $result = mysql_query($sqlStr);
        $arr = array();
        echo "<table class=\"samplesrec\">";
        while ($row = mysql_fetch_array($result)) {
            echo '<tr>';
            echo '<td class="sitename1"><a href="http://www.'.$row["url"].'">'.$row["title"].'</a></td>';
            echo '</tr>';
            echo '<tr>';
            echo '<td class="sitename2"><a href="http://www...com/sandbox/members/index.php?profile='.$row["username"].'">'.$row["username"].'</a><a href="http://www...com/sandbox/comments/index.php?submission='.$row["title"].'">'.$row["countComments"].'</a></td>';
            echo '</tr>';
        }
        echo "</table>";
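
    One observation, offered as a guess rather than a confirmed diagnosis: the query's SELECT list never includes s.submissionid, so $row["submissionid"] would be undefined in the loop, which could explain the blank output. Exposing the column in the query would make it available to the page:

        -- Sketch: add submissionid to the select list so the PHP code can read it.
        SELECT s.submissionid, s.loginid, s.title, s.url, s.displayurl,
               l.username, COUNT(c.commentid) AS countComments
        FROM submission s
        INNER JOIN login l ON s.loginid = l.loginid
        LEFT OUTER JOIN comment c ON s.submissionid = c.submissionid
        GROUP BY s.submissionid
        ORDER BY s.datesubmitted DESC
        LIMIT 10;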

    Read the article

  • Linq join two dictionaries using a common key

    - by rboarman
    Hello, I am trying to join two Dictionary collections together based on a common lookup value.

        var idList = new Dictionary<int, int>();
        idList.Add(1, 1);
        idList.Add(3, 3);
        idList.Add(5, 5);

        var lookupList = new Dictionary<int, int>();
        lookupList.Add(1, 1000);
        lookupList.Add(2, 1001);
        lookupList.Add(3, 1002);
        lookupList.Add(4, 1003);
        lookupList.Add(5, 1004);
        lookupList.Add(6, 1005);
        lookupList.Add(7, 1006);

        // Something like this:
        var q = from id in idList.Keys
                join entry in lookupList on entry.Key equals id
                select entry.Value;

    The Linq statement above is only an example and does not compile. For each entry in idList, pull the value from lookupList based on matching keys. The result should be a list of values from lookupList (1000, 1002, 1004). What's the easiest way to do this using Linq? Thank you, Rick

    Read the article

  • Join Query Not Working

    - by John
    Hello, I am using three MySQL tables:

        comment:    commentid, loginid, submissionid, comment, datecommented
        login:      loginid, username, password, email, actcode, disabled, activated, created, points
        submission: submissionid, loginid, title, url, displayurl, datesubmitted

    In these three tables, the "loginid" columns correspond. I would like to pull the top 10 loginids based on the number of "submissionid"s. I would like to display them in a 3-column HTML table that shows the "username" in the first column, the number of "submissionid"s in the second column, and the number of "commentid"s in the third column. I tried using the query below but it did not work. Any idea why not? Thanks in advance, John

        $sqlStr = "SELECT l.username, l.loginid,
                          c.commentid, count(s.commentid) countComments,
                          c.comment, c.datecommented,
                          s.submissionid, count(s.submissionid) countSubmissions,
                          s.title, s.url, s.displayurl, s.datesubmitted
                   FROM comment AS c
                   INNER JOIN login AS l ON c.loginid = l.loginid
                   INNER JOIN submission AS s ON c.loginid = s.loginid
                   GROUP BY c.loginid
                   ORDER BY countSubmissions DESC
                   LIMIT 10";

        $result = mysql_query($sqlStr);
        $arr = array();
        echo "<table class=\"samplesrec1\">";
        while ($row = mysql_fetch_array($result)) {
            echo '<tr>';
            echo '<td class="sitename1"><a href="http://www...com/.../members/index.php?profile='.$row["username"].'">'.stripslashes($row["username"]).'</a></td>';
            echo '</tr>';
            echo '<td class="sitename1">'.stripslashes($row["countSubmissions"]).'</td>';
            echo '</tr>';
            echo '</tr>';
            echo '<td class="sitename1">'.stripslashes($row["countComments"]).'</td>';
            echo '</tr>';
        }
        echo "</table>";
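
    A note from reading the query, offered as a diagnosis rather than a tested fix: count(s.commentid) references the submission alias, which has no commentid column, and joining comment and submission on the same loginid multiplies rows, so both counts get inflated by the cross product. Correlated subqueries avoid the fan-out; a sketch against the tables above:

        -- Sketch: compute each count independently so the join can't inflate them.
        SELECT l.username,
               (SELECT COUNT(*) FROM submission s
                 WHERE s.loginid = l.loginid) AS countSubmissions,
               (SELECT COUNT(*) FROM comment c
                 WHERE c.loginid = l.loginid) AS countComments
        FROM login l
        ORDER BY countSubmissions DESC
        LIMIT 10;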

    Read the article

  • Please help optimize a long-running query (LEFT OUTER JOIN, with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is: SELECT d.bn, d.4700, d.4500, ... , p.`Activity Description` FROM ( SELECT temp.bn, temp.4700, temp.4500, .... FROM `tdata` temp GROUP BY temp.bn HAVING (COUNT(temp.bn) = 1) ) d LEFT OUTER JOIN ( SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description` FROM `pdata` temp2 GROUP BY temp2.bn ) p ON p.bn = d.bn; The ... represents other fields that aren't really important to solving this problem. The issue is on the the second subquery - it is not using the index I have created and I am not sure why, it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappy, however an explain on the second shows a 'Using temporary; Using filesort'. Please see the indexes I have created in the below table create statements. Can anyone help me optimize this? By way of quick explanation the first subquery is meant to only select records that have unique bn's, the second, while it looks a bit wacky (with the max function there which is not being used in the result set) is making sure that only one record from the right part of the join is included in the result set. My table create statements are CREATE TABLE `tdata` ( `BN` varchar(15) DEFAULT NULL, `4000` varchar(3) DEFAULT NULL, `5800` varchar(3) DEFAULT NULL, .... KEY `BN` (`BN`), KEY `idx_t3010`(`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 CREATE TABLE `pdata` ( `BN` varchar(15) DEFAULT NULL, `FPE` datetime DEFAULT NULL, `Activity Description` text, .... KEY `BN` (`BN`), KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100)) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 Thanks!
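
    One avenue that may be worth trying, offered as a suggestion rather than a guaranteed fix: `Activity Description` is a non-grouped TEXT column in that GROUP BY, so its value per bn is indeterminate anyway, and grouping over a text column tends to force the temporary table and filesort. Resolving MAX(FPE) per bn first and then joining back fetches the matching row's description deterministically:

        -- Sketch: derive the max FPE per bn, then join back for the row's columns.
        SELECT p1.bn, p1.FPE AS max_fpe, p1.`Activity Description`
        FROM `pdata` p1
        JOIN (SELECT bn, MAX(FPE) AS max_fpe
              FROM `pdata`
              GROUP BY bn) p2
          ON p2.bn = p1.bn
         AND p2.max_fpe = p1.FPE;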

    Read the article

  • How can I SELECT rows with MAX(Column value), DISTINCT by another column in SQL?

    - by Kaptah
    My table is:

        id | home | datetime   | player | resource
        ---|------|------------|--------|---------
        1  | 10   | 04/03/2009 | john   | 399
        2  | 11   | 04/03/2009 | juliet | 244
        5  | 12   | 04/03/2009 | borat  | 555
        3  | 10   | 03/03/2009 | john   | 300
        4  | 11   | 03/03/2009 | juliet | 200
        6  | 12   | 03/03/2009 | borat  | 500
        7  | 13   | 24/12/2008 | borat  | 600
        8  | 13   | 01/01/2009 | borat  | 700

    I need to select each distinct "home" holding the maximum value of "datetime". The result would be:

        id | home | datetime   | player | resource
        ---|------|------------|--------|---------
        1  | 10   | 04/03/2009 | john   | 399
        2  | 11   | 04/03/2009 | juliet | 244
        5  | 12   | 04/03/2009 | borat  | 555
        7  | 13   | 24/12/2008 | borat  | 600
        8  | 13   | 01/01/2009 | borat  | 700

    I have tried:

        // 1 ..by the MySQL manual:
        SELECT DISTINCT home, id, datetime as dt, player, resource
        FROM topten t1
        WHERE datetime = (SELECT MAX(t2.datetime)
                          FROM topten t2
                          GROUP BY home)
        GROUP BY daytime
        ORDER BY daytime DESC

    Doesn't work. The result set has 130 rows although the database holds 187, and the result includes some duplicates of 'home'.

        // 2 ..join
        SELECT s1.id, s1.home, s1.datetime, s1.player, s1.resource
        FROM topten s1
        JOIN (SELECT id, MAX(datetime) AS dt
              FROM topten
              GROUP BY id) AS s2
          ON s1.id = s2.id
        ORDER BY daytime

    Nope. Gives all the records.

        // 3 ..something exotic:

    With various results.
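
    For completeness, the usual shape of the fix (a sketch, not verified against the real data) is to compute the max datetime per home in a derived table and join it back on both columns:

        -- Sketch: group by home (not id), then join back to pick the full rows.
        SELECT t1.id, t1.home, t1.datetime, t1.player, t1.resource
        FROM topten t1
        JOIN (SELECT home, MAX(datetime) AS maxdt
              FROM topten
              GROUP BY home) t2
          ON t2.home = t1.home
         AND t2.maxdt = t1.datetime;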

    Read the article

  • How can I make this SQL query more efficient? PHP.

    - by Alan Grant
    Hi all, I have a system whereby a user can view categories that they've subscribed to individually, and also those that are available in the region they belong to by default. So, the tables are as follows:

        Categories
        UsersCategories
        RegionsCategories

    I'm querying the db for all the categories within their region, and also all the individual categories that they've subscribed to. My query is as follows:

        SELECT * FROM (categories c)
        LEFT JOIN users_categories uc ON uc.category_id = c.id
        LEFT JOIN regions_categories rc ON rc.category_id = c.id
        WHERE (rc.region_id = ? OR uc.user_id = ?)

    At least I believe that's the query; I'm creating it using Cake's ORM layer, so the exact code is:

        $conditions = array(
            array(
                "OR" => array(
                    'RegionsCategories.region_id' => $region_id,
                    'UsersCategories.user_id' => $user_id
                )
            ));
        $this->find('all', $conditions);

    This turns out to be incredibly slow (sometimes around 20 seconds or so; each table has around 5,000 rows). Is my design at fault here? How can I retrieve both the users' individual categories and those within their region all in one query without it taking ages? Thanks!
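
    One common rewrite when an OR condition spans two different joined tables, sketched here against the tables above (column names assumed from the query, untested): split the OR into two inner-join queries and UNION them, so each half can be driven by an index on its own join table.

        -- Sketch: each branch uses its own key; UNION removes duplicates where
        -- a category matches both the region and the user subscription.
        SELECT c.*
        FROM categories c
        JOIN regions_categories rc ON rc.category_id = c.id
        WHERE rc.region_id = ?
        UNION
        SELECT c.*
        FROM categories c
        JOIN users_categories uc ON uc.category_id = c.id
        WHERE uc.user_id = ?;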

    Read the article
