Search Results

Search found 36186 results on 1448 pages for 'sql 11'.


  • Selecting distinct values from mysql with largest timestamp

    - by user987048
    I am building a mail system. The inbox is only supposed to grab the last message (the one with the highest time value) for each combination of user and sender, where the user or sender is the user ID. Here is the table structure:

        CREATE TABLE IF NOT EXISTS `mail` (
          `user` int(11) NOT NULL,
          `sender` int(11) NOT NULL,
          `body` text NOT NULL,
          `new` enum('0','1') NOT NULL default '1',
          `time` int(11) NOT NULL,
          KEY `user` (`user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    So, with a table with the following data:

        user  sender  new  time
        1     0       0    5
        1     0       0    6
        2     1       0    7
        1     0       1    8
        1     2       0    9
        1     0       1    11
        1     2       1    12

    I want to select the following, WHERE USER OR SENDER = X (in this case, 1):

        user  sender  new  time
        2     1       0    7
        1     2       0    9
        1     0       1    11

    How would I go about doing something like this?
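    One common way to get the newest row per group is to join against a grouped sub-query (a sketch only, using the table above; here a conversation is grouped by the ordered (user, sender) pair — if (1,2) and (2,1) should count as the same thread, group on LEAST(`user`, `sender`) and GREATEST(`user`, `sender`) instead):

        SELECT m.`user`, m.`sender`, m.`new`, m.`time`
        FROM mail AS m
        JOIN (
            -- newest message per (user, sender) pair involving user 1 (X = 1 in this example)
            SELECT `user`, `sender`, MAX(`time`) AS max_time
            FROM mail
            WHERE `user` = 1 OR `sender` = 1
            GROUP BY `user`, `sender`
        ) AS latest
          ON  latest.`user`    = m.`user`
          AND latest.`sender`  = m.`sender`
          AND latest.max_time  = m.`time`;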

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!
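    If you want numbers beyond the plan-cost percentages, a simple way to compare the variants in your own environment is to capture actual I/O and CPU for each one (a sketch; wrap each of the queries above between the SET statements — shown here with the sub-query version from the article):

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT  a.[Member Number], m.lastname, m.firstname,
                a.[Number Of Payments], a.[Average Payment], a.[Total Paid]
        FROM    ( SELECT member_no        AS 'Member Number',
                         AVG(payment_amt) AS 'Average Payment',
                         SUM(payment_amt) AS 'Total Paid',
                         COUNT(Payment_No) AS 'Number Of Payments'
                  FROM   dbo.payment
                  GROUP BY member_no
                  HAVING COUNT(Payment_No) > 1 ) a
                JOIN dbo.member m ON a.[Member Number] = m.member_no;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

    On the 1-row estimate for table variables: adding OPTION (RECOMPILE) to the SELECT that reads from @AggregateTable is one commonly used workaround that lets the optimizer see the variable's actual row count at execution time.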

    Read the article

  • Upgrading SSIS Custom Components for SQL Server 2012

    Having finally got around to upgrading my custom components to SQL Server 2012, I thought I’d share some notes on the process. One of the goals was minimal duplication, so the same code files are used to build the 2008 and 2012 components, I just have a separate project file. The high level steps are listed below, followed by some more details. Create a 2012 copy of the project file Upgrade project, just open the new project file is VS2010 Change target framework to .NET 4.0 Set conditional compilation symbol for DENALI Change any conditional code, including assembly version and UI type name Edit project file to change referenced assemblies for 2012 Change target framework to .NET 4.0 Open the project properties. On the Applications page, change the Target framework to .NET Framework 4. Set conditional compilation symbol for DENALI Re-open the project properties. On the Build tab, first change the Configuration to All Configurations, then set a Conditional compilation symbol of DENALI. Change any conditional code, including assembly version and UI type name The value doesn’t have to be DENALI, it can actually be anything you like, that is just what I use. It is how I control sections of code that vary between versions. There were several API changes between 2005 and 2008, as well as interface name changes. Whilst we don’t have the same issues between 2008 and 2012, I still have some sections of code that do change such as the assembly attributes. #if DENALI [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2012")] [assembly: AssemblyCopyright("Copyright © 2012 Konesans Ltd")] [assembly: AssemblyVersion("3.0.0.0")] #else [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2008")] [assembly: AssemblyCopyright("Copyright © 2008 Konesans Ltd")] [assembly: AssemblyVersion("2.0.0.0")] #endif The Visual Studio editor automatically formats the code based on the current compilation symbols, hence in this case the 2008 code is grey to indicate it is disabled. As you can see in the previous example I have distinct assembly version attributes, ensuring I can run both 2008 and 2012 versions of my component side by side. For custom components with a user interface, be sure to update the UITypeName property of the DtsTask or DtsPipelineComponent attributes. As above I use the conditional compilation symbol to control the code. #if DENALI [DtsTask ( DisplayName = "File Watcher Task", Description = "File Watcher Task", IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico", UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=3.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b", TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2012 Konesans Ltd; http://www.konesans.com" )] #else [DtsTask ( DisplayName = "File Watcher Task", Description = "File Watcher Task", IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico", UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=2.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b", TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2004-2008 Konesans Ltd; http://www.konesans.com" )] #endif public sealed class FileWatcherTask: Task, IDTSComponentPersist, IDTSBreakpointSite, IDTSSuspend { // .. code goes on... } Shown below is another example I found that needed changing. 
I borrow one of the MS editors, and use it against a custom property, but need to ensure I reference the correct version of the MS controls assembly. This section of code is actually shared between the 2005, 2008 and 2012 versions of my component hence it has test for both DENALI and KATMAI symbols. #if DENALI const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=11.0.00.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; #elif KATMAI const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; #else const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; #endif // Create Match Expression parameter IDTSCustomPropertyCollection100 propertyCollection = outputColumn.CustomPropertyCollection; IDTSCustomProperty100 property = propertyCollection.New(); property = propertyCollection.New(); property.Name = MatchParams.Name; property.Description = MatchParams.Description; property.TypeConverter = typeof(MultilineStringConverter).AssemblyQualifiedName; property.UITypeEditor = multiLineUI; property.Value = MatchParams.DefaultValue; Edit project file to change referenced assemblies for 2012 We now need to edit the project file itself. Open the MyComponente2012.cproj  in you favourite text editor, and then perform a couple of find and replaces as listed below: Find Replace Comment Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Change the assembly references version from SQL Server 2008 to SQL Server 2012. Microsoft SQL Server\100\ Microsoft SQL Server\110\ Change any assembly reference hint path locations from from SQL Server 2008 to SQL Server 2012. If you use any Build Events during development, such as copying the component assembly to the DTS folder, or calling GACUTIL to install it into the GAC, you can also change these now. An example of my new post-build event for a pipeline component is shown below, which uses the .NET 4.0 path for GACUTIL. It also uses the 110 folder location, instead of 100 for SQL Server 2008, but that was covered the the previous find and replace. "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe" /if "$(TargetPath)" copy "$(TargetPath)" "%ProgramFiles%\Microsoft SQL Server\110\DTS\PipelineComponents" /Y

    Read the article

  • Microsoft ReportViewer SetParameters continuous refresh issue

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2013/10/16/microsoft-reportviewer-setparameters-continuous-refresh-issue.aspxI am a big fun of using ASP.NET MVC for building web-applications. It allows us to create simple, robust and testable solutions. However, .NET world is not perfect. There is tons of code written in ASP.NET web-forms. You cannot simply ignore it, even if you want to. Sometimes ASP.NET web-forms controls bring us non-obvious issues. The good example is Microsoft ReportViewer control. I have an example for you. 1: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %> 2: <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %> 3:   4: <!DOCTYPE html> 5:   6: <html xmlns="http://www.w3.org/1999/xhtml"> 7: <head runat="server"> 8: <title>Report Viewer Continiuse Resfresh Issue Example</title> 9: </head> 10: <body> 11: <form id="form1" runat="server"> 12: <div> 13: <asp:ScriptManager runat="server"></asp:ScriptManager> 14: <rsweb:ReportViewer ID="_reportViewer" runat="server" Width="100%" Height="100%"></rsweb:ReportViewer> 15: </div> 16: </form> 17: </body> 18: </html>   The back-end code is simple as well. I want to show a report with some parameters to a user. 1: protected void Page_Load(object sender, EventArgs e) 2: { 3: _reportViewer.ProcessingMode = ProcessingMode.Remote; 4: _reportViewer.ShowParameterPrompts = false; 5:   6: var serverReport = _reportViewer.ServerReport; 7: serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS"); 8: serverReport.ReportPath = "/Reports/TestReport"; 9:   10: var reportParameter1 = new ReportParameter("Parameter1"); 11: reportParameter1.Values.Add("Hello World!"); 12:   13: var reportParameter2 = new ReportParameter("Parameter2"); 14: reportParameter2.Values.Add("10/16/2013"); 15:   16: var reportParameter3 = new ReportParameter("Parameter3"); 17: reportParameter3.Values.Add("10"); 18:   19: serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 }); 20: }   I set ShowParametersPrompts to false because I do not want user to refine the search. It looks good until you run the report. The report will refresh itself all the time. The problem caused by ServerReport.SetParameters method in Page_Load. The method cause ReportViewer control to execute the report on the NEXT post back. That is why the page has continuous post-backs. The fix is very simple: do nothing if Page_Load method executed during post-back. 
1: protected void Page_Load(object sender, EventArgs e) 2: { 3: if (IsPostBack) 4: { 5: return; 6: } 7:   8: _reportViewer.ProcessingMode = ProcessingMode.Remote; 9: _reportViewer.ShowParameterPrompts = false; 10:   11: var serverReport = _reportViewer.ServerReport; 12: serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS"); 13: serverReport.ReportPath = "/Reports/TestReport"; 14:   15: var reportParameter1 = new ReportParameter("Parameter1"); 16: reportParameter1.Values.Add("Hello World!"); 17:   18: var reportParameter2 = new ReportParameter("Parameter2"); 19: reportParameter2.Values.Add("10/16/2013"); 20:   21: var reportParameter3 = new ReportParameter("Parameter3"); 22: reportParameter3.Values.Add("10"); 23:   24: serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 }); 25: } You can download sample code from GitHub - https://github.com/ilich/Examples/tree/master/ReportViewerContinuousRefresh

    Read the article

  • initrd.lz is corrupted error occurred while installing 11.10

    - by zubendra
    C:\ubuntu\install\boot\initrd.lz is corrupted. Error pop-up comes up every time i am trying to install ubuntu-11.10-desktop-i386 using wubi. error comes when the installation process is almost completed. can anyone suggest a solution for this problem. Its occurring regularly. 03-19 18:01 DEBUG TaskList: ## Running copy_installation_files... 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\data\custom-installation -> C:\ubuntu\install\custom-installation 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\winboot -> C:\ubuntu\winboot 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\data\images\Ubuntu.ico -> C:\ubuntu\Ubuntu.ico 03-19 18:01 DEBUG TaskList: ## Finished copy_installation_files 03-19 18:01 DEBUG TaskList: ## Running get_iso... 03-19 18:01 DEBUG CommonBackend: Trying to use pre-specified ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG TaskList: New task is_valid_iso 03-19 18:01 DEBUG TaskList: ### Running is_valid_iso... 03-19 18:01 DEBUG Distro: checking Ubuntu ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 INFO Distro: Found a valid iso for Ubuntu: X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG TaskList: ### Finished is_valid_iso 03-19 18:01 DEBUG TaskList: New task check_iso 03-19 18:01 DEBUG TaskList: ### Running check_iso... 03-19 18:01 DEBUG CommonBackend: Checking X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG Distro: checking Ubuntu ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 INFO Distro: Found a valid iso for Ubuntu: X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG CommonBackend: Using distro Ubuntu i386 instead of Ubuntu amd64 03-19 18:01 DEBUG TaskList: New task get_metalink 03-19 18:01 DEBUG TaskList: #### Running get_metalink... 03-19 18:01 DEBUG downloader: downloading http://releases.ubuntu.com/11.10/ubuntu-11.10-desktop-i386.metalink > C:\ubuntu\install 03-19 18:01 ERROR CommonBackend: Cannot download metalink file http://releases.ubuntu.com/11.10/ubuntu-11.10-desktop-i386.metalink err=[Errno 4] IOError: <urlopen error (7, 'getaddrinfo failed')> 03-19 18:01 DEBUG downloader: downloading http://cdimage.ubuntu.com/daily-live/current/oneiric-desktop-i386.metalink > C:\ubuntu\install 03-19 18:01 ERROR CommonBackend: Cannot download metalink file2 http://cdimage.ubuntu.com/daily-live/current/oneiric-desktop-i386.metalink err=[Errno 4] IOError: <urlopen error (7, 'getaddrinfo failed')> 03-19 18:01 DEBUG TaskList: #### Finished get_metalink 03-19 18:01 ERROR CommonBackend: ERROR: the metalink file is not available, cannot check the md5 for X:\ubuntu-11.10-desktop-i386.iso, ignoring 03-19 18:01 DEBUG TaskList: ### Finished check_iso 03-19 18:01 DEBUG TaskList: New task copy_file 03-19 18:01 DEBUG CommonBackend: Copying X:\ubuntu-11.10-desktop-i386.iso > C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG TaskList: ### Running copy_file... 03-19 18:01 DEBUG TaskList: ### Finished copy_file 03-19 18:01 DEBUG TaskList: ## Finished get_iso 03-19 18:01 DEBUG TaskList: ## Running extract_kernel... 
03-19 18:01 DEBUG CommonBackend: Extracting files from ISO C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting md5sum.txt from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting casper\vmlinuz from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting casper\initrd.lz from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG CommonBackend: Checking kernel, initrd and md5sums 03-19 18:01 DEBUG CommonBackend: checking C:\ubuntu\install\boot\vmlinuz 03-19 18:01 DEBUG CommonBackend: C:\ubuntu\install\boot\vmlinuz md5 = fde150f5c6fd2de66ed7876efbfcc4c7 == fde150f5c6fd2de66ed7876efbfcc4c7 03-19 18:01 DEBUG CommonBackend: checking C:\ubuntu\install\boot\initrd.lz 03-19 18:01 DEBUG CommonBackend: C:\ubuntu\install\boot\initrd.lz md5 = 8900200c764438c1b124dff5ae92c763 != d6baee1e11f1d6de6eba6bd43dbde352 03-19 18:01 ERROR TaskList: File C:\ubuntu\install\boot\initrd.lz is corrupted Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 623, in extract_kernel Exception: File C:\ubuntu\install\boot\initrd.lz is corrupted 03-19 18:01 DEBUG TaskList: # Cancelling tasklist 03-19 18:01 ERROR root: File C:\ubuntu\install\boot\initrd.lz is corrupted Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 623, in extract_kernel Exception: File C:\ubuntu\install\boot\initrd.lz is corrupted 03-19 18:01 DEBUG TaskList: # Finished tasklist

    Read the article

  • SQL -- How to combine three SELECT statements with very tricky requirements

    - by Frederick
    I have a SQL query with three SELECT statements. A picture of the data tables generated by these three select statements is located at www.britestudent.com/pub/1.png. Each of the three data tables have identical columns. I want to combine these three tables into one table such that: (1) All rows in top table (Table1) are always included. (2) Rows in the middle table (Table2) are included only when the values in column1 (UserName) and column4 (CourseName) do not match with any row from Table1. Both columns need to match for the row in Table2 to not be included. (3) Rows in the bottom table (Table3) are included only when the value in column4 (CourseName) is not already in any row of the results from combining Table1 and Table2. I have had success in implementing (1) and (2) with an SQL query like this: SELECT DISTINCT UserName AS UserName, MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse FROM ( "SELECT statement 1" UNION "SELECT statement 2" ) dt_derivedTable_1 GROUP BY CourseName, UserName Where "SELECT statement 1" is the query that generates Table1 and "SELECT statement 2" is the query that generates Table2. A picture of the data table generated by this query is located at www.britestudent.com/pub/2.png. I can get away with using the MAX() function because values in the AmountUsed and AnsweredCorrectly columns in Table1 will always be larger than those in Table2 (and they are identical in the last three columns of both tables). What I fail at is implementing (3). Any suggestions on how to do this will be appreciated. It is tricky because the UserName values in Table3 are null, and because the CourseName values in the combined Table1 and Table2 results are not unique (but they are unique in Table3). After implementing (3), the final table should look like the table in picture 2.png with the addition of the last row from Table3 (the row with the CourseName value starting with "4. Klasse..." I have tried to implement (3) using another derived table using SELECT, MAX() and UNION, but I could not get it to work. Below is my full SQL query with the lines from this failed attempt to implement (3) commented out. Cheers, Frederick PS--I am new to this forum (and new to SQL as well), but I have had more of my previous problems answered by reading other people's posts on this forum than from reading any other forum or Web site. This forum is a great resources. -- SELECT DISTINCT MAX(UserName), MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse -- FROM ( SELECT DISTINCT UserName AS UserName, MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse FROM ( -- Table 1 - All UserAccount/Course combinations that have had quizzez. 
SELECT DISTINCT dbo.win_user.user_name AS UserName, cast(dbo.GetAmountUsed(dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.no_of_questionsets_in_course) as nvarchar(10)) AS AmountUsed, Isnull(cast(dbo.GetAnswerCorrectly(dbo.session_header.win_user_id, dbo.course.course_id, dbo.question_set.no_of_questions) as nvarchar(10)),0) AS AnsweredCorrectly, dbo.course.course_name AS CourseName, dbo.course.course_code, dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse FROM dbo.session_detail INNER JOIN dbo.session_header ON dbo.session_detail.session_header_id = dbo.session_header.session_header_id INNER JOIN dbo.win_user ON dbo.session_header.win_user_id = dbo.win_user.win_user_id INNER JOIN dbo.win_user_course ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id INNER JOIN dbo.question_set ON dbo.session_header.question_set_id = dbo.question_set.question_set_id RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id WHERE (dbo.session_detail.no_of_attempts = 1 OR dbo.session_detail.no_of_attempts IS NULL) AND (dbo.session_detail.is_correct = 1 OR dbo.session_detail.is_correct IS NULL) AND (dbo.win_user_course.is_active = 'True') GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.question_set.no_of_questions, dbo.course.no_of_questions_in_course, dbo.course.no_of_questionsets_in_course, dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.course_code UNION ALL -- Table 2 - All UserAccount/Course combinations that do or do not have quizzes but where the Course is selected for quizzes for that User Account. SELECT dbo.win_user.user_name AS UserName, -1 AS AmountUsed, -1 AS AnsweredCorrectly, dbo.course.course_name AS CourseName, dbo.course.course_code, dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse FROM dbo.win_user_course INNER JOIN dbo.win_user ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id WHERE (dbo.win_user_course.is_active = 'True') GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.course.no_of_questions_in_course, dbo.course.no_of_questionsets_in_course, dbo.course.course_id, dbo.course.course_code ) dt_derivedTable_1 GROUP BY CourseName, UserName -- UNION ALL -- Table 3 - All Courses. -- SELECT DISTINCT null AS UserName, -- -2 AS AmountUsed, -- -2 AS AnsweredCorrectly, -- dbo.course.course_name AS CourseName, -- dbo.course.course_code, -- dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, -- dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse -- FROM dbo.course -- WHERE is_active = 'True' -- ) dt_derivedTable_2 -- GROUP BY CourseName -- ORDER BY CourseName
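    One possible shape for requirement (3) — a structural sketch only, not runnable as-is: the CTE body stands in for the working Table1/Table2 query already shown above (the placeholder SELECT is purely illustrative). The idea is to name the combined result once so it can be filtered against with NOT EXISTS on CourseName:

        ;WITH Combined AS
        (
            -- the working Table1 + Table2 query from above goes here unchanged, i.e.
            -- SELECT DISTINCT UserName, MAX(AmountUsed) AS AmountUsed, ... GROUP BY CourseName, UserName
            SELECT * FROM dbo.CombinedResults   -- placeholder for illustration only
        )
        SELECT  UserName, AmountUsed, AnsweredCorrectly, CourseName,
                course_code, NoOfQuestionsInCourse, NoOfQuestionSetsInCourse
        FROM    Combined
        UNION ALL
        SELECT  NULL AS UserName,
                -2   AS AmountUsed,
                -2   AS AnsweredCorrectly,
                c.course_name,
                c.course_code,
                c.no_of_questions_in_course,
                c.no_of_questionsets_in_course
        FROM    dbo.course AS c
        WHERE   c.is_active = 'True'
                AND NOT EXISTS ( SELECT 1
                                 FROM   Combined AS x
                                 WHERE  x.CourseName = c.course_name );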

    Read the article

  • sql performance on new server

    - by Rapunzo
    My database is running on a pc (AMD Phenom x6, intel ssd disk, 8GB DDR3 RAM and windows 7 OS + sql server 2008 R2 sp3 ) and it started working hard, timeout problems and up to 30 seconds long queries after 200 mb of database And I also have an old server pc (IBM x-series 266: 72*3 15k rpm scsi discs with raid5, 4 gb ram and windows server 2003 + sql server 2008 R2 sp3 ) and same query start to give results in 100 seconds.. I tried query analyser tool for tuning my indexed. but not so much improvements. its a big dissapointment for me. because I thought even its an old server pc it should be more powerfull with 15k rpm discs with raid5. what should I do. do I need $10.000 new server to get a good performance for my sql server? cant I use that IBM server? Extra information: there is 50 sql users and its an ERP program. There is my query ALTER FUNCTION [dbo].[fnDispoTerbiye] ( ) RETURNS TABLE AS RETURN ( SELECT MD.dispoNo, SV.sevkNo, M1.musteriAdi AS musteri, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SUM(T.topMetre) AS toplamSevkMetre, MD.dispoMetresi, DT.gelisMetresi, ISNULL(DT.fire, 0) AS fire, SV.sevkTarihi, DT.gelisTarihi, SP.mamulTermin, SD.miktar AS siparisMiktari, M.musteriAdi AS boyahane, MD.akisNotu AS islemler, --dbo.fnAkisIslemleri(MD.dispoNo) DT.partiNo, DT.iplikBoyaId, B.tanimAd AS BoyaTuru, MAX(HD.hamEn) AS hamEn, MAX(HD.hamGramaj) AS hamGramaj, TS.mamulEn, TS.mamulGramaj, DT.atkiCekmesi, DT.cozguCekmesi, DT.fiyat, DV.dovizCins, DT.dovizId, (SELECT CASE WHEN DT.dovizId = 2 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 2 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 3 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 3 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 1 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 1 ORDER BY tarih DESC), 2) AS numeric(18, 2)) END AS Expr1) AS ToplamTLfiyat, DT.aciklama, MD.dispoNotu, SD.siparisId, SD.siparisDetayId, DT.sqlUserName, DT.kayitTarihi, O.orguAd, 'Çözgü=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 1) AS Expr1) + ')' + ' Atki=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 2) AS Expr1) + ')' AS iplikAciklama, DT.prosesOk, dbo.[fnYikamaTalimat](SP.siparisId) yikamaTalimati FROM tblDoviz AS DV WITH(NOLOCK) INNER JOIN tblDispoTerbiye AS DT WITH(NOLOCK) INNER JOIN tblTanimlar AS B WITH(NOLOCK) ON DT.iplikBoyaId = B.tanimId AND B.tanimTurId = 2 ON DV.id = DT.dovizId RIGHT OUTER JOIN tblMusteri AS M1 WITH(NOLOCK) INNER JOIN tblSiparisDetay AS SD WITH(NOLOCK) INNER JOIN tblDispo AS MD WITH(NOLOCK) ON SD.siparisDetayId = MD.siparisDetayId INNER JOIN tblTipTur AS TT WITH(NOLOCK) ON SD.tipTurId = TT.tipTurId INNER JOIN tblSiparis AS SP WITH(NOLOCK) ON SD.siparisId = SP.siparisId ON M1.musteriNo = SP.musteriNo INNER JOIN tblTip AS TP WITH(NOLOCK) ON SD.tipTurId = TP.tipTurId AND SD.tipNo = TP.tipNo AND SD.desenNo = TP.desen AND SD.varyantNo = TP.varyant INNER JOIN tblOrgu AS O WITH(NOLOCK) ON TP.orguId = O.orguId INNER JOIN tblMusteri AS M WITH(NOLOCK) INNER JOIN tblSevkiyat AS SV WITH(NOLOCK) ON M.musteriNo = SV.musteriNo INNER JOIN tblSevkDetay AS SVD WITH(NOLOCK) ON SV.sevkNo = SVD.sevkNo ON MD.mamulDispoHamSevkno = SV.sevkNo LEFT OUTER JOIN tblTop AS T WITH(NOLOCK) INNER JOIN tblDispo AS HD WITH(NOLOCK) ON T.dispoNo = HD.dispoNo AND T.dispoTuruId = HD.dispoTuruId ON SVD.dispoTuruId = 
T.dispoTuruId AND SVD.dispoNo = T.dispoNo AND SVD.topNo = T.topNo AND MD.siparisDetayId = HD.siparisDetayId ON DT.dispoTuruId = MD.dispoTuruId AND DT.dispoNo = MD.dispoNo LEFT OUTER JOIN tblDispoTerbiyeTest AS TS WITH(NOLOCK) ON DT.dispoTuruId = TS.dispoTuruId AND DT.dispoNo = TS.dispoNo --WHERE DT.gelisTarihi IS NULL -- OR DT.gelisTarihi > GETDATE()-30 GROUP BY MD.dispoNo, DT.partiNo, DT.iplikBoyaId, TS.mamulEn, TS.mamulGramaj, DT.gelisMetresi, DT.gelisTarihi, DT.atkiCekmesi, DT.cozguCekmesi, DT.fire, DT.fiyat, DT.aciklama, DT.sqlUserName, DT.kayitTarihi, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SD.siparisId, SD.siparisDetayId, B.tanimAd, M.musteriAdi, M.musteriAdi, M1.musteriAdi, O.orguAd, TP.iplikAciklama, SD.miktar, MD.dispoNotu, SP.mamulTermin, DT.dovizId, DV.dovizCins, MD.dispoMetresi, MD.akisNotu, SV.sevkNo, SV.sevkTarihi, DT.prosesOk,SP.siparisId )
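    Before deciding that new hardware is the answer, it is usually worth asking the optimizer what it thinks it is missing. A starting point (a sketch using the standard missing-index DMVs; they only report suggestions, not guarantees, so test any index before creating it):

        SELECT TOP (20)
               mid.statement          AS table_name,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns,
               migs.user_seeks,
               migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS rough_benefit
        FROM   sys.dm_db_missing_index_details      AS mid
        JOIN   sys.dm_db_missing_index_groups       AS mig  ON mig.index_handle = mid.index_handle
        JOIN   sys.dm_db_missing_index_group_stats  AS migs ON migs.group_handle = mig.index_group_handle
        WHERE  mid.database_id = DB_ID()
        ORDER BY rough_benefit DESC;

    The scalar functions called once per row (dbo.GetAmountUsed, dbo.GetAnswerCorrectly, dbo.fnTipIplikler) and the correlated tblKur sub-queries in the SELECT list are also worth timing separately, since they execute row by row.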

    Read the article

  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machine to host it. When we create a new branch, the build script will create a new database with the ID of that branch, create the schema objects, and copy over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode. The database is created in simple recovery mode. The file names aren't specified, so it uses default paths for files. The size of the database after loading data is ~400 megs. It is running in SQL Server 2005 compatibility mode. The command that creates the database is:

        sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters. I got a nice clean log file this morning. When I turned on my computer it starts all five databases. However, two of them show transactions being rolled forward and backwards. Then it just keeps trying to start up all five of the databases.

        2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
        2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
        2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
        2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
        2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
        2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
        2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I actually am able to use the databases much of the time.
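    The repeated "Starting up database" messages look like what AUTO_CLOSE produces; databases created on Express editions often have it enabled by default, which forces crash recovery each time the database is reopened. A sketch for checking and, if needed, changing it (shown here for one of the branch databases from the log — substitute whatever $(DBName) resolves to):

        -- Check state, recovery model and AUTO_CLOSE for every database
        SELECT name, state_desc, recovery_model_desc, is_auto_close_on
        FROM sys.databases;

        -- If AUTO_CLOSE is on, turning it off keeps the database open between uses
        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;
        ALTER DATABASE [DLG-R8441] SET RECOVERY SIMPLE;

    The build script could issue the two ALTER DATABASE statements right after CREATE DATABASE so each branch database starts with the intended settings.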

    Read the article

  • vb.net : is it possible to connect to sql server 2008 via odbc but not through vb.net code?

    - by phill
    I'm supporting an old VB.NET program whose database was moved from SQL Server 2005 to SQL Server 2008. Is there a setting on SQL Server 2008 which will allow ODBC connections to access the database but not allow VB.NET to connect to it programmatically? The error I keep receiving in the app is: "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server)". However, I can connect to it when I create a system DSN to the SQL Server instance and through VS2005's Tools > Connect to Database. Here is the code I'm using to connect:

        Dim strC As String
        strC = "data source=bob; database=subscribers; user id=bobuser; password=passme"
        Dim connection As New SqlClient.SqlConnection(strC)
        Try
            connection.Open()
        Catch ex As Exception
            MsgBox(ex.Message)
        End Try
        connection.Close()
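    There is no obvious server-side setting that allows ODBC but blocks SqlClient; more often the two paths are simply resolving the server differently (for example, the DSN pointing at a named instance while the connection string uses just the host name, or the two using different protocols). A small check you can run from a connection that does work, to see which transport and address it actually used (a sketch using standard DMVs):

        SELECT  c.session_id,
                c.net_transport,      -- TCP, Named pipe, Shared memory...
                c.auth_scheme,
                c.local_net_address,
                c.local_tcp_port
        FROM    sys.dm_exec_connections AS c
        WHERE   c.session_id = @@SPID;

    Comparing that output from the working DSN session against what the failing app is configured to use usually points at the mismatch.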

    Read the article

  • SQL Server 2005 to 2008 Bak file help please!

    - by Brandon
    I have a SQL Server 2005 database backup that I want to transfer to SQL Server 2008 on my server. I spent 3 days transferring the .bak file from my own machine to my server. I then tried to restore the .bak file and got an error. I then read online about a completely different method for adding a SQL Server 2005 database to SQL Server 2008: the detach-and-attach method, which means I need to detach the database in SQL Server 2005, transfer the MDF file via FTP to my server, and then attach it in SQL Server 2008. Well, I already used a lot of bandwidth transferring the .bak file to my server. Is there a way to convert my .bak file, which is already on my server, to an MDF file and attach it in SQL Server 2008?
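    For what it's worth, a 2005 .bak can be restored directly onto a 2008 instance; the database is upgraded during the restore, so there is no need to convert it to an MDF first. Restore errors are often just a path problem, because the backup remembers the original file locations. A sketch (file names and paths below are placeholders — get the real logical names from FILELISTONLY first):

        -- 1. Find the logical file names stored in the backup
        RESTORE FILELISTONLY
        FROM DISK = N'C:\Backups\MyDb.bak';

        -- 2. Restore, moving the files to the 2008 server's data folder
        --    ('MyDb' / 'MyDb_log' and the target paths are placeholders)
        RESTORE DATABASE MyDb
        FROM DISK = N'C:\Backups\MyDb.bak'
        WITH MOVE 'MyDb'     TO N'C:\SQLData\MyDb.mdf',
             MOVE 'MyDb_log' TO N'C:\SQLData\MyDb_log.ldf',
             REPLACE;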

    Read the article

  • SQLAuthority News SQL Server 2008 R2 Update for Developers Training Kit (March 2010 Update)

    SQL Server 2008 R2 offers an impressive array of capabilities for developers that build upon key innovations introduced in SQL Server 2008. The SQL Server 2008 R2 Update for Developers Training Kit is ideal for developers who want to understand how to take advantage of the key improvements introduced in SQL [...]

    Read the article

  • Command line mode only -- successful login only brings me back to login screen

    - by seth
    whenever I log in the screen goes black, I see a glimpse of terminal-esque text, and then it brings me back to the log in screen (Ubuntu 12.04). I can enter and log in via the command line. The guest account works find. I think this happened because I edited some Xorg related file trying to make an external monitor work with my laptop. I copy pasted from a forum post so I dont recall the file or what i put in the file. Can't find the forum post again and my bash history wasn't recorded from that session. I tried reinstalling Xorg and ubuntu-desktop, nvidia, resetting any configs I could find... I'm really at a loss of what to do. Here's my /.xsession-errors: /usr/sbin/lightdm-session: 11: /home/seth/.profile: -s: not found Backend : gconf Integration : true Profile : unity Adding plugins Initializing core options...done Initializing composite options...done Initializing opengl options...done Initializing decor options...done Initializing vpswitch options...done Initializing snap options...done Initializing mousepoll options...done Initializing resize options...done Initializing place options...done Initializing move options...done Initializing wall options...done Initializing grid options...done I/O warning : failed to load external entity "/home/seth/.compiz/session/108fa6ea48f8a973b9133850948930576700000017740033" Initializing session options...done Initializing gnomecompat options...done ** Message: applet now removed from the notification area Initializing animation options...done Initializing fade options...done Initializing unitymtgrabhandles options...done Initializing workarounds options...done Initializing scale options...done compiz (expo) - Warn: failed to bind image to texture Initializing expo options...done Initializing ezoom options...done ** Message: using fallback from indicator to GtkStatusIcon (compiz:1846): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed Initializing unityshell options...done Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory Please ask your system administrator to enable user sharing. Setting Update "main_menu_key" Setting Update "run_key" Setting Update "launcher_hide_mode" Setting Update "edge_responsiveness" Setting Update "launcher_capture_mouse" ** Message: moving back from GtkStatusIcon to indicator compiz (decor) - Warn: failed to bind pixmap to texture ** (zeitgeist-datahub:2128): WARNING **: zeitgeist-datahub.vala:227: Unable to get name "org.gnome.zeitgeist.datahub" on the bus! failed to create drawable compiz (core) - Warn: glXCreatePixmap failed compiz (core) - Warn: Couldn't bind background pixmap 0x1e00001 to texture compiz (decor) - Warn: failed to bind pixmap to texture ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. compiz (decor) - Warn: failed to bind pixmap to texture compiz (decor) - Warn: failed to bind pixmap to texture ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. 
** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. [2348:2352:12678840568:ERROR:gpu_watchdog_thread.cc(231)] The GPU process hung. Terminating after 10000 ms. [2256:2283:14450711755:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14450726175:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14450746028:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14464521342:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14464541249:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14690775186:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14690795231:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14704543843:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14704566717:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14766138587:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14857232694:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14930901403:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14930965542:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14944566814:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:14944592215:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15170929788:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15170947382:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15184585015:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15184605475:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error 
code -5938, net_error -107 [2256:2283:15366189036:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15410983381:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15411569689:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15431632431:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15431674438:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15457304356:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15656020938:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15656042383:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15674651268:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:15674671786:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16052544301:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16057387653:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157122849:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157123851:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157125473:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157126544:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [2256:2283:16157129682:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 If anyone can help me out, I'd be forever grateful

    Read the article

  • Oracle Open World 2012 preview: mark your calendars

    - by Eric Bezille
    Now less than a month away from Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without giving anything away, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development) to get a first taste.

    Oracle strategy and roadmaps

    Of course, beyond the plenary sessions, which will give you a precise view of the strategy, and for those of you who will be on site, I urge you not to miss the deep-dive sessions taking place during the week. Here is a selection:

    - "Accelerate your Business with the Oracle Hardware Advantage" with John Fowler, Monday October 1, 3:15pm-4:15pm
    - "Why Oracle Softwares Runs Best on Oracle Hardware", with Bradley Carlile, head of Benchmarks, Monday October 1, 12:15pm-1:15pm
    - "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday October 1, 1:45pm-2:45pm
    - "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, manager in the hardware and software integration teams, Monday October 1, 4:45pm-5:45pm
    - "Oracle’s SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday October 2, 10:15am-11:15am
    - "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flier, head of Solaris development, Wednesday October 3, 10:15am-11:15am
    - "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday October 1, 12:15pm-1:15pm
    - "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday October 1, 3:15pm-4:15pm
    - "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday October 1, 10:45am-11:45am

    Customer experience and testimonials

    While Oracle Open World is a chance to talk directly with Oracle's development teams, it is also a chance to meet customers and experts who have put our technologies to work and to benefit from their experience, for example:

    - "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the business as well as the project and IT benefits, Wednesday October 3, 1:15pm-2:15pm
    - "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday October 2, 11:45am-12:45pm
    - "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday October 3, 11:45am-12:45pm
    - "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday October 1, 1:45pm-2:45pm
    - "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday October 1, 4:45pm-5:45pm
    - "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday October 2, 1:15pm-2:15pm
    - "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from the CTO of CSC, Alan Powers, Thursday October 4, 12:45pm-1:45pm
    - "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, US Cellular, Tuesday October 2, 10:15am-11:15am
    - "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday October 2, 11:45am-12:45pm
    - "Data Warehouse and Big Data Customers’ View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance and Allianz Managed Operations and Services SE, Monday October 1, 4:45pm-5:45pm
    - "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story of Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday October 2, 1:15pm-2:15pm

    Meet the user groups and the Oracle development teams

    If you plan to arrive early enough, you can also meet the user groups starting on Sunday, or the Oracle development teams every evening, on topics such as:

    - "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday September 30, 2:30pm-3:30pm
    - "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, of Rittman Mead Consulting Ltd, Sunday September 30, 10:30am-11:30am
    - "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday September 30, 1:15pm-2:00pm
    - "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday September 30, 9:00am-10:00am
    - "Oracle’s Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday October 1, 6:15pm-7:00pm

    Test and evaluate the solutions

    Finally, you can even try the technologies yourself at the Oracle DemoGrounds (1133 Moscone South for the Oracle Systems, OS and Virtualization area) and in the Hands-on Labs, such as:

    - "Deploying an IaaS Environment with Oracle VM", Tuesday October 2, 10:15am-11:15am
    - "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday October 2, 11:45am-12:45pm (it is strongly recommended to have taken the previous Hands-on Lab before doing this one)
    - "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday October 3, 5:00pm-6:00pm
    - "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday October 3, 1:15pm-2:15pm
    - "Oracle’s Pillar Axiom 600 Storage System: Power and Ease", Monday October 1, 12:15pm-1:15pm
    - "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday October 1, 1:45pm-2:45pm
    - "Managing Storage in the Cloud", Tuesday October 2, 5:00pm-6:00pm
    - "Learn How to Write MapReduce on Oracle’s Big Data Platform", Monday October 1, 12:15pm-1:15pm
    - "Oracle Big Data Analytics and R", Tuesday October 2, 1:15pm-2:15pm
    - "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday October 1, 10:45am-11:45am
    - "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday October 1, 4:45pm-5:45pm
    - "Virtualizing Your Oracle Solaris 11 Environment", Tuesday October 2, 1:15pm-2:15pm
    - "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday October 3, 3:30pm-4:30pm

    In short, a very full week ahead, covering all the topics at the heart of your concerns, from strategy to implementation... It is a week that needs preparing, so you can tailor your agenda from the more than 2,000 sessions of which I have only given you an extract here, all of which you can find online.

    Read the article

  • Slides and Links from SQL Azure session at BizSpark Azure Day in London

    - by Eric Nelson
    A big thanks to all who attended my two sessions on SQL Azure yesterday (29th March 2010). As promised, my slides and links from the sessions. SQL Azure Overview for BizSpark day. Related Links: UK Azure Online Community – join today. UK Windows Azure Site. Start working with Windows Azure. SQL Azure maximum database size rises from 10GB to 50GB in June. TCO and ROI calculator for Windows Azure. SQL Azure Migration Wizard.

    Read the article

  • SQL Azure news at TechEd 2010

    - by guybarrette
    More Azure news from TechEd US 2010. This time, it's from the SQL Azure team: 50GB databases available on June 28th, support for spatial data, Data Sync Service for SQL Azure, Microsoft SQL Server Web Manager, and Access 2010 support for SQL Azure. Read about it here.

    Read the article

  • Entity Framework 4.0: Optimal and horrible SQL

    - by DigiMortal
    Lately I had an Entity Framework 4.0 session where I introduced the new features of Entity Framework. During the session I showed the audience how Entity Framework 4.0 can generate optimized SQL. After the session I also showed one horrible example of how awful the SQL generated by Entity Framework can be. In this posting I will cover both examples. Optimal SQL Before going to the code, take a look at the following model. There is a class called Event and I will use this class in my query. Here is the LINQ to Entities query that uses a small anonymous type. var query = from e in _context.Events             select new { Id = e.Id, Title = e.Title }; Debug.WriteLine(((ObjectQuery)query).ToTraceString()); Running this code gives us the following SQL. SELECT      [Extent1].[event_id] AS [event_id],      [Extent1].[title] AS [title]  FROM [dbo].[events] AS [Extent1] This is really small – no additional fields in the SELECT clause. Nice, isn’t it? Horrible SQL Ayende Rahien's blog shows us the darker side of Entity Framework 4.0 queries. You can find a comparison between NHibernate, LINQ to SQL and LINQ to Entities in the posting What happens behind the scenes: NHibernate, Linq to SQL, Entity Framework scenario analysis. In this posting I will show you the resulting query and let you think about how much better it could be done. Well, it is not something we want to see running on our servers. I hope that the EF team improves the generated SQL to an acceptable level before Visual Studio 2010 is released. There is also a moral to this example: you should always check the queries that your O/R-mapper generates. Behind the curtains it may silently generate queries that perform badly, and in that case you need to optimize your data querying strategy. Conclusion Entity Framework 4.0 is a new product with a lot of new features and it is clear that not everything is 100% super in its first release. But it is still a great step forward, and I hope that on 12.04.2010 we will have a new, promising O/R-mapper available to use in our projects. If you want to read more about Entity Framework 4.0 and Visual Studio 2010 then please feel free to follow this link to the list of my Visual Studio 2010 and .NET Framework 4.0 postings.
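
    The ToTraceString call in the snippet above is the quickest way to see exactly what T-SQL a LINQ to Entities query will send to the server. Here is a minimal C# sketch of the same idea; it assumes an EF 4.0 model generated from the database, so the EventsContext and Event names below are placeholders standing in for whatever your designer produced.

    using System;
    using System.Data.Objects;   // ObjectQuery lives here in EF 4.0
    using System.Linq;

    class GeneratedSqlInspector
    {
        static void Main()
        {
            // EventsContext and Event are placeholder names for a generated EF 4.0 model.
            using (var context = new EventsContext())
            {
                // Projecting into a small anonymous type keeps the SELECT list short.
                var query = from e in context.Events
                            select new { e.Id, e.Title };

                // Every LINQ to Entities query is an ObjectQuery underneath,
                // so it can be asked for the T-SQL it will execute.
                Console.WriteLine(((ObjectQuery)query).ToTraceString());
            }
        }
    }

    Comparing the traced SQL of a narrow projection like this one against the SQL of a full-entity query is a quick way to spot when the mapper is dragging along columns, joins, or nested subqueries you never asked for.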

    Read the article

  • APress Deal of the Day 2/June/2014 - Pro SQL Server 2012 Integration Services

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/02/apress-deal-of-the-day-2june2014---pro-sql-server.aspx Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430236924 is Pro SQL Server 2012 Integration Services. “Pro SQL Server 2012 Integration Services is your key to building powerful extract, transform, load (ETL) solutions using SQL Server 2012 Integration Services (SSIS).”

    Read the article

  • Introducing Microsoft SQL Server 2008 R2

    - by CatherineRussell
    I was thrilled to find a free ebook: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The purpose of Introducing Microsoft SQL Server 2008 R2 is to point out both the new and the improved in the latest version of SQL Server. Great info, and they are sharing it for free. Feel free to follow the authors and ask questions on Twitter: @RossMistry @StaciaMisner   Here is the link to the book: http://blogs.msdn.com/b/microsoft_press/archive/2010/04/14/free-ebook-introducing-microsoft-sql-server-2008-r2.aspx

    Read the article

  • Read SQL Server Reporting Services Overview

    - by Editor
    Read an excellent, 14-page, general overview of Microsoft SQL Server 2008 Reporting Services entitled White Paper: Reporting Services in SQL Server 2008. Download the White Paper. (360 KB Microsoft Word file) White Paper: Reporting Services in SQL Server 2008 Microsoft SQL Server 2008 Reporting Services provides a complete server-based platform that is designed to support a wide variety [...]

    Read the article

  • APress Deal of the day 29/Jun/2013 - Pro SQL Database for Windows Azure

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/29/apress-deal-of-the-day-29jun2013---pro-sql-database.aspx Today's $10 Deal of the Day from APress at http://www.apress.com/9781430243953 is Pro SQL Database for Windows Azure. "Pro SQL Database for Windows Azure, 2nd Edition introduces you to Microsoft's cloud-based delivery of its enterprise-caliber, SQL Server database management system—showing you how to program and administer it in a variety of cloud computing scenarios."

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices? The classic answer: write a web service layer Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different? The better answer: the iSql SDK Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API. OSQL in 25 lines of code As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video. Beta out now - your chance to give us your suggestions! We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!
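
    The iSql SDK's own API is not reproduced here (the beta on the MobileFoo website documents it); purely to illustrate the "classic answer" the article argues against, the sketch below shows the kind of one-off web service layer a team would otherwise hand-roll for every app. The endpoint, connection string, table, and query are invented placeholders, and real code would also need authentication, parameter handling, and error reporting, which is precisely the boilerplate that keeps getting rewritten.

    using System;
    using System.Data.SqlClient;
    using System.Net;
    using System.Text;

    // A deliberately minimal example of the per-app web service layer described above:
    // one hand-rolled HTTP endpoint that proxies a single query from SQL Server to a mobile client.
    class TinyQueryService
    {
        const string ConnectionString =
            "Data Source=.;Initial Catalog=Sales;Integrated Security=True"; // placeholder

        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/orders/");
            listener.Start();

            while (true)
            {
                HttpListenerContext ctx = listener.GetContext();
                var json = new StringBuilder("[");

                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand("SELECT TOP 10 OrderId, Total FROM dbo.Orders", conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        bool first = true;
                        while (reader.Read())
                        {
                            if (!first) json.Append(",");
                            json.AppendFormat("{{\"orderId\":{0},\"total\":{1}}}",
                                reader.GetInt32(0), reader.GetDecimal(1));
                            first = false;
                        }
                    }
                }
                json.Append("]");

                byte[] body = Encoding.UTF8.GetBytes(json.ToString());
                ctx.Response.ContentType = "application/json";
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }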

    Read the article

  • Le "In-Memory" débarque dans SQL Server, Microsoft muscle Excel pour la BI et annonce le SP1 de SQL Server 2012 au Pass Summit 2012

    Pass Summit 2012: Microsoft adds an in-memory database to SQL Server and announces the availability of SQL Server 2012 SP1 and SQL Server 2012 PDW. Microsoft used Pass Summit 2012, its conference dedicated to SQL Server, to unveil its vision of a modern data management platform. Through several announcements, the company renewed its commitment to helping customers get more out of their data, whether structured or unstructured, wherever it resides and whatever its size. The headline announcement that opened Pass Summit 2012 was the presentation of a new in-memory transactional database technology

    Read the article

  • APress Deal of the Day 20/Dec/2010 - Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008

    - by TATWORTH
    Today's $10 bargain PDF from Apress is: Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008. Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. $49.99 | Published Jul 2010 |

    Read the article

  • Configure SQL Express 2005 for remote access

    Please follow the steps below, shown in the screenshots in the original post, to configure SQL Server Express 2005 for remote access. Fig. 1: Open SQL Server Configuration Manager. Fig. 2: Navigate to SQL Server 2005 Network Configuration and click the Protocols node. Fig. 3: Enable the TCP/IP protocol. Fig. 4: Enable the Named Pipes protocol. Fig. 5: After enabling the TCP/IP and Named Pipes protocols. Fig. 6: Finally, click TCP/IP to configure the port number on which SQL Express 2005 listens for network requests. A quick connectivity check you can run from a remote machine is sketched below.
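
    Once the protocols are enabled and the SQL Server (SQLEXPRESS) service has been restarted, it is worth confirming that the instance really answers over TCP/IP from another machine. The C# sketch below is one way to do that; the server name, instance name, and port are placeholders for your own environment.

    using System;
    using System.Data.SqlClient;

    class RemoteAccessCheck
    {
        static void Main()
        {
            // Placeholder server, instance and port - substitute your own values.
            // The tcp: prefix forces the connection over TCP/IP rather than
            // shared memory or named pipes, which is exactly what was just enabled.
            const string connectionString =
                "Data Source=tcp:MYSERVER\\SQLEXPRESS,1433;Integrated Security=True;Connect Timeout=5";

            try
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    Console.WriteLine("Connected. Server version: " + connection.ServerVersion);
                }
            }
            catch (SqlException ex)
            {
                // Failure here usually points at the protocol settings, the port,
                // the Windows firewall, or the SQL Server Browser service.
                Console.WriteLine("Connection failed: " + ex.Message);
            }
        }
    }

    If connecting with an explicit port works but connecting by instance name alone does not, the SQL Server Browser service (which maps instance names to ports) is the usual missing piece.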

    Read the article
