Search Results

Search found 93496 results on 3740 pages for 'email server'.


  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak’s new version (2.1) revealed a few additional interesting features. For those of you who haven’t read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance of SQL Server applications, as well as their scalability, without making code changes to the applications or to the databases. SafePeak performs dynamic database caching, keeping the result sets of queries and stored procedures in memory while keeping that cache correct and up to date. Cached queries are retrieved from SafePeak’s RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO) and the application or the infrastructure gets better scalability. The SafePeak solution is hosted either within your cloud servers, hosted servers or your enterprise servers, as part of the application architecture. The application is connected via a change of connection strings or by adding a reroute line to the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about SafePeak’s architecture and how it works, I suggest reading the vendor’s webpage: SafePeak Architecture.
    More interesting new features in SafePeak 2.1: In my previous review of the new SafePeak I covered the first 4 things I noticed (check out my article “SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration”): cache setup and fine-tuning – a critical part for getting good caching results; database templates; choosing which database to cache; and the monitoring and analysis options in SafePeak. Since then I have had a chance to play with SafePeak some more, and here is what I found.
    5. Analysis of SQL Performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and once it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (list of tables, views, user-defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information for queries and procedures, tables, views, databases and even at the instance level. The data is now shown for all arriving queries: read queries (which can be cached) as well as all types of updates such as DMLs, DDLs, DCLs, and even session-settings queries. The stats are updated every 15 minutes and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.
    6. Logon trigger, for making sure nothing corrupts SafePeak’s cache data: If you have an application with many parts, many servers and many possible locations that can update the database, or the SQL Server is accessible to many DBAs or software engineers, each of them can access the database directly and make changes without going through SafePeak – and this can potentially corrupt the data stored in the SafePeak cache. To keep its cache correct, SafePeak needs all updates to arrive through it; if, for example, a DBA accesses the database directly and makes some changes, SafePeak will simply not know about it and will not clean its cache. In the new version, SafePeak introduces a feature called “Logon Trigger” to solve this challenge. With the click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that does not come from SafePeak. The SafePeak dashboard has an interface that lets you control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.
    Other features: 7. User management: the ability to grant someone permissions without changing the configuration, so SafePeak can be used purely as a performance analysis tool. 8. Better reports for analysis of performance, using 15-minute resolution charts. 9. Caching of client cursors. 10. Support for IPv6.
    Summary: SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges, especially if your apps are packaged or 3rd party, since no code changes are needed. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU usage and eliminating I/O and storage access. The SafePeak team provides a free, fully functional trial (www.safepeak.com/download) and offers one-on-one assistance during the trial. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
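    A side note on the hosts-file reroute mentioned above: it is ordinary Windows name resolution, so an entry of the following shape on each application server points the SQL Server name the applications already use at the SafePeak machine instead (the IP address and host name here are hypothetical, not taken from SafePeak documentation):
        # c:\windows\system32\drivers\etc\hosts
        # 10.0.0.25 = SafePeak server, sqlprod01 = host name already used in the connection strings
        10.0.0.25    sqlprod01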

    Read the article

  • SQL SERVER – Solution to Puzzle – Simulate LEAD() and LAG() without Using SQL Server 2012 Analytic Function

    - by pinaldave
    Earlier I wrote a series on the SQL Server 2012 analytic functions. During the series, to maximize the learning and keep it fun, we had a few puzzles. One of the puzzles was simulating LEAD() and LAG() without using the SQL Server 2012 analytic functions. Please read the puzzle here first before reading the solution: Write T-SQL Self Join Without Using LEAD and LAG. When I originally wrote the puzzle I made a small blunder and the question was a bit confusing; I corrected it later on and wrote a follow-up blog post over here where I describe the giveaway. Quick Recap: Generate the following results without using SQL Server 2012 analytic functions. I received so many valid answers. Some answers were similar to others and some were very innovative. Some answers were very adaptive and some did not work when I changed the WHERE condition. After selecting all the valid answers, I put them in a table, ran a RANDOM function on them and selected the winners. Here are the valid answers.
    No Joins and No Analytic Functions
    Excellent Solution by Geri Reshef – Winner of SQL Server Interview Questions and Answers (India | USA)
        WITH T1 AS (SELECT Row_Number() OVER(ORDER BY SalesOrderDetailID) N,
        s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663))
        SELECT SalesOrderID,SalesOrderDetailID,OrderQty,
        CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2)
        ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY N/2) END LeadVal,
        CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY N/2)
        ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2) END LagVal
        FROM T1
        ORDER BY SalesOrderID, SalesOrderDetailID, OrderQty;
        GO
    No Analytic Function and Early Bird
    Excellent Solution by DHall – Winner of Pluralsight 30 days Subscription
        -- a query to emulate LEAD() and LAG()
        ;WITH s AS
        (
        SELECT 1 AS ldOffset, -- equiv to 2nd param of LEAD
        1 AS lgOffset, -- equiv to 2nd param of LAG
        NULL AS ldDefVal, -- equiv to 3rd param of LEAD
        NULL AS lgDefVal, -- equiv to 3rd param of LAG
        ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row,
        SalesOrderID, SalesOrderDetailID, OrderQty
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        ISNULL( sLd.SalesOrderDetailID, s.ldDefVal) AS LeadValue,
        ISNULL( sLg.SalesOrderDetailID, s.lgDefVal) AS LagValue
        FROM s
        LEFT OUTER JOIN s AS sLd ON s.row = sLd.row - s.ldOffset
        LEFT OUTER JOIN s AS sLg ON s.row = sLg.row + s.lgOffset
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    No Analytic Function and Partition By
    Excellent Solution by DHall – Winner of Pluralsight 30 days Subscription
        /* a query to emulate LEAD() and LAG() */
        ;WITH s AS
        (
        SELECT 1 AS LeadOffset, /* equiv to 2nd param of LEAD */
        1 AS LagOffset, /* equiv to 2nd param of LAG */
        NULL AS LeadDefVal, /* equiv to 3rd param of LEAD */
        NULL AS LagDefVal, /* equiv to 3rd param of LAG */
        /* Try changing the 4 values above to see their effect on the results */
        /* The values given above of 1, 1, null and null behave the same as the default 2nd and 3rd parameters to LEAD() and LAG() */
        ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row,
        SalesOrderID, SalesOrderDetailID, OrderQty
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        ISNULL( sLead.SalesOrderDetailID, s.LeadDefVal) AS LeadValue,
        ISNULL( sLag.SalesOrderDetailID, s.LagDefVal) AS LagValue
        FROM s
        LEFT OUTER JOIN s AS sLead ON s.row = sLead.row - s.LeadOffset
        /* Try commenting out this next line when LeadOffset != 0 */
        AND s.SalesOrderID = sLead.SalesOrderID
        /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LEAD() function */
        LEFT OUTER JOIN s AS sLag ON s.row = sLag.row + s.LagOffset
        /* Try commenting out this next line when LagOffset != 0 */
        AND s.SalesOrderID = sLag.SalesOrderID
        /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LAG() function */
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    No Analytic Function and CTE Usage
    Excellent Solution by Pravin Patel – Winner of SQL Server Interview Questions and Answers (India | USA)
        --CTE based solution
        ; WITH cteMain AS
        (
        SELECT SalesOrderID, SalesOrderDetailID, OrderQty,
        ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS sn
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty,
        sLead.SalesOrderDetailID AS leadvalue,
        sLeg.SalesOrderDetailID AS leagvalue
        FROM cteMain AS m
        LEFT OUTER JOIN cteMain AS sLead ON sLead.sn = m.sn+1
        LEFT OUTER JOIN cteMain AS sLeg ON sLeg.sn = m.sn-1
        ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty
    No Analytic Function and Correlated Subquery Usage
    Excellent Solution by Pravin Patel – Winner of SQL Server Interview Questions and Answers (India | USA)
        -- Correlated subquery
        SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty,
        ( SELECT MIN(SalesOrderDetailID) FROM Sales.SalesOrderDetail AS l
        WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663)
        AND l.SalesOrderID >= m.SalesOrderID AND l.SalesOrderDetailID > m.SalesOrderDetailID ) AS lead,
        ( SELECT MAX(SalesOrderDetailID) FROM Sales.SalesOrderDetail AS l
        WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663)
        AND l.SalesOrderID <= m.SalesOrderID AND l.SalesOrderDetailID < m.SalesOrderDetailID ) AS leag
        FROM Sales.SalesOrderDetail AS m
        WHERE m.SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty
    This was one of the most interesting puzzles on this blog. Winners will get the following giveaways: Geri Reshef and Pravin Patel – SQL Server Interview Questions and Answers (India | USA); DHall – Pluralsight 30 days Subscription. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
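    For completeness, here is a minimal sketch of how the same result set is produced once the real SQL Server 2012 LEAD() and LAG() analytic functions (the ones the puzzle asked everyone to avoid) are available; add PARTITION BY SalesOrderID inside the OVER clauses if you want the partitioned behaviour of the third solution above:
        -- Requires SQL Server 2012 or later; same AdventureWorks rows as the solutions above
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(s.SalesOrderDetailID, 1, NULL) OVER (ORDER BY s.SalesOrderDetailID) AS LeadVal,
               LAG(s.SalesOrderDetailID, 1, NULL)  OVER (ORDER BY s.SalesOrderDetailID) AS LagVal
        FROM Sales.SalesOrderDetail s
        WHERE s.SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty;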

    Read the article

  • Process RECEIVED email attachment with PHP.

    - by Eli
    Hi All, I have a php script that will an catch email passed to it and process it. #!/usr/local/bin/php -q <?php while (!feof(STDIN)) {$s .= fgets(STDIN);} // Now do some work on the email source in $s. ?> This works fine. My question is how to save an attachment into the file system from the source. For example, if I isolate the section below, how do I need to process it before saving it into a gif file to create a valid gif? I assume I need to change the encoding, or otherwise process it, but does anyone know exactly? Thanks! --------------090607000609050308090504 Content-Type: image/gif; name="tfk.gif" Content-Transfer-Encoding: base64 Content-ID: <part1.06050801.05020504@etc....> Content-Disposition: attachment; filename="tfk.gif" R0lGODlheAA8APcAAAAAAP/////Wgv+xEgliOAEcDxAnFiAyHTA+JEBKK0BKLFBWM2BhOv// nf//tfz60dDEPdnPT//0Z3BtQf/kBv/pKM/KoO3UHendgoB5SOTGB/HSCNm8D7+oEvrhRcW1 U+visL2dA3hlAqqNBM+vB52DBo+ET4+EUJp+AnNgCmdUCZV9EredLJuKQ4x9PVNCBo1yC39n C6OGFYBpFp+QV4RtI1pMIJWBPq+cXkExBDQuHQwLCDYoBnRdIM+zbMCnZaiWabeleH1zWu/K e9+/dNi5dntsSaSPYpWDXHJmS6CQbl1UQOOYB/6xEv2wEv2vEvuuEvqtEvmsEvirEvaqEvWq EvSpEvOoEvGmE+6kE+yjE+uiE+qhE+mgE+efE+SdE+OcE9+ZEykdBtWvZejAcuS9cfHIeO/H d/nQffXMe/PLevzTf/rRfvnRfvjPffbNfPLKeurDdv7Vgf3UgfzTgPvTgPrSf/fPfvbOffTN fPHKe+/Jeu7Ied+7cty5cP7Vgv3VgvXOfvPMfdOxbOvGete0b7+hZfjRhNGyc+rIgti5eLWb aL+lb9K3gMWtfYh4V3dpTKyadGRcTOGaE9qVFNWRFNGOFM6MFMyKFMqJFMiHFMWFFMOEFMGC Fbx/FTQoE9CpYs2mYduzadiwZ962a921a9yzauC4beO7b5F7UeG/gO/Mis2vd3ZpUZaGaW5j T4l9aLd7FbJ3Fa50FatxFahvFqZuFh0VCLaPUMKZV7yUVMigXEhDO6RsFqFqFp9oFp1nFppl FpdjFpRgFp12PqN7QqeARa6GSrKKTXJgRJBdF49cF4xaF4pYF4VVF2BCG4diLkk1G5ZuOJpy O3hnT4dWF4FSF35PF4JaKo9nNHpMF3hLGHdKGHZJGHVJGHNIGHNHGHFGGHdOIXtSJPvWrHFF GG9EGG5DGG1DGG9FGnBGG3FHHHNJHaJ6V/+kbdJPBP+PTua4nv65lmAfAvR7ROWLYvygdM+O ce9IBb1YLKQ3ET8TBY1aSyMEAS4tLf///yH5BAEAAP8ALAAAAAB4ADwAAAj/AAkIHEiwoMGD CBMqXMiwocOHBARInEixosWLGDNq3Mixo8eOAj+KHEmypEmQEU+qXMmypcSQLmPK7BgoUSJA EwE1itaP2io6cjDCnEm0qMRESIQIUVUnaKJX+/RFCyKo6cWhRrO2XPVqx75+vBK1cYOkXz5y /JIVyUPnakqKRGhkmJvBxBCTPn78IIIxr4m5NFjiCNxRzpEd8+jd4xFpzx1GOvTR0+eqEB87 bitmQIBjwgEfOBIQnojDgAkBRA4swIjAxA8DGSz6SKD6B47TFX2MzLD64iFGiIAKmGMEgD54 9uYpKaOGkKt973YtGhSnTWaKNO4umKDRAF8BC3Bj/zRR4HvFIQV0X/RhYOSC0RRVSdohZtEh OnUcecWHr90PP3vsgYMOI4xhyCCEYGYRVhSll9sCfPlwwETeZSThhBfhUMACDAjwAwMZcAee iAL4AKEAJiTww3ukTWAADhYBsoRx9cSzCh5oDKHEAfMwQYMhAAoy4AgsAOGHHm0t+BZFOGA4 0QQmGHCXCSJKmNEQCTCAAEYMcKfehrghQNgEGVT4QwEZiCYRlD9saVEdAOyDjzvyqBJHHnz4 wYgkI8gQSR+CtLEIgSMoAegc11U0QYcTtTmBepxJFOKavUk0xAEmMMBomhQZ8MNEVkpERHse IvCopBOgd5dcAkwg4gSVtv/h1Tv46MNKH3rgEccgSryggVqC2NEIoUocmWRFDEp0AHwqDnEX EQXcJYCYFxHBAF8JxGZRkxRB+aSICeDgrLI+0JCARCrSUAB8E90Byy397CNGJJa1oSsSt2yA AiyE3NHIJyGE8AgifNSRKKgFiCdAAlmuGi1qBcBIUYgJSOvZpxWBSNEBjAoAG7oNC0DDhAyD PEECJFJ0BxFHJOGPCNAEkSAdb6SyAwkaROPIHo58QkIH1PwQh4LILmnpD+pZah4RGA/xg7RP KjyEeZJm0LFEPkBN0dQaEfEsRnbwUcgPScQQgg2I4DGHHUcAgIK+vKjCCg8cQJCEIdUdrFVF f+3/TREduvqhhA4clNCKKm3YscgOOciwwQuwBPHCBRE8YggZdwRVdEXIiCPOOKCLUwwqGNGh CzafjyMONrjgoVEau0xDzDG5jAKH5kXJQccdZzTizwsbiABLInksngIJG5AQjT8zSIBBMjiU gQfuEzEoDhZZbKE9Fp5884ZFdBwzTBjZb4HFF7+Ug8ZFcpACTjCadCLLL8x4c8becrRBiCQ5 aKBBDvxaHAxgQAIKhCAHI3AACGqwCOlR7yVGQ8cAJkjBTphjDRWpwzGEsQUKDsAKtAhHGti3 i2tw4goUfAIWZoELv81BDUDQgQo0QAIxKEEJYuhBCjZAAQpswAMPeIAL/xbRhzw8UADWw4IH ByCLY2TQGL/oIAW/IAxjqIGE1qjEEgcwCV/kwm9ywAMr/EE3A37idxegAA1D0IEPWAADLjiC HwSBqM1RZBxc8OATatHCidTBGL1AIQXDoAxcHKsipHCGFj1ICVo44xzW8Zsd4sALMZSAhyRQ gQhCkAIeiOGTPHgBDGrwCGMpqSJ49CAWgrELPxrDFlVgpDN0UceLFKMTHtzCLKyBjFMcUit0 GIIQAAA8NabgFrcIZQ1aAIQgBOEDM0iGZX6JRKNhIwyqFEYrBaAGQCqRgpiwRi6OOJE/oIMZ 
tcAEFyrhjGOggpxZkUMggiCGEGwgBZ9MQQg0QP+BCkigAQ5wQAM6YAQEGcyOE4EGJTyYBWGE QgBxKMYsvjlBTWgDFPCcCB3GgItvXAMc2SjHGGq5ETlkVCRoUMUSYpCDWxyvhx0oAQxm0IOa 9iAGN8hFKHJRjGPcD4IVUagHvcCMMaCCGLGgggc9wY1RnNQNuihGNtDhDF/E4hKaqIYpLpIG NajBFLvAhTTQgY464oEMowCFLowxjWxIwxgj1UgdCCGEW6iAAxTgwAqewVduOIMZwgjGMJxh DWs4AxieaGL1jCZUChI1FN/wxBSWCo53aoQO5rBFJy5BvilAwQlYAMY2KaIGb3jjG80QRi0y wc5QGAMa4bgGM5TRi1j/aKISmYgFM6ZBiozMQRCuAJ4GVBANbQAjFpX4QhayUAUqWKF8zIWC Jqax2KAudIrO+AYmoJDCWZSjDB3RhS0+2wQPdqEZo7BIKIKBBSpMQQpQeAIVZrEMT1QiC+6V QhTi2wQnQGEKnCjHKTASRkfo4AXRcMYJ97vFBvtCF9WlSGMnOAlgYMIJHqyFOeDgETSYgxNb PK8oLNIGc1SivBR0AhWk8AQUN3iCUOAEOohWETuo9Bm00MITpkgJKbx4AE6oRikinNDrwlgK Lt6jOdwgElKYeIkivsgulOHjF0vBEsFwRi/CgGEPSoGVpTtFNjjB3QlGgRblKIc1ajGJHS8R /wvb+Gk1rfvjAVTCG3EgyS6E4eYJRtkibwCHkT3YBC8MIxu7KIUuynGJPg/gCdMtHTpiUeZH y4IYaXBDGXCBDWB0YYl3viJQJTzoLUKBFuZYn0jKYI08UvDPFpnGJrZYaGUgI0l/AEU1skDo L5SDDRc5hTV4PUVwyFkAdSCGJ5a4CXB8b9QTuWadJxiMGYuEDNf4gnmbMeKLHEMWS2xCF5iB C5LOIRuWcDQWuiFqipgzFlFYajEsUoxlezAW4VD1nO/4aQpSIQuVniAVloGLk1YE217Ydrct ogtfLJEKtkAGSSVCDDIz1BvtnkgavnFiCjbhF1+sSL2X6F1gQ1si4/+Q4gTBYOEuO7Ya4/QI whWOEV30gtCUKMezKTIHdDTag77W90RYnXAKRkEZoKC3vSnoiwsSGeWuXvk1uPFzD4bhGrsw uERm/mpu1/zmFJQCLfpYkThk28VOuEQ2LmKKZkR9AFJAutKXOAx04I5BqaRgFoaxC21cYouV 0IZTOcJ1P1dj4ZybBUO9bpFc/KLKMObjRUhRjX5P8AlgFvnSJygMuz9dAOOw/ACwoE1QWKPU A7Ao4jFS+AF04fAYGfkEmxAGbRxbInbwuaO5cA0yXATbYFhiLIzxQNlTsPN3N1rKPTgFkMsh F82YxBY38Q3fa4TVoteC3C3yB2lgIoWWcDr/RZ4vjFh6cBbIICcayoEJF7v+GqFwnUTmYPxH K8PzJxcAO6ywRMnPQRfMIHoD0ASxYA55lhGjwAzmN0GrNFoUsXFGBmnSUBFyMAbXUGo5Fwem QGMTkQu+AHkTdAnMUA7GcAzGgA7gsHlS0Azo8Aeftw4gyEQtOBzIIAxKxXwFmHEVAQrKoG6s dESoYA0UBQWbQF0TIQejsA2Z4Ghe4AzZoA3MYA4TJwBm0A2LBHSaEAuyEAue4AsgZnTMMIP5 xw6TJW8aVQy94GgDEAWzYAwH1Xi/4H4NKGXCUGlPgAn4V4HcoAlqaAm+gAmUIAv4N367YA0C uERi94VmFoYuOIZl/1hB5kA0bXAO4LZEV2ALxWAGtrR5o5d5FfFtfdYElRAOTDYHuXANS7hF VoBCWWANpIMRNfgFajhBV5AJzjBrYCiG+zYR6hBvHmQJtkcRZwAOuOhBVCALqVYRc3AO2GSM IHcREzZBYfANI5SAqEdyx5BRk8gMvzALXCgLtFALwbAM2nANV7iGjPh56VBmU3AFkMYNHEYR orCHVEAFVVAFVoAFYTAMSUcRbFAOlbAFXDCQW7CP/chz5hALm7CQCzkL4OAG6+cLmcCQC8kJ mUAJlDAJlsANZ9CIF/EHeDAK6FAO3gAO5WAO57ALY1AKztCMMJaO+VcMvlALvjAMzDAM1v9w Dr+EhN6wDMrwk8pQDdagDedAUnKADtdQWEppDc7GfSj4DVAJleHQguYEDqcVlVHJDdywDdoQ CuTEBmNADCPJOm+QBm+ABnZQR2PADG9HBc2wVZ9XYuZgDmRFVtQkAHIAB3VJVqSACuhATXNA BmUwmGWACmSwcxXxB2+wmIuZaY34B26gBozJmGlgBmZwBmdATiXGDJwABl7wBaJlEXOAC7UQ cF7QexSRLGszB3LwB62pEa5pUprjkX4zEnKAC7aQBS3WBE2AfhYhbC45QZ3QlJ9Xm8b5e9cQ nAMgeRLxB2sgCt+AS4uXXqlpNMd5nRNBedrmQZdgDoepBuhgDtr/UIyRZw4mV5zYaRI/cAIL oB4n0CUScQIKgANiIp8SYQbccI4TNAsf9Q3VYAvB53FiZw7yV53p2RII4Cx3cQKBcQB88SEC kAGBsQCfIgfiJUhDVQmUQFFARgWTEAzFwGSndKArQSoSQaHg8SnrKQAn8CntKRF/NAtbgGRb FAVVgAVcMAmZ4AvgoAtviFAkOhKEsQAS0yoTehcr2qLTQlrH0AyycFuUUAmawAmxMAu9MAzN sA3SkAsjlBHJEqQdoQB8gQMKsBclwhmEQQMnkKJDoABJMxyjcAxtBQ3ZIDvGgAy6EAqmYAZa 96Vg+hFZMxFEkDTfwRdOozUfaRJ++qeMGvoRi9qokKoRjxqplFo0EHGpmJqpmrqpCBEQADs= --------------090607000609050308090504--

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project “Hekaton”, which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer a significant performance improvement for a typical OLTP workload. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory forever, without losing a single record. The most significant part is that they still support the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.
    Points to remember: Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP. Disk-based tables refer to the normal tables we have created in SQL Server since its inception; these tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, that is compiled into machine code and can further improve data access performance for memory-optimized tables; natively compiled stored procedures can only reference memory-optimized tables and can’t be used to reference any disk-based table. Interpreted Transact-SQL stored procedures are what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables.
    Using In-Memory OLTP: The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with the 32-bit editions.
    Creating Databases: Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
    Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:
        CREATE DATABASE InMemoryDB
        ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
        (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB)
        COLLATE Latin1_General_100_BIN2;
    The example code above creates files on three different drives (D:, S: and R:) for the data files and the in-memory storage, so if you would like to run this code, change the drive and folder locations to suit your environment. Also notice that a binary Windows (non-SQL) collation was specified; a BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same.
        ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
        GO
    Creating Tables: There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY): a memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist its data to disk, which means the data durability is compromised, whereas DURABILITY=SCHEMA_AND_DATA ensures that the data is persisted along with the schema. Indexing Memory Optimized Tables: a memory-optimized table created with DURABILITY=SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified.
        CREATE TABLE Mem_Table
        (
        [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        [City] VARCHAR(32) NULL,
        [State_Province] VARCHAR(32) NULL,
        [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    As you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure the table is treated as a memory-optimized table and not just a normal table, and we have used the DURABILITY clause SCHEMA_AND_DATA, which means it will persist data along with the metadata. You can also notice this table has a PRIMARY KEY declared upfront, which is a mandatory clause for memory-optimized tables of this durability. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage for memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let’s explore some examples to understand the performance gains of memory-optimized tables.
    I will be using the database I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.
        USE InMemoryDB
        GO
        -- Creating a disk based table
        CREATE TABLE dbo.Disktable
        (
        Id INT IDENTITY,
        Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
        Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
        Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
        Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
        Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only)
        GO
        -- Now insert 100000 records in dbo.Disktable and observe the time taken
        DECLARE @i_t bigint
        SET @i_t =1
        WHILE @i_t<= 100000
        BEGIN
        INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t))
        SET @i_t+=1
        END
        GO
        -- Do the same inserts for memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t bigint
        SET @i_t =1
        WHILE @i_t<= 100000
        BEGIN
        INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
        SET @i_t+=1
        END
        GO
        -- Now finally do the same inserts for memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t bigint
        SET @i_t =1
        WHILE @i_t<= 100000
        BEGIN
        INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
        SET @i_t+=1
        END
    The above 3 inserts took 1.20 minutes, 54 secs and 2 secs respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with durability SCHEMA_ONLY is even faster, as it does not bother persisting its data to disk, which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
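    As an addendum: natively compiled stored procedures are mentioned in the points to remember above but never shown. Here is a minimal, illustrative sketch of one written against the Mem_Table definition from earlier (the procedure and parameter names are made up for the example; the syntax follows SQL Server 2014, which requires NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS and an ATOMIC block):
        CREATE PROCEDURE dbo.InsertMemTable
            @Name VARCHAR(32),
            @City VARCHAR(32)
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            -- Natively compiled procedures may only reference memory-optimized tables.
            INSERT INTO dbo.Mem_Table ([Name], [City], [State_Province], [LastModified])
            VALUES (@Name, @City, NULL, GETDATE());
        END;
        GO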

    Read the article

  • Freemarker - lack of special character in email subject template causes email content crash

    - by freakman
    I'm fighting a strange error. I'm using separate FreeMarker templates for the mail subject and body. The mail is sent using org.springframework.mail.javamail.JavaMailSender. Only templates that contain some special Swedish character work in my application (yes, you read that right – not the other way around). If I delete that character, my email content breaks. It then contains:
        MIME-Version: 1.0
        Content-Type: text/html;charset=UTF-8
        Content-Transfer-Encoding: 7bit
        .. html code here ..
    My freemarker.properties file:
        locale=sv_SE
        classic_compatible=false
        number_format=
        date_format=yyyy-MM-dd
        time_format=HH:mm
        datetime_format=yyyy-MM-dd HH:mm
        output_encoding=UTF-8
        url_escaping_charset=UTF-8
        auto_import=spring.ftl as spring
        auto_include=
        default_encoding=UTF-8
        localized_lookup=true
        strict_syntax=true
        whitespace_stripping=true
        template_update_delay=10
    I've tried converting the subject file with the dos2unix tool. Running 'file -bi subject.ftl' shows that the encoding is us-ascii; with the special character added, it is utf-8. This is surprisingly strange to me... //SOLUTION: use ':set bomb' and save the file in Vim.
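    A plausible reading of those symptoms is that without a BOM (and without any non-ASCII byte) the subject template gets detected and read as plain ASCII/platform encoding, which is exactly what ':set bomb' works around. A belt-and-braces option on the sending side is to force UTF-8 when the MIME message is built; a hedged sketch using Spring's MimeMessageHelper (method and variable names here are illustrative, not taken from the original code):
        import javax.mail.internet.MimeMessage;
        import org.springframework.mail.javamail.JavaMailSender;
        import org.springframework.mail.javamail.MimeMessageHelper;

        // subject and htmlBody are the strings rendered from subject.ftl and body.ftl
        void sendNotification(JavaMailSender mailSender, String to, String subject, String htmlBody) throws Exception {
            MimeMessage mimeMessage = mailSender.createMimeMessage();
            // The third argument forces UTF-8 for the subject and body, regardless of how the template file was decoded.
            MimeMessageHelper helper = new MimeMessageHelper(mimeMessage, true, "UTF-8");
            helper.setTo(to);
            helper.setSubject(subject);
            helper.setText(htmlBody, true); // true = send as HTML
            mailSender.send(mimeMessage);
        }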

    Read the article

  • GMail suspects confirmation email of stealing personal information

    - by Dennis Gorelik
    When user registers on my web site, web site sends user email confirmation link. Subject: Please confirm your email address Body:Please open this link in your browser to confirm your email address: http://www.postjobfree.com/a/c301718062444f96ba0e358ea833c9b3 This link will expire on: 6/9/2012 8:04:07 PM EST. If my web site sends that email to GMaill (either @gmail.com or another domain that's handled by Google Apps) and that user never emailed to email -- then GMail not only puts the email to spam folder, but also adds prominent red warning:Be careful with this message. Similar messages were used to steal people's personal information. Unless you trust the sender, don't click links or reply with personal information. Learn more That warning really scares many of my users, so they are afraid to open that link and confirm their email. What can I do about it? Ideally I would like that message end up in user's inbox, not spam folder. But at least how do I prevent that scary message? IP address of my mailing server is not blacklisted: http://www.mxtoolbox.com/SuperTool.aspx?action=blacklist%3a208.43.198.72 I use SPF and DKIM signature. Below is the email that ended up in spam folder with that scary red message. Delivered-To: [email protected] Received: by 10.112.84.98 with SMTP id x2csp36568lby; Fri, 8 Jun 2012 17:04:15 -0700 (PDT) Received: by 10.60.25.6 with SMTP id y6mr9110318oef.42.1339200255375; Fri, 08 Jun 2012 17:04:15 -0700 (PDT) Return-Path: Received: from smtp.postjobfree.com (smtp.postjobfree.com. [208.43.198.72]) by mx.google.com with ESMTP id v8si6058193oev.44.2012.06.08.17.04.14; Fri, 08 Jun 2012 17:04:15 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) client-ip=208.43.198.72; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) [email protected]; dkim=pass [email protected] DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; d=postjobfree.com; s=postjobfree.com; h= received:message-id:mime-version:from:to:date:subject:content-type; b=TCip/3hP1WWViWB1cdAzMFPjyi/aUKXQbuSTVpEO7qr8x3WdMFhJCqZciA69S0HB4 Koatk2cQQ3fOilr4ledCgZYemLSJgwa/ZRhObnqgPHAglkBy8/RAwkrwaE0GjLKup 0XI6G2wPlh+ReR+inkMwhCPHFInmvrh4evlBx/VlA= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=postjobfree.com; s=postjobfree.com; h=content-type:subject:date:to:from:mime-version:message-id; bh=N59EIgRECIlAnd41LY4HY/OFI+v1p7t5M9yP+3FsKXY=; b=J3/BdZmpjzP4I6GA4ntmi4REu5PpOcmyzEL+6i7y7LaTR8tuc2h7fdW4HaMPlB7za Lj4NJPed61ErumO66eG4urd1UfyaRDtszWeuIbcIUqzwYpnMZ8ytaj8DPcWPE3JYj oKhcYyiVbgiFjLujib3/2k2PqDIrNutRH9Ln7puz4= Received: from sv3035 (sv3035 [208.43.198.72]) by smtp.postjobfree.com with SMTP; Fri, 8 Jun 2012 20:04:07 -0400 Message-ID: MIME-Version: 1.0 From: "PostJobFree Notification" To: [email protected] Date: 8 Jun 2012 20:04:07 -0400 Subject: Please confirm your email address Content-Type: multipart/alternative; boundary=--boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08 ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Please open this link in your browser to confirm your email addre= ss: =0D=0Ahttp://www.postjobfree.com/a/c301718062444f96ba0e358ea8= 33c9b3 =0D=0AThis link will expire on: 6/9/2012 8:04:07 PM EST. 
=0D=0A ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08 Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: base64 PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij48L2hlYWQ+DQo8Ym9keT48ZGl2 Pg0KUGxlYXNlIG9wZW4gdGhpcyBsaW5rIGluIHlvdXIgYnJvd3NlciB0byBjb25m aXJtIHlvdXIgZW1haWwgYWRkcmVzczo8YnIgLz48YSBocmVmPSJodHRwOi8vd3d3 LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRmOTZiYTBlMzU4ZWE4MzNj OWIzIj5odHRwOi8vd3d3LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRm OTZiYTBlMzU4ZWE4MzNjOWIzPC9hPjxiciAvPlRoaXMgbGluayB3aWxsIGV4cGly ZSBvbjogNi85LzIwMTIgODowNDowNyBQTSBFU1QuPGJyIC8+DQo8L2Rpdj48L2Jv ZHk+PC9odG1sPg== ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08--
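    One additional step commonly suggested on top of SPF and DKIM (not something this post verifies as the fix) is publishing a DMARC policy, which tells Gmail explicitly how mail failing SPF/DKIM alignment for your domain should be treated and gets you failure reports. A sketch of such a DNS TXT record, with a hypothetical reporting mailbox:
        _dmarc.postjobfree.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:[email protected]"
    Starting with p=none is the usual advice: you receive aggregate reports without affecting delivery, and can tighten the policy later.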

    Read the article

  • APACHE - PHP - Bounce emails

    - by user1179459
    I want to improve our mail server lists by handling all the bounces we get from our websites. I have a website which has over 8000 users and another website which has over 1500 users; they are emailed various notifications every second, i.e. job alerts, email alerts. I am using POP connections with Exim on an Apache server, and most scripts are based on PHP and generate email on the fly. Problems I have: Some users registered a long time ago and by now quite a few have bouncing email addresses. Some users register with dummy emails like [email protected] which never existed but are syntactically valid addresses; is there any chance of stopping this without asking them to log in to the email account and click links, which don't work most of the time (too annoying for the end user)? The server is sending unnecessary emails, which could be avoided if I knew the addresses don't exist. Solutions I need: Is there a way I can download the bounce email list somewhere (WHM/cPanel)? I know Exim has it but it's not readable (I need a file like CSV or something similar so I can scan it and write a PHP script to delete the users from the database). I need to know if there is any function in PHP that can check the existence of an email address on the fly, so that I can make the send function in the mailer class check before it sends out. On the server, will bouncing emails eat up a lot of server resources, like memory/CPU for processing them, or are they minimal enough that we don't have to worry about this at all? Maybe an open-source or Linux tool to capture them, view them as reports and clean them up? I am not a Linux expert or server admin but I do a lot of PHP coding, so please be descriptive with the solutions, especially if they are Linux commands. Thank you!
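    On the question of checking an address on the fly: no PHP function can prove a mailbox exists without actually talking SMTP to the remote server, but you can cheaply reject syntactically invalid addresses and domains that have no mail server before sending. A minimal sketch using built-in functions (the function name is made up):
        <?php
        // Cheap pre-send checks: syntax + DNS. Passing still does not guarantee the
        // mailbox exists, only that the domain can plausibly receive mail.
        function email_domain_looks_valid($email)
        {
            if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
                return false;
            }
            $domain = substr(strrchr($email, '@'), 1);
            // Prefer an MX record; fall back to an A record, as SMTP itself does.
            return checkdnsrr($domain, 'MX') || checkdnsrr($domain, 'A');
        }
        var_dump(email_domain_looks_valid('someone@example.com'));
        ?>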

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    From the month of June till mid-July it has been sports fever. First it was Wimbledon tennis and then soccer fever was all over. There are a huge number of fan followers and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the later part of July. Following these sports, as the events unfold towards the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is team performance against another team, and then there is how an individual member fares against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or in the recent past: head-to-head stats during the World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don't reflect the calibre of the current team because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and getting to the next level of information. Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports like a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As discussed in the example, the team scorecard is like the Batch Execution Statistics and the individual stats are like the Object level statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports – it is like I think about diet all the time except while I am eating.
    Performance – Batch Execution Statistics: Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections, as outlined below. Section 1: This is a graphical bar-graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative and the details of each batch are available in section 3 (detailed below). Section 2: This represents a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache. Section 3: This is the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries.
    Performance – Object Execution Statistics: The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects with the Avg. CPU, IO and other details, which are self-explanatory.
    At a high level both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and they build values based on calculations using columns in them:
        SELECT *
        FROM sys.dm_exec_query_stats s1
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        WHERE s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL
        ORDER BY s1.sql_handle;
    This is one of the simplest forms of report, and in future blogs we will look at more complex ones. I truly hope that these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is not available in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
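    If you want similar numbers outside SSMS (for example from a scheduled job), a rough equivalent of the report's "top consumers" view can be written directly against the same two DMVs. A minimal sketch, not the exact query the report runs:
        -- Top 10 cached statements by total CPU, with average logical reads/writes
        SELECT TOP (10)
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                   ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text,
               qs.execution_count,
               qs.total_worker_time / 1000 AS total_cpu_ms,
               qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
               qs.total_logical_writes / qs.execution_count AS avg_logical_writes
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;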

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! With this series on various reports, I think we have come to a logical point. We have covered all the reports at the server level. This means the reports we saw were targeted towards activities related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere: Patient: Doc, it hurts when I touch my head. Doc: OK, go on. What else have you experienced? Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc. Doc: Hmmm … Patient: I feel it hurts when I touch anywhere in my body. Doc: Ahh … now I get it. You need a plaster on your finger, John. Sometimes the server level gives an indication of what is happening in the system, but we need to get to the root cause for a specific database. So, this is the first blog in the series where we start discussing database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four “disk” reports because they are similar: Disk Usage, Disk Usage by Top Tables, Disk Usage by Table, Disk Usage by Partition.
    Disk Usage: This report shows a variety of information about the database. Let us discuss it one piece at a time. We have divided the output into 5 different sections. Section 1 shows the high-level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands; it is using sys.data_spaces and DBCC SHOWFILESTATS. Sections 2 and 3 are pie charts, one for data file allocation and another for the transaction log file. The pie chart for “Data Files Space Usage (%)” shows the space consumed by data, by indexes, and unallocated space which is allocated to the SQL Server database but not yet filled with anything. “Transaction Log Space Usage (%)” uses DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file. Section 4 shows the data from the Default Trace and looks at Event IDs 92, 93, 94, 95, which are for “Data File Auto Grow”, “Log File Auto Grow”, “Data File Auto Shrink” and “Log File Auto Shrink” respectively. Here is an expanded view of that section. If the default trace is not enabled, then this section is replaced by the message “Trace Log is disabled” as highlighted below. Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section. This shows the physical layout of the files. In case you have In-Memory objects in the database (from SQL Server 2014), the report shows information about those as well. Here is a screenshot taken for a different database, which has an In-Memory table. I have highlighted the new things which are only shown for a database with In-Memory objects. The new sections highlighted above use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is ‘FX’ in sys.data_spaces. The next set of reports is targeted at getting information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows do we have in a table? Is there any table which has a lot of reserved space but is unused?
    Which partition of the table has more data?
    Disk Usage by Top Tables: This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.
    Disk Usage by Table: This report is the same as the earlier report, with a few differences. The first report shows only 1000 rows. The first report orders by values in the DMV sys.dm_db_partition_stats, whereas the second one orders by the name of the table. Both of the reports have an interactive sort facility; we can click on any column header and change the sort order of the data.
    Disk Usage by Partition: This report shows the distribution of the data in the table based on the partitions in the table. It is very similar to the previous output, now with partition details. Here is the query taken from Profiler.
        SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number
        , (dense_rank() OVER (ORDER BY a5.name, a2.name))%2 AS l1
        , a1.OBJECT_ID
        , a5.name AS [schema]
        , a2.name
        , a1.index_id
        , a3.name AS index_name
        , a3.type_desc
        , a1.partition_number
        , a1.used_page_count * 8 AS total_used_pages
        , a1.reserved_page_count * 8 AS total_reserved_pages
        , a1.row_count
        FROM sys.dm_db_partition_stats a1
        INNER JOIN sys.all_objects a2 ON ( a1.OBJECT_ID = a2.OBJECT_ID) AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
        INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id)
        LEFT OUTER JOIN sys.indexes a3 ON ( (a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id) )
        WHERE (SELECT MAX(DISTINCT partition_number) FROM sys.dm_db_partition_stats a4 WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
        AND a2.TYPE <> N'S' AND a2.TYPE <> N'IT'
        ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number
    Using all of the above reports, you should be able to get the usage of the database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
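    If you prefer pulling the raw numbers yourself instead of opening the report, the commands and procedures the report leans on can be run directly; a small sketch (the database and table names are just examples):
        USE AdventureWorks2012;
        GO
        DBCC SQLPERF (LOGSPACE);        -- log size and percent used, per database
        DBCC SHOWFILESTATS;             -- extent-level usage of the data files
        EXEC sp_spaceused;              -- reserved / data / index / unused totals for the current database
        EXEC sp_spaceused N'Sales.SalesOrderDetail';  -- the same breakdown for a single table
        GO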

    Read the article

  • Whitelisting website email so it is not rejected as spam

    - by Micah Burnett
    What are the processes I need to go through to make sure emails sent from my web server are not rejected as spam? This question is about legitimate site emails that members have requested, like a daily newsletter which is generated and sent by a nightly process, as well as confirmation emails. Some of the ideas I've heard are: Making sure the server sending the mail has reverse DNS set up. Manually submitting a whitelist request to major ISPs.
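    A quick way to verify the first point (that the sending IP's reverse DNS and the host name's forward DNS agree) is with dig; the IP address and host name below are placeholders:
        # Reverse lookup: the PTR record for the sending IP should name your mail host...
        dig -x 203.0.113.10 +short
        # ...and the forward lookup of that name should point back at the same IP.
        dig mail.example.com +short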

    Read the article

  • Will this increase my Virtual Private Server failure rate?

    - by Spencer Lim
    Will this increase my Virtual Private Server's failure rate if I: install Microsoft Windows Server 2008 Enterprise; install SQL Server 2008 Enterprise; install IIS 7.5; install ASP.NET MVC 2; install Microsoft Exchange (should it live inside MWS2008, or standalone without the OS?); install Team Foundation Server (should it live inside MWS2008, or standalone without the OS?) – all on one mini VPS with this specification: DELL PowerEdge R710 shared plan, DDR3 ECC RAM 16GB (1GB for this VPS), a DELL PERC 6i RAID controller (this thing alone is about 1.5k-2k) and 15K RPM SAS HDDs of 146GB (33GB for this VPS) – each HDD is freakishly fast, over 300MB read/write is possible with proper tuning. The motherboard is a DELL and it has twin redundant PSUs (870 watt, 85% efficiency), and it is running two Intel Xeon 5502 (quad core) CPUs, so about 8 physical processors (fairly shared). Is there any rule of thumb for how many of these services a single VPS can host? My resources are limited =.@ May I know, if it is installed this way (which maybe seems to defeat the purpose of a "VPS"), what will happen? Or is there any guideline on this issue (fully configuring Windows Server 2008 R2)? Thx for the reply

    Read the article

  • How to configure SSL on an instance of SQL Server to allow dedicated users to remotely access it?

    - by The Good Boy
    I have configured the instance of SQL Server to allow dedicated users to access it remotely. The connection string Data Source = 192.168.1.2,1433\sqlexpress;etc... has been tested and works. However, I have not configured SSL to secure the communication. How do I configure SSL on an instance of SQL Server so that dedicated users can still access it remotely? edit 1 The dedicated user will administer their database using SQL Server Management Studio. What I want to do is secure the communication while they administer the database using SQL Server Management Studio.
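    At a high level the usual approach is: install a server certificate for the instance (or enable Force Encryption in SQL Server Configuration Manager, which falls back to a self-signed certificate), restart the service, and then request encryption from the client. In Management Studio that is the "Encrypt connection" checkbox under Connect -> Options; for an application it is a property added to the connection string from the question, roughly (treat this as a sketch, not a complete string):
        Data Source=192.168.1.2,1433\sqlexpress;Encrypt=True;TrustServerCertificate=False;etc...
    TrustServerCertificate=False makes the client validate the server certificate, which is what you want once a proper certificate is installed.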

    Read the article

  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems – systems that my own server will inevitably interact with at some point. Hence, my questions: What things should be kept in mind and double-checked when setting up an email server? What resources are available for checking whether my email server is set up correctly? I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: “you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these.” Some things I've discovered myself: Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup of the peer IP address when receiving. Matching a reverse lookup with a follow-up forward lookup is probably employed to weed out open relays running through malware on home networks. Make sure the user in the From address exists. The From address is easily spoofed. A receiving mail server may try to contact the mail server in the From domain, and see if the From user actually exists.
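    To see the last of those checks in action (a receiving server probing whether the From-user exists), you can speak SMTP to a mail exchanger by hand; a sketch of such a manual session, with placeholder host names, addresses and illustrative responses – this is essentially what a callback-verification check does:
        $ telnet mx.example.com 25
        220 mx.example.com ESMTP
        HELO mytest.example.org
        250 mx.example.com
        MAIL FROM:<>
        250 2.1.0 Ok
        RCPT TO:<user-from-the-from-header@example.com>
        250 2.1.5 Ok      (a 550 reply here would mean the From-user does not exist)
        QUIT
        221 2.0.0 Bye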

    Read the article

  • Transferring from MailEnable to hMailServer

    - by air
    I have one Windows server (Plesk 9.5) with MailEnable Free Edition. I want to transfer from MailEnable to hMailServer, and I am looking for a program or script to transfer all email accounts, email redirects and emails from MailEnable to hMailServer. Thanks

    Read the article

  • Content Management for WebCenter Installation Guide

    - by Gary Niu
    Overvew As we known, there are two way to install Content Management for WebCenter. One way is install it by WebCenter installer wizard, another way is to install it use their own installer. This guide is for the later one. For SSO purpose, I also mentioned how to config OID identity store for Content Management for WebCenter. Content Management for WebCenter( 10.1.3.5.1) Oracle Enterprise Linux R5U4 Basic Installation -bash-3.2$ ./setup.sh Please select your locale from the list.           1. Chinese-Simplified           2. Chinese-Traditional           3. Deutsch          *4. English-US           5. English-UK           6. Español           7. Français           8. Italiano           9. Japanese          10. Korean          11. Nederlands          12. Português-Brazil Choice? Throughout the install, when entering a text value, you can press Enter to accept the default that appears between square brackets ([]). When selecting from a list, you can select the choice followed by an asterisk by pressing Enter. Select installation type from the list.         *1. Install new server          2. Update a server Choice? Content Server Installation Directory Please enter the full pathname to the installation directory. Content Server Core Folder [/oracle/ucm/server]:/opt/oracle/ucm/server Create Directory         *1. yes          2. no Choice? Java virtual machine         *1. Sun Java 1.5.0_11 JDK          2. Specify a custom Java virtual machine Choice? Installing with Java version 1.5.0_11. Enter the location of the native file repository. This directory contains the native files checked in by contributors. Content Server Native Vault Folder [/opt/oracle/ucm/server/vault/]: Create Directory         *1. yes          2. no Choice? Enter the location of the web-viewable file repository. This directory contains files that can be accessed through the web server. Content Server Weblayout Folder [/opt/oracle/ucm/server/weblayout/]: Create Directory         *1. yes          2. no Choice? This server can be configured to manage its own authentication or to allow another master to act as an authentication proxy. Configure this server as a master or proxied server.         *1. Configure as a master server.          2. Configure as server proxied by a local master server. Choice? During installation, an admin server can be installed and configured to manage this server. If there is already an admin server on this system, you can have the installer configure it to administrate this server instead. Select admin server configuration.         *1. Install an admin server to manage this server.          2. Configure an existing admin server to manage this server.          3. Don't configure an admin server. Choice? Enter the location of an executable to start your web browser. This browser will be used to display the online help. Web Browser Path [/usr/bin/firefox]: Content Server System locale           1. Chinese-Simplified           2. Chinese-Traditional           3. Deutsch          *4. English-US           5. English-UK           6. Español           7. Français           8. Italiano           9. Japanese          10. Korean          11. Nederlands          12. Português-Brazil Choice? Please select the region for your timezone from the list.         *1. Use the timezone setting for your operating system          2. Pacific          3. America          4. Atlantic          5. Europe          6. Africa          7. Asia          8. Indian          9. Australia Choice? 
Please enter the port number that will be used to connect to the Content Server. This port must be otherwise unused.
Content Server Port [4444]:
Please enter the port number that will be used to connect to the Admin Server. This port must be otherwise unused.
Admin Server Port [4440]:
Enter a security filter for the server port. Hosts which are allowed to communicate directly with the server port may access any resources managed by the server. Insure that hosts which need access are included in the filter. See the installation guide for more details.
Incoming connection address filter [127.0.0.1]:*.*.*.*

*** Content Server URL Prefix
The URL prefix specified here is used when generating HTML pages that refer to the contents of the weblayout directory within the installation. This prefix must be mapped in the web server Additional Document Directories section of the Content Management administration menu to the physical location of the weblayout directory. For example, "/idc/" would be used in your installation to refer to the URL http://ucm.company.com/idc which would be mapped in the web server to the physical location /oracle/ucm/server/weblayout.
Web Server Relative Root [/idc/]:
Enter the name of the local mail server. The server will contact this system to deliver email.
Company Mail Server [mail]:
Enter the e-mail address for the system administrator.
Administrator E-Mail Address [sysadmin@mail]:

*** Web Server Address
Many generated HTML pages refer to the web server you are using. The address specified here will be used when generating those pages. The address should include the host and domain name in most cases. If your webserver is running on a port other than 80, append a colon and the port number. Examples: www.company.com, ucm.company.com:90
Web Server HTTP Address [yekki]:yekki.cn.oracle.com:7777
Enter the name for this instance. This name should be unique across your entire enterprise. It may not contain characters other than letters, numbers, and underscores.
Server Instance Name [idc]:
Enter a short label for this instance. This label is used on web pages to identify this instance. It should be less than 12 characters long.
Server Instance Label [idc]:
Enter a long description for this instance.
Server Description [Content Server idc]:
Web Server
 *1. Apache
  2. Sun ONE
  3. Configure manually
Choice?
Please select a database from the list below to use with the Content Server.
Content Server Database
 *1. Oracle
  2. Microsoft SQL Server 2005
  3. Microsoft SQL Server 2000
  4. Sybase
  5. DB2
  6. Custom JDBC settings
  7. Skip database configuration
Choice?
Manually configure JDBC settings for this database
  1. yes
 *2. no
Choice?
Oracle Server Hostname [localhost]:
Oracle Listener Port Number [1521]:

*** Database User ID
The user name is used to log into the database used by the content server.
Oracle User [user]:YEKKI_OCSERVER

*** Database Password
The password is used to log into the database used by the content server.
Oracle Password []:oracle
Oracle Instance Name [ORACLE]:orcl
Configure the JVM to find the JDBC driver in a specific jar file
  1. yes
 *2. no
Choice?
The installer can attempt to create the database tables or you can manually create them. If you choose to manually create the tables, you should create them now.
Attempt to create database tables
  1. yes
 *2. no
Choice?
Select components to install.
  1. ContentFolios: Collect related items in folios
  2. Folders_g: Organize content into hierarchical folders
  3. LinkManager8: Hypertext link management support
  4. OracleTextSearch: External Oracle 11g database as search indexer support
  5. ThreadedDiscussions: Threaded discussion management
Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: 1,2,3,4,5
 *1. ContentFolios: Collect related items in folios
 *2. Folders_g: Organize content into hierarchical folders
 *3. LinkManager8: Hypertext link management support
 *4. OracleTextSearch: External Oracle 11g database as search indexer support
 *5. ThreadedDiscussions: Threaded discussion management
Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: F
Checking configuration. . .
Configuration OK.
Review install settings. . .

Content Server Core Folder: /opt/oracle/ucm/server
Java virtual machine: Sun Java 1.5.0_11 JDK
Content Server Native Vault Folder: /opt/oracle/ucm/server/vault/
Content Server Weblayout Folder: /opt/oracle/ucm/server/weblayout/
Proxy authentication through another server: no
Install admin server: yes
Web Browser Path: /usr/bin/firefox
Content Server System locale: English-US
Content Server Port: 4444
Admin Server Port: 4440
Incoming connection address filter: *.*.*.*
Web Server Relative Root: /idc/
Company Mail Server: mail
Administrator E-Mail Address: sysadmin@mail
Web Server HTTP Address: yekki.cn.oracle.com:7777
Server Instance Name: idc
Server Instance Label: idc
Server Description: Content Server idc
Web Server: Apache
Content Server Database: Oracle
Manually configure JDBC settings for this database: false
Oracle Server Hostname: localhost
Oracle Listener Port Number: 1521
Oracle User: YEKKI_OCSERVER
Oracle Password: 6GP1gBgzSyKa4JW10U8UqqPznr/lzkNn/Ojf6M8GJ8I=
Oracle Instance Name: orcl
Configure the JVM to find the JDBC driver in a specific jar file: false
Attempt to create database tables: no
Components: ContentFolios,Folders_g,LinkManager8,OracleTextSearch,ThreadedDiscussions
Proceed with install
 *1. Proceed
  2. Change configuration
  3. Recheck the configuration
  4. Abort installation
Choice?
Finished install type Install with warnings at 4/2/10 12:32 AM.

Run Scripts

-bash-3.2$ ./wc_contentserverconfig.sh /opt/oracle/ucm/server /mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf
Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/CS10gR35UpdateBundle.zip'
Service 'DELETE_DOC' Extended
Service 'DELETE_BYREV_REVISION' Extended
Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/ContentAccess/ContentAccess-linux.zip'
(internal)      04.02 00:40:38.019      main    updateDocMetaDefinitionV11: adding decimal column
Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/Folders_g.zip'
Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/FusionLibraries.zip'
Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/JpsUserProvider.zip'
Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/WcConfigure.zip'
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.
Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.
Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.
Restart Content Server to apply updates.

Configuring Apache Web Server

Append the following line to httpd.conf:
include "/opt/oracle/ucm/server/data/users/apache22/apache.conf"

Configuring the Identity Store (Optional)

1. Stop Oracle Content Server and the Admin Server.
2. Update the Oracle Content Server's JPS configuration file, jps-config.xml:
a. Add a service instance:
<serviceInstance provider="idstore.ldap.provider" name="idstore.oid">
  <property name="subscriber.name" value="dc=cn,dc=oracle,dc=com"></property>
  <property name="idstore.type" value="OID"></property>
  <property name="security.principal.key" value="ldap.credential"></property>
  <property name="security.principal.alias" value="JPS"></property>
  <property name="ldap.url" value="ldap://yekki.cn.oracle.com:3060"></property>
  <extendedProperty>
    <name>user.search.bases</name>
    <values>
      <value>cn=users,dc=cn,dc=oracle,dc=com</value>
    </values>
  </extendedProperty>
  <extendedProperty>
    <name>group.search.bases</name>
    <values>
      <value>cn=groups,dc=cn,dc=oracle,dc=com</value>
    </values>
  </extendedProperty>
  <property name="username.attr" value="uid"></property>
  <property name="user.login.attr" value="uid"></property>
  <property name="groupname.attr" value="cn"></property>
</serviceInstance>
b. Ensure that the <jpsContext> entry in the jps-config.xml file refers to the new serviceInstance, that is, idstore.oid and not idstore.ldap:
<jpsContext name="default">
  <serviceInstanceRef ref="idstore.oid"/>
3. Run the new script to set up the credentials for idstore.oid in the credential store:
cd CONTENT_SERVER_HOME/custom/FusionLibraries/tools
-bash-3.2$ ./run_credtool.sh
Buildfile: ./../tools/credtool.xml
    [input] skipping input as property action has already been set.
    [input] Alias: [JPS]
    [input] Key: [ldap.credential]
    [input] User Name: cn=orcladmin
    [input] Password: welcome1
    [input] JPS Config: [/opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml]
manage-creds:
     [echo] @@@ Help: run 'ant manage-creds' command to see the detailed usage
     [java] Using default context in /opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml file for credential store.
     [java] Credential store location : /opt/oracle/ucm/server/config
     [java] Credential with map JPS key ldap.credential stored successfully!
     [java]
     [java]
     [java]     Credential for map JPS and key ldap.credential is:
     [java]             PasswordCredential name : cn=orcladmin
     [java]             PasswordCredential password : welcome1
BUILD SUCCESSFUL
Total time: 1 minute 27 seconds

Testing

1. Access http://yekki.cn.oracle.com:7777/idc
2. Log in with an OID user, for example: orcladmin/welcome1
3. Make sure your JpsUserProvider status is "good"

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best

    - by pinaldave
This is a follow-up to my earlier article SQL SERVER – Convert IN to EXISTS – Performance Talk. After reading all the comments I received, I felt I could write more on the same subject to clear a few things up. First let us run the following four queries; all of them return exactly the same resultset.

USE AdventureWorks
GO
-- use of =
SELECT *
FROM HumanResources.Employee E
WHERE E.EmployeeID = (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    WHERE EA.EmployeeID = E.EmployeeID)
GO
-- use of IN
SELECT *
FROM HumanResources.Employee E
WHERE E.EmployeeID IN (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    WHERE EA.EmployeeID = E.EmployeeID)
GO
-- use of EXISTS
SELECT *
FROM HumanResources.Employee E
WHERE EXISTS (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    WHERE EA.EmployeeID = E.EmployeeID)
GO
-- use of JOIN
SELECT *
FROM HumanResources.Employee E
INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID
GO

Let us compare the execution plans of the queries listed above. Click on the image to see a larger image. It is quite clear from the execution plans that in the case of IN, EXISTS and JOIN the SQL Server Engine is smart enough to figure out that a Merge Join is the optimal plan for the query and executes it. However, in the case of the Equal (=) operator, SQL Server is forced to use a Nested Loop and test each result of the inner query against the outer query, which cuts performance. Please note that I am by no means suggesting that Nested Loop is bad or that Merge Join is better. This can very well vary with your machine and the amount of resources available on it. When I see the Equal (=) operator used in a query like the one above, I usually recommend checking whether IN, EXISTS or JOIN can be used instead. As I said, this can vary across systems. What is your take on the above queries? I believe the SQL Server Engine is usually smart enough to figure out the ideal execution plan and use it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
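If you cannot see the graphical plan images, a minimal sketch for comparing the variants from a query window (assuming the AdventureWorks sample database is installed) is to turn on I/O and timing statistics and run each query with the actual execution plan enabled (Ctrl+M in Management Studio), which shows whether the optimizer chose a Merge Join or a Nested Loop:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO
-- Run any of the four variants here and compare the reported reads and CPU time.
SELECT *
FROM HumanResources.Employee E
WHERE EXISTS (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    WHERE EA.EmployeeID = E.EmployeeID);
GO
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
GO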

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best – Part 2

    - by pinaldave
This blog post is part 2 of the earlier article SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best, based on a contribution by Paulo R. Pereira. Paulo left an excellent comment on the earlier article, once again proving the point that the SQL Server Engine is smart enough to figure out the best plan itself and use it for the query. Let us go over his comment as he posted it. “I think IN or EXISTS is the best choice, because there is a little difference between ‘Merge Join’ of query with JOIN (Inner Join) and the others options (Left Semi Join), and JOIN can give more results than IN or EXISTS if the relationship is 1:0..N and not 1:0..1. And if I try use NOT IN and NOT EXISTS the query plan is different from LEFT JOIN too (Left Anti Semi Join vs. Left Outer Join + Filter). So, I found a case where EXISTS has a different query plan than IN or ANY/SOME:”

USE AdventureWorks
GO
-- use of SOME
SELECT *
FROM HumanResources.Employee E
WHERE E.EmployeeID = SOME (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    UNION ALL
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeDepartmentHistory EA)
-- use of IN
SELECT *
FROM HumanResources.Employee E
WHERE E.EmployeeID IN (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    UNION ALL
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeDepartmentHistory EA)
-- use of EXISTS
SELECT *
FROM HumanResources.Employee E
WHERE EXISTS (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    UNION ALL
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeDepartmentHistory EA)

When we look into the execution plans of the queries listed above, we indeed get different plans, and the SQL Server Engine creates the best (least cost) plan for each query. Click on the images to see larger images. Thanks Paulo for your wonderful contribution. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
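For completeness, here is a small sketch of the negative forms Paulo mentions (not taken from the article; it assumes the same AdventureWorks tables), where the Left Anti Semi Join versus Left Outer Join + Filter difference shows up:

USE AdventureWorks
GO
-- use of NOT IN
SELECT *
FROM HumanResources.Employee E
WHERE E.EmployeeID NOT IN (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA);
-- use of NOT EXISTS
SELECT *
FROM HumanResources.Employee E
WHERE NOT EXISTS (
    SELECT EA.EmployeeID
    FROM HumanResources.EmployeeAddress EA
    WHERE EA.EmployeeID = E.EmployeeID);
-- use of LEFT JOIN with an IS NULL filter
SELECT E.*
FROM HumanResources.Employee E
LEFT JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID
WHERE EA.EmployeeID IS NULL;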

    Read the article

  • SQL SERVER – Concat Function in SQL Server – SQL Concatenation

    - by pinaldave
Earlier this week, I was delivering Advanced BI training on the subject of “SQL Server 2008 R2”. I had a great time delivering the session. During the session, we talked about SQL Server Denali. Suddenly one of the attendees expressed his displeasure with the product. He said that even though SQL Server is now moving very fast and has proved many times to be a good enterprise solution, it does not have some basic functions. I naturally asked him for an example, and he suggested CONCAT(), which exists in MySQL and Oracle. The answer is very simple – the equivalent in SQL Server to CONCAT() is ‘+’ (the plus operator, without quotes).

Method 1: Concatenating two strings
SELECT 'FirstName' + ' ' + 'LastName' AS FullName

Method 2: Concatenating two numbers
SELECT CAST(1 AS VARCHAR(10)) + 'R' + CAST(2 AS VARCHAR(10))

Method 3: Concatenating values from table columns
SELECT FirstName + ' ' + LastName
FROM AdventureWorks.Person.Contact

Well, this may look very simple, but sometimes it is very difficult to find information for even the simple things. Do you have any such example which you would like to share with the community? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology
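One caveat worth noting when emulating CONCAT() with the plus operator (a small sketch with a made-up variable, not from the original post): under the default CONCAT_NULL_YIELDS_NULL setting, concatenating a NULL with + makes the whole result NULL, so wrap nullable operands in ISNULL or COALESCE:

DECLARE @LastName VARCHAR(50);
SET @LastName = NULL;
-- NULL + anything is NULL with the + operator
SELECT 'FirstName' + ' ' + @LastName AS FullName;              -- returns NULL
-- Guard against NULLs with ISNULL (or COALESCE)
SELECT 'FirstName' + ' ' + ISNULL(@LastName, '') AS FullName;  -- returns 'FirstName '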

    Read the article

  • SQL SERVER – List All the DMV and DMF on Server

    - by pinaldave
“How many DMVs and DMFs are there in SQL Server 2008?” – I was asked this question in one of my recent SQL Server trainings. The answer is very simple:

SELECT name, type, type_desc
FROM sys.system_objects
WHERE name LIKE 'dm_%'
ORDER BY name

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
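To separate the dynamic management views from the functions in that list, a small variation on the same sys.system_objects query (a sketch, not from the original post) groups them by object type – the views appear as type 'V', while the table-valued functions appear as 'IF' or 'TF':

SELECT name, type, type_desc
FROM sys.system_objects
WHERE name LIKE 'dm_%'
  AND type IN ('V', 'IF', 'TF')   -- V = view (DMV), IF/TF = table-valued function (DMF)
ORDER BY type_desc, name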

    Read the article

  • An XEvent a Day (19 of 31) – Using Customizable Fields

    - by Jonathan Kehayias
    Today’s post will be somewhat short, but we’ll look at Customizable Fields on Events in Extended Events and how they are used to collect additional information.  Customizable Fields generally represent information of potential interest that may be expensive to collect, and is therefore made available for collection if specified by the Event Session.  In SQL Server 2008 and 2008 R2, there are 50 Events that have customizable columns in their payload.  In SQL Server Denali CTP1, there...(read more)
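As a rough illustration of how a customizable column is supplied (a sketch only – event and column names vary by version; collect_statement on sqlserver.rpc_completed is assumed here), the value is set with a SET clause when the event is added to a session:

CREATE EVENT SESSION [CollectRpcStatements] ON SERVER
ADD EVENT sqlserver.rpc_completed (
    SET collect_statement = 1          -- customizable column: include the statement text in the payload
    ACTION (sqlserver.database_id))
ADD TARGET package0.ring_buffer;
GO
-- Start it when ready:
-- ALTER EVENT SESSION [CollectRpcStatements] ON SERVER STATE = START;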

    Read the article

  • An XEvent a Day (5 of 31) - Targets Week – ring_buffer

    - by Jonathan Kehayias
Yesterday’s post, Querying the Session Definition and Active Session DMV’s, showed how to find information about the Event Sessions that exist inside a SQL Server and how to find information about the Active Event Sessions that are running inside a SQL Server using the Session Definition and Active Session DMV’s.  With the background information now out of the way, and since this post falls on the start of a new week I’ve decided to make this Targets Week, where each day we’ll look at a different...(read more)
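As a quick illustration (a sketch, not taken from the article), the ring_buffer target keeps its collected events in memory as XML, which can be read back for a running session through sys.dm_xe_session_targets:

SELECT s.name AS session_name,
       CAST(t.target_data AS XML) AS ring_buffer_contents
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON s.address = t.event_session_address
WHERE t.target_name = 'ring_buffer';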

    Read the article

  • SQL Server Driver for PHP 2.0 CTP adds PHP's PDO style data access for SQL Server

    - by The Official Microsoft IIS Site
Today at DrupalCon SF 2010, we are reaching an important milestone by releasing a Community Technology Preview (CTP) of the new SQL Server Driver for PHP 2.0, which includes support for PHP Data Objects (PDO). Alongside our efforts, the Commerce Guys, a company providing ecommerce solutions with Drupal, is also presenting a beta version of Drupal 7 running on SQL Server using this new PDO Application Programming Interface (API) in the SQL Server Driver for PHP 2.0. Providing a PDO driver in SQL...(read more)

    Read the article

  • An XEvent a Day (4 of 31) – Querying the Session Definition and Active Session DMV’s

    - by Jonathan Kehayias
Yesterday’s post, Managing Event Sessions, showed how to manage Event Sessions inside the Extended Events framework in SQL Server. In today's post, we’ll take a look at how to find information about the defined Event Sessions that already exist inside a SQL Server using the Session Definition DMV’s and how to find information about the Active Event Sessions that exist using the Active Session DMV’s. Session Definition DMV’s The Session Definition DMV’s provide information...(read more)
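As a rough sketch of the kind of queries involved (not taken from the article), the defined sessions live in the sys.server_event_sessions catalog views, while the running sessions are exposed through the sys.dm_xe_* views:

-- Sessions defined on the server (whether or not they are running)
SELECT name, startup_state
FROM sys.server_event_sessions;

-- Sessions currently running
SELECT name, create_time
FROM sys.dm_xe_sessions;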

    Read the article

  • An XEvent a Day (2 of 31) – Querying the Extended Events Metadata

    - by Jonathan Kehayias
In yesterday’s post, An Overview of Extended Events, I provided some of the necessary background for Extended Events that you need to understand to begin working with Extended Events in SQL Server. After receiving some feedback by email (thanks Aaron, I appreciate it), I have changed the post naming convention associated with the post to reflect “2 of 31” instead of 2/31, which apparently caused some confusion in Paul Randal’s and Glenn Berry’s series which were mentioned in the round-up post for...(read more)
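For readers who want to poke at that metadata directly, a minimal sketch (assuming any SQL Server 2008 or later instance) lists the Extended Events packages and the events they expose:

-- Packages that ship Extended Events objects
SELECT name, description
FROM sys.dm_xe_packages;

-- Events exposed by those packages
SELECT p.name AS package_name, o.name AS event_name, o.description
FROM sys.dm_xe_objects AS o
JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
ORDER BY p.name, o.name;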

    Read the article
