Search Results

Search found 20313 results on 813 pages for 'batch size'.


  • WPF format displayed text?

    - by Mark
    I have a column defined like this: <DataGridTextColumn Binding="{Binding Path=FileSizeBytes, Mode=OneWay}" Header="Size" IsReadOnly="True" /> Instead of displaying the file size as a raw byte count, I'd like to display it with units (KB, MB, and so on), but still have the column sort by the actual FileSizeBytes. Is there some way I can run the value through a function before displaying it?
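    One common way to do this (a sketch, not the poster's code; the converter class name is my own) is to format the displayed text with an IValueConverter while keeping the sort on the raw numeric property via SortMemberPath:

      using System;
      using System.Globalization;
      using System.Windows.Data;

      // Formats a byte count such as 1536000 as "1.5 MB" for display purposes only.
      public class FileSizeConverter : IValueConverter
      {
          public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
          {
              double bytes = System.Convert.ToDouble(value);
              string[] units = { "B", "KB", "MB", "GB", "TB" };
              int i = 0;
              while (bytes >= 1024 && i < units.Length - 1) { bytes /= 1024; i++; }
              return string.Format("{0:0.#} {1}", bytes, units[i]);
          }

          public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
          {
              throw new NotSupportedException(); // one-way, display-only formatting
          }
      }

    With the converter declared as a resource (for example <local:FileSizeConverter x:Key="FileSizeConverter" />), the column becomes <DataGridTextColumn Header="Size" IsReadOnly="True" SortMemberPath="FileSizeBytes" Binding="{Binding Path=FileSizeBytes, Mode=OneWay, Converter={StaticResource FileSizeConverter}}" />; the cell text shows units while sorting stays numeric because SortMemberPath points at the underlying FileSizeBytes value.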

    Read the article

  • C# Dictionary Performance

    - by derek
    I am using a Dictionary to store data, and will be caching it. I would like to avoid server memory issues and keep performance good by limiting the size of the Dictionary, either in memory footprint or in number of entries. What is the best method of doing this? Is there another class I should be considering other than a Dictionary?
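    One class worth considering (a sketch under assumed limits and key names, not a definitive recommendation) is System.Runtime.Caching.MemoryCache, which already supports a memory cap and expiration-based eviction, so it can stand in for a hand-rolled size-limited Dictionary in caching scenarios:

      using System;
      using System.Collections.Specialized;
      using System.Runtime.Caching;

      class CacheDemo
      {
          static void Main()
          {
              // Cap the cache at roughly 50 MB; entries are trimmed as the limit is approached.
              var config = new NameValueCollection
              {
                  { "cacheMemoryLimitMegabytes", "50" },
                  { "pollingInterval", "00:02:00" } // how often the limits are checked
              };
              var cache = new MemoryCache("dataCache", config);

              // Entries also age out after 20 minutes without being read.
              cache.Set("customer:42", "some cached value", new CacheItemPolicy
              {
                  SlidingExpiration = TimeSpan.FromMinutes(20)
              });

              object hit = cache.Get("customer:42"); // null if evicted or expired
              Console.WriteLine(hit ?? "(miss)");
          }
      }

    If a plain Dictionary is still preferred, the usual alternative is to wrap it and evict the oldest or least recently used entry once a fixed entry count is exceeded.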

    Read the article

  • Is there a way to apply a CSS class from within a style?

    - by zashu
    I'm trying to be more modular in my CSS style sheets and was wondering if there is some feature like an include or apply that allows the author to apply a set of styles dynamically. Since I am having a hard time wording the question, perhaps an example will make more sense. Let's say, for example, I have the following CSS: .red {color:#e00b0b} #footer a {font-size:0.8em} h2 {font-size:1.4em; font-weight:bold;} In my page, let's say that I want both the footer links and h2 elements to use the special red color (there may be other locations I would like to use it as well). Ideally, I would like to do something like the following: .red {color:#e00b0b} #footer a {font-size:0.8em; apply-class:".red";} h2 {font-size:1.4em; font-weight:bold; apply-class:".red";} To me, this feels "modular" because I can modify the .red class without having to worry so much about where it is used, and other locations can use the styles in that class without worrying about exactly what they are. I understand that I have the following options, and have included why, in my fairly inexperienced opinion, each is less than perfect:
    1. Add the color property to every element I want to be that color. Not ideal because, if I change the color, I have to update every rule to match the new color.
    2. Add the red class to every element I want to be red. Not ideal because it means my HTML is dictating presentation.
    3. Create an additional rule that selects every element I want to be red and apply the color property to that. Not ideal because it is harder to find all of the rules that style a specific element, making maintenance more of a challenge.
    Maybe I'm overcomplicating this and those are the only options and I should stick with them. I'm wondering, however, if the "ideal" (well, my ideal) method exists and, if so, what the proper syntax is. If it doesn't exist, option 3 above seems like my best bet. However, I would like to get confirmation.

    Read the article

  • How to submit the correct div elements (not the div that's hidden)?

    - by user356651
    Hello, I have the following code working fine but the problem is that it always submits the first div (article) even though it's hidden. My question is how do I submit the form and the elements in the form in the div that's shown? (if I select Music radiobutton, I want to submit the input elements of the Music Div not the Article div. Thanks. $(document).ready(function(){ $("input[name$='itemlist']").click(function() { var selection = $(this).val(); $("div.box").hide(); $("#"+selection).show(); }); }); <!--radio buttons--> <div id="articleselection"><input name="itemlist" type="radio" value="article" /> Article/Book </div> <div id="musicselection"><input name="itemlist" type="radio" value="music" /> Music</div> <!--article div starts--> <div id="article" class="box"> <table class="fieldgroup"> <tr><td>Journal Title: <input id="JournalTitle" name="JournalTitle" type="text" size="60" class="f-name" tabindex="1" value="JournalTitle"> </table> <table class="fieldgroup"> <tr><td>Article Author: <input id="ArticleAuthor" name="ArticleAuthor" type="text" size="40" class="f-name" tabindex="2" value="<"ArticleAuthor"></td></tr> </table> </div> <!--music div starts--> <div id="music" class="box"> <table class="fieldgroup"> <tr><td>Music Title: <input id="Music Title" name="Music Title" type="text" size="60" class="f-name" tabindex="1" value="Music Title"> </table> <table class="fieldgroup"> <tr><td> Music Author: <input id="MusicAuthor" name="Music Author" type="text" size="40" class="f-name" tabindex="2" value="<"MusicAuthor"></td></tr> </table> </div>

    Read the article

  • Abstract class and an inheritor: is it possible to factorize .parent() here?

    - by fge
    Here are what I think are the relevant parts of the code of these two classes. First, TreePointer (original source here): public abstract class TreePointer<T extends TreeNode> implements Iterable<TokenResolver<T>> { //... /** * What this tree can see as a missing node (may be {@code null}) */ private final T missing; /** * The list of token resolvers */ protected final List<TokenResolver<T>> tokenResolvers; /** * Main protected constructor * * <p>This constructor makes an immutable copy of the list it receives as * an argument.</p> * * @param missing the representation of a missing node (may be null) * @param tokenResolvers the list of reference token resolvers */ protected TreePointer(final T missing, final List<TokenResolver<T>> tokenResolvers) { this.missing = missing; this.tokenResolvers = ImmutableList.copyOf(tokenResolvers); } /** * Alternate constructor * * <p>This is the same as calling {@link #TreePointer(TreeNode, List)} with * {@code null} as the missing node.</p> * * @param tokenResolvers the list of token resolvers */ protected TreePointer(final List<TokenResolver<T>> tokenResolvers) { this(null, tokenResolvers); } //... /** * Tell whether this pointer is empty * * @return true if the reference token list is empty */ public final boolean isEmpty() { return tokenResolvers.isEmpty(); } @Override public final Iterator<TokenResolver<T>> iterator() { return tokenResolvers.iterator(); } // .equals(), .hashCode(), .toString() follow } Then, JsonPointer, which contains this .parent() method which I'd like to factorize here (original source here: public final class JsonPointer extends TreePointer<JsonNode> { /** * The empty JSON Pointer */ private static final JsonPointer EMPTY = new JsonPointer(ImmutableList.<TokenResolver<JsonNode>>of()); /** * Return an empty JSON Pointer * * @return an empty, statically allocated JSON Pointer */ public static JsonPointer empty() { return EMPTY; } //... /** * Return the immediate parent of this JSON Pointer * * <p>The parent of the empty pointer is itself.</p> * * @return a new JSON Pointer representing the parent of the current one */ public JsonPointer parent() { final int size = tokenResolvers.size(); return size <= 1 ? EMPTY : new JsonPointer(tokenResolvers.subList(0, size - 1)); } // ... } As mentioned in the subject, the problem I have here is with JsonPointer's .parent() method. In fact, the logic behind this method applies to TreeNode all the same, and therefore to its future implementations. Except that I have to use a constructor, and of course such a constructor is implementation dependent :/ Is there a way to make that .parent() method available to each and every implementation of TreeNode or is it just a pipe dream?
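    One way the .parent() logic is commonly factored into a base class is to give the abstract class an abstract "build one of yourself from a resolver list" factory method plus a self-referencing type parameter. The following is a sketch of that pattern in C# (the question is Java, but the idea translates directly; the element type is simplified to string and all names here are my own, not the library's):

      using System.Collections.Generic;
      using System.Linq;

      // TSelf lets the base class return the concrete pointer type from Parent().
      public abstract class TreePointerBase<TSelf> where TSelf : TreePointerBase<TSelf>
      {
          protected readonly IReadOnlyList<string> tokenResolvers; // element type simplified for the sketch

          protected TreePointerBase(IReadOnlyList<string> tokenResolvers)
          {
              this.tokenResolvers = tokenResolvers.ToList();
          }

          // Subclasses say how to construct "one of themselves" from a resolver list.
          protected abstract TSelf NewPointer(IReadOnlyList<string> resolvers);

          // Implemented once, here, for every implementation.
          public TSelf Parent()
          {
              int size = tokenResolvers.Count;
              return size <= 1
                  ? NewPointer(new List<string>())                      // or a cached empty instance
                  : NewPointer(tokenResolvers.Take(size - 1).ToList());
          }
      }

      public sealed class JsonPointerSketch : TreePointerBase<JsonPointerSketch>
      {
          public JsonPointerSketch(IReadOnlyList<string> resolvers) : base(resolvers) { }

          protected override JsonPointerSketch NewPointer(IReadOnlyList<string> resolvers)
          {
              return new JsonPointerSketch(resolvers);
          }
      }

    In Java the same shape is an abstract protected factory method (or a constructor reference handed to the base class); the trade-off is that the EMPTY singleton moves behind the factory method rather than being referenced directly.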

    Read the article

  • fill array with binary numbers

    - by davit-datuashvili
    Hi - first of all, this is not homework! My question is from the book Algorithms in C++, third edition, by Robert Sedgewick. We are given a two-dimensional array of size n by 2^n, and we should fill it with the binary numbers of exactly n bits. For example, for n=5 the rows would be 00001 00010 00011 00100 00101 00110 00111 and so on. How do I put this sequence of bit patterns into the array? Please help me.
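    A minimal sketch of one way to do it (written in C# since the idea is language-independent; the array shape and names below are my own assumptions): row i simply holds the bits of the integer i, most significant bit first.

      using System;

      class BinaryTable
      {
          static void Main()
          {
              int n = 5;
              int rows = 1 << n;                // 2^n rows
              int[,] table = new int[rows, n];  // each row holds n bits

              for (int i = 0; i < rows; i++)
                  for (int bit = 0; bit < n; bit++)
                      // take bit (n - 1 - bit) of i so the leftmost column is the most significant bit
                      table[i, bit] = (i >> (n - 1 - bit)) & 1;

              // print the first few rows as a sanity check: 00000, 00001, 00010, ...
              for (int i = 0; i < 8; i++)
              {
                  for (int bit = 0; bit < n; bit++) Console.Write(table[i, bit]);
                  Console.WriteLine();
              }
          }
      }

    Start the outer loop at 1 instead of 0 if the all-zero row is not wanted, as in the 00001, 00010, ... example above.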

    Read the article

  • SQL - Add up all row values of one column in a single table

    - by ThE_-_BliZZarD
    Hello everybody, I've got a question regarding a SQL SELECT query. The table contains several columns, one of which is an integer column called "size". The task I'm trying to perform is to query the table for the sum of that column's values over all rows, or, to be more exact, to get an artificial column in my ResultSet called "overallSize" which contains the sum of all "size" values in the table. Preferably it would also be possible to use a WHERE clause so that only certain values are added ("WHERE bla = 5" or something similar). The DB engine is HSQLDB (HyperSQL), which is compliant with SQL:2008. Thank you in advance :)

    Read the article

  • Search inside dynamic array in python

    - by user2091683
    I want to write code that loops over an array whose size is set by the user, which means the size isn't constant. For example, given A=[1,2,3,4,5] I want the output to be like this: [1],[2],[3],[4],[5] [1,2],[1,3],[1,4],[1,5] [2,3],[2,4],[2,5] [3,4],[3,5] [4,5] [1,2,3],[1,2,4],[1,2,5] [1,3,4],[1,3,5] and so on [1,2,3,4],[1,2,3,5] [2,3,4,5] [1,2,3,4,5] - in other words, every combination of the elements, grouped by combination size. Can you help me implement this code?
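    The question is about Python, where itertools.combinations(A, k) produces exactly these groups for each k; for illustration, here is the same recursive idea as a sketch in C# (all names are my own):

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class Combinations
      {
          // All k-element combinations of items (order preserved), choosing from index 'start' onward.
          static IEnumerable<List<int>> Choose(int[] items, int k, int start = 0)
          {
              if (k == 0) { yield return new List<int>(); yield break; }
              for (int i = start; i <= items.Length - k; i++)
                  foreach (var rest in Choose(items, k - 1, i + 1))
                  {
                      rest.Insert(0, items[i]); // prepend the element chosen at this level
                      yield return rest;
                  }
          }

          static void Main()
          {
              int[] a = { 1, 2, 3, 4, 5 };
              for (int k = 1; k <= a.Length; k++)
                  Console.WriteLine(string.Join(",", Choose(a, k)
                      .Select(c => "[" + string.Join(",", c) + "]")));
          }
      }

    Each line of output lists all combinations of one size, matching the grouping shown in the question.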

    Read the article

  • Indexing on only part of a field in MongoDB

    - by Rob Hoare
    Is there a way to create an index on only part of a field in MongoDB, for example on the first 10 characters? I couldn't find it documented (or asked about on here). The MySQL equivalent would be CREATE INDEX part_of_name ON customer (name(10));. Reason: I have a collection with a single field that varies in length from a few characters up to over 1000 characters, average 50 characters. As there are a hundred million or so documents it's going to be hard to fit the full index in memory (testing with 8% of the data the index is already 400MB, according to stats). Indexing just the first part of the field would reduce the index size by about 75%. In most cases the search term is quite short; it's not a full-text search. A work-around would be to add a second field of 10 (lowercased) characters for each item, index that, then add logic to filter the results if the search term is over ten characters (and that extra field is probably needed anyway for case-insensitive searches, unless anybody has a better way). Seems like an ugly way to do it though. [added later] I tried adding the second field, containing the first 12 characters from the main field, lowercased. It wasn't a big success. Previously, the average object size was 50 bytes, but I forgot that includes the _id and other overheads, so my main field length (there was only one) averaged nearer to 30 bytes than 50. Then, the second field index contains the _id and other overheads. Net result (for my 8% sample) is the index on the main field is 415MB and on the 12 byte field is 330MB - only a 20% saving in space, not worthwhile. I could duplicate the entire field (to work around the case insensitive search problem) but realistically it looks like I should reconsider whether MongoDB is the right tool for the job (or just buy more memory and use twice as much disk space). [added even later] This is a typical document, with the source field, and the short lowercased field: { "_id" : ObjectId("505d0e89f56588f20f000041"), "q" : "Continental Airlines", "f" : "continental " } Indexes: db.test.ensureIndex({q:1}); db.test.ensureIndex({f:1}); The "f" index, working on a shorter field, is 80% of the size of the "q" index. I didn't mean to imply I included the _id in the index, just that it needs to use that somewhere to show where the index will point to, so it's an overhead that probably helps explain why a shorter key makes so little difference. Access to the index will be essentially random, no part of it is more likely to be accessed than any other. Total index size for the full file will likely be 5GB, so it's not extreme for that one index. Adding some other fields for other search cases, and their associated indexes, and copies of data for lower case, does start to add up, which is why I started looking into a more concise index.

    Read the article

  • Best way to add/change 1 GET value while keeping others?

    - by John Isaacks
    How can I make a link that just adds or changes one GET var while maintaining all the others? I have a page that is created using different GET vars, so it will be like mypage.php?color=red&size=7&brand=some%20brand. I want to have a link that sets page=2 or size=8. What's the easiest way to have a link do that without resetting all the other vars? I hope that makes sense; let me know if I need to explain anything further.

    Read the article

  • % _ in search form displays all results

    - by fusion
    If the search form is blank, it should display an error telling the user to enter something, and it should only show results that contain the keywords the user typed into the search textbox. However, if the user enters %, _ or +, it displays all results (% and _ are LIKE wildcards, and mysql_real_escape_string does not escape them). How do I display an error, or escape these characters, when the user enters them? My search PHP code:

      $search_result = "";
      $search_result = $_GET["q"];
      $search_result = trim($search_result);
      if ($search_result == "") {
          echo "<p>Search Error</p><p>Please enter a search...</p>";
          exit();
      }
      $result = mysql_query('SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes WHERE cQuotes LIKE "%' . mysql_real_escape_string($search_result) . '%" ORDER BY idQuotes DESC', $conn) or die ('Error: ' . mysql_error());
      function h($s) { echo htmlspecialchars($s, ENT_QUOTES); }
      ?>
      <div class="caption">Search Results</div>
      <div class="center_div">
      <table>
      <?php while ($row = mysql_fetch_array($result)) { ?>
          <tr>
              <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td>
              <td style="font-size:16px;"><?php h($row['cQuotes']); ?></td>
              <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td>
              <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td>
          </tr>
      <?php } ?>
      </table>
      </div>

    Read the article

  • SSAS processing error: Client unable to establish connection; 08001; Encryption not supported on the client.; 08001

    - by Kevin Shyr
    After getting the cube to successfully deploy and process on Friday, I was baffled on Monday that the newly added dimension caused the cube processing to break.  I then followed my first instinct, discarded all my changes and reverted back to Friday's version, and had no luck.  The error message (attached below) did not help, as I was looking for some kind of SQL service error.  After examining the Windows server log and the SQL Server log, I just couldn't see anything wrong.  After swearing for some time, and after going off and working on something else for a while, I came back to the solution and looked at the data source.  Even though I know I have never changed the provider (the default setup gave me SQL Native Client), I decided to change it and give OLE DB a try.  This simple change allowed my cube to process successfully again.  While I don't understand why the same settings that worked last week don't work this week, I don't have all the information to say with certainty that nothing has changed in the environment (firewall changes, server updates, etc.).

    SSAS processing error:

      <Batch>
        <Parallel>
          <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
            <Object>
              <DatabaseID>DWH Sales Facts</DatabaseID>
              <CubeID>DWH Sales Facts</CubeID>
            </Object>
            <Type>ProcessFull</Type>
            <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
          </Process>
        </Parallel>
      </Batch>

    Processing Dimension 'Date' completed.

    Errors and Warnings from Response:
    OLE DB error: OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
    Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'DWH Sales Facts', Name of 'DWH Sales Facts'.
    Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Currency', Name of 'Currency' was being processed.
    Errors in the OLAP storage engine: An error occurred while the 'Currency Dim ID' attribute of the 'Currency' dimension from the 'DWH Sales Facts' database was being processed.
    Internal error: The operation terminated unsuccessfully.
    Server: The operation has been cancelled.

    Read the article

  • Simple MSBuild Configuration: Updating Assemblies With A Version Number

    - by srkirkland
    When distributing a library you often run up against versioning problems, one facet of which is simply determining which version of that library your client is running.  Of course, each project in your solution has an AssemblyInfo.cs file which provides, among other things, the ability to set the Assembly name and version number.  Unfortunately, setting the assembly version here would require not only changing the version manually for each build (depending on your schedule), but keeping it in sync across all projects.  There are many ways to solve this versioning problem, and in this blog post I'm going to try to explain what I think is the easiest and most flexible solution.  I will walk you through using MSBuild to create a simple build script, and I'll even show how to (optionally) integrate with a TeamCity build server.  All of the code from this post can be found at https://github.com/srkirkland/BuildVersion.

    Create CommonAssemblyInfo.cs

    The first step is to create a common location for the repeated assembly info that is spread across all of your projects.  Create a new solution-level file (I usually create a Build/ folder in the solution root, but anywhere reachable by all your projects will do) called CommonAssemblyInfo.cs.  In here you can put any information common to all your assemblies, including the version number.  An example CommonAssemblyInfo.cs is as follows:

      using System.Reflection;
      using System.Resources;
      using System.Runtime.InteropServices;

      [assembly: AssemblyCompany("University of California, Davis")]
      [assembly: AssemblyProduct("BuildVersionTest")]
      [assembly: AssemblyCopyright("Scott Kirkland & UC Regents")]
      [assembly: AssemblyConfiguration("")]
      [assembly: AssemblyTrademark("")]

      [assembly: ComVisible(false)]

      [assembly: AssemblyVersion("1.2.3.4")] //Will be replaced

      [assembly: NeutralResourcesLanguage("en-US")]

    Cleanup AssemblyInfo.cs & Link CommonAssemblyInfo.cs

    For each of your projects, you'll want to clean up your assembly info to contain only information that is unique to that assembly – everything else will go in the CommonAssemblyInfo.cs file.  For most of my projects, that just means setting the AssemblyTitle, though you may feel AssemblyDescription is warranted.  An example AssemblyInfo.cs file is as follows:

      using System.Reflection;

      [assembly: AssemblyTitle("BuildVersionTest")]

    Next, you need to "link" the CommonAssemblyInfo.cs file into your projects right beside your newly lean AssemblyInfo.cs file.  To do this, right click on your project and choose Add | Existing Item from the context menu.  Navigate to your CommonAssemblyInfo.cs file but instead of clicking Add, click the little down-arrow next to Add and choose "Add as Link."  You should see a little link graphic on the file icon.  We've actually reduced complexity a lot already, because if you build, all of your assemblies will have the same common info, including the product name and our static (fake) assembly version.  Let's take this one step further and introduce a build script.

    Create an MSBuild file

    What we want from the build script (for now) is basically just to have the common assembly version number changed via a parameter (eventually to be passed in by the build server) and then for the project to build.  Also we'd like the flexibility to define which build configuration to use (debug, release, etc.).  In order to find/replace the version number, we are going to use a regular expression to find and replace the text within your CommonAssemblyInfo.cs file.  There are many other ways to do this using community build task add-ins, but since we want to keep it simple let's just define the regular expression task manually in a new file, Build.tasks (this example is taken from the NuGet build.tasks file).

      <?xml version="1.0" encoding="utf-8"?>
      <Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <UsingTask TaskName="RegexTransform" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
          <ParameterGroup>
            <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]" />
          </ParameterGroup>
          <Task>
            <Using Namespace="System.IO" />
            <Using Namespace="System.Text.RegularExpressions" />
            <Using Namespace="Microsoft.Build.Framework" />
            <Code Type="Fragment" Language="cs">
              <![CDATA[
              foreach(ITaskItem item in Items) {
                string fileName = item.GetMetadata("FullPath");
                string find = item.GetMetadata("Find");
                string replaceWith = item.GetMetadata("ReplaceWith");
                if(!File.Exists(fileName)) {
                  Log.LogError(null, null, null, null, 0, 0, 0, 0, String.Format("Could not find version file: {0}", fileName), new object[0]);
                }
                string content = File.ReadAllText(fileName);
                File.WriteAllText(fileName, Regex.Replace(content, find, replaceWith));
              }
              ]]>
            </Code>
          </Task>
        </UsingTask>
      </Project>

    If you glance at the code, you'll see it's really just doing a Regex.Replace() on a given file, which is exactly what we need.  Now we are ready to write our build file, called (by convention) Build.proj.

      <?xml version="1.0" encoding="utf-8"?>
      <Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Import Project="$(MSBuildProjectDirectory)\Build.tasks" />
        <PropertyGroup>
          <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
          <SolutionRoot>$(MSBuildProjectDirectory)</SolutionRoot>
        </PropertyGroup>

        <ItemGroup>
          <RegexTransform Include="$(SolutionRoot)\CommonAssemblyInfo.cs">
            <Find>(?&lt;major&gt;\d+)\.(?&lt;minor&gt;\d+)\.\d+\.(?&lt;revision&gt;\d+)</Find>
            <ReplaceWith>$(BUILD_NUMBER)</ReplaceWith>
          </RegexTransform>
        </ItemGroup>

        <Target Name="Go" DependsOnTargets="UpdateAssemblyVersion; Build">
        </Target>

        <Target Name="UpdateAssemblyVersion" Condition="'$(BUILD_NUMBER)' != ''">
          <RegexTransform Items="@(RegexTransform)" />
        </Target>

        <Target Name="Build">
          <MSBuild Projects="$(SolutionRoot)\BuildVersionTest.sln" Targets="Build" />
        </Target>
      </Project>

    Reviewing this MSBuild file, we see that by default the "Go" target will be called, which in turn depends on "UpdateAssemblyVersion" and then "Build."  We go ahead and import the Build.tasks file and then set up some handy properties for setting the build configuration and solution root (in this case, my build files are in the solution root, but we might want to create a Build/ directory later).  The rest of the file flows logically: we set up the RegexTransform to match version numbers such as <major>.<minor>.1.<revision> (1.2.3.4 in our example) and replace them with a $(BUILD_NUMBER) parameter which will be supplied externally.  The first target, "UpdateAssemblyVersion", just runs the RegexTransform, and the second target, "Build", just runs the default MSBuild on our solution.

    Testing the MSBuild file locally

    Now we have a build file which can replace assembly version numbers and build, so let's set up a quick batch file to be able to build locally.  To do this you simply create a file called Build.cmd and have it call MSBuild on your Build.proj file.  I've added a bit more flexibility so you can specify the build configuration and version number, which makes your Build.cmd look as follows:

      set config=%1
      if "%config%" == "" ( set config=debug )
      set version=%2
      if "%version%" == "" ( set version=2.3.4.5 )
      %WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild Build.proj /p:Configuration="%config%" /p:build_number="%version%"

    Now if you click on the Build.cmd file, you will get a default debug build using the version 2.3.4.5.  Let's run it in a command window with the parameters set for a release build version 2.0.1.453.  Excellent!  We can now run one simple command and govern the build configuration and version number of our entire solution.  Each DLL produced will have the same version number, making determining which version of a library you are running very simple and accurate.

    Configure the build server (TeamCity)

    Of course you are not really going to want to run a build command manually every time, and typing in incrementing version numbers will also not be ideal.  A good solution is to have a computer (or set of computers) act as a build server and build your code for you, providing you a consistent environment, excellent reporting, and much more.  One of the most popular build servers is JetBrains' TeamCity, and this last section will show you the few configuration parameters to use when setting up a build using your MSBuild file created earlier.  If you are using a different build server, the same principles should apply.

    First, when setting up the project you want to specify the "Build Number Format," often given in the form <major>.<minor>.<revision>.<build>.  In this case you will set major/minor manually, and optionally revision (or you can use your VCS revision number with %build.vcs.number%), and then build using the {0} wildcard.  Thus your build number format might look like this: 2.0.1.{0}.  During each build, this value will be created and passed into the $BUILD_NUMBER variable of our Build.proj file, which then uses it to decorate your assemblies with the proper version.

    After setting up the build number, you must choose MSBuild as the Build Runner, then provide a path to your build file (Build.proj).  After specifying your MSBuild Version (equivalent to your .NET Framework Version), you have the option to specify targets (the default being "Go") and additional MSBuild parameters.  The one parameter that is often useful is manually setting the configuration property (/p:Configuration="Release") if you want something other than the default (which is Debug in our example).  Your resulting configuration will look something like this: [Under General Settings] [Build Runner Settings]

    Now every time your build is run, a newly incremented build version number will be generated and passed to MSBuild, which will then version your assemblies and build your solution.

    A Quick Review

    Our goal was to version our output assemblies in an automated way, and we accomplished it by performing a few quick steps:
    1. Move the common assembly information, including version, into a linked CommonAssemblyInfo.cs file.
    2. Create a simple MSBuild script to replace the common assembly version number and build your solution.
    3. Direct your build server to use the created MSBuild script.
    That's really all there is to it.  You can find all of the code from this post at https://github.com/srkirkland/BuildVersion.  Enjoy!

    Read the article

  • Introducing Data Annotations Extensions

    - by srkirkland
    Validation of user input is integral to building a modern web application, and ASP.NET MVC offers us a way to enforce business rules on both the client and server using Model Validation.  The recent release of ASP.NET MVC 3 has improved these offerings on the client side by introducing an unobtrusive validation library built on top of jquery.validation.  Out of the box MVC comes with support for Data Annotations (that is, System.ComponentModel.DataAnnotations) and can be extended to support other frameworks.  Data Annotations validation is becoming more popular and is being baked in to many other Microsoft offerings, including Entity Framework, though with MVC it only contains four validators: Range, Required, StringLength and RegularExpression.  The Data Annotations Extensions project attempts to augment these validators with additional attributes while maintaining the clean integration Data Annotations provides.

    A Quick Word About Data Annotations Extensions

    The Data Annotations Extensions project can be found at http://dataannotationsextensions.org/, and currently provides 11 additional validation attributes (ex: Email, EqualTo, Min/Max) on top of Data Annotations' original 4.  You can find a current list of the validation attributes on the aforementioned website.  The core library provides server-side validation attributes that can be used in any .NET 4.0 project (no MVC dependency).  There is also an easily pluggable client-side validation library which can be used in ASP.NET MVC 3 projects using unobtrusive jquery validation (only the MVC 3 included javascript files are required).

    On to the Preview

    Let's say you had the following "Customer" domain model (or view model, depending on your project structure) in an MVC 3 project:

      public class Customer
      {
          public string Email { get; set; }
          public int Age { get; set; }
          public string ProfilePictureLocation { get; set; }
      }

    When it comes time to create/edit this Customer, you will probably have a CustomerController and a simple form that just uses one of the Html.EditorFor() methods that the ASP.NET MVC tooling generates for you (or you can write yourself).  With no validation, the customer can enter nonsense for an email address, and can even report their age as a negative number!  With the built-in Data Annotations validation, I could do a bit better by adding a Range to the age, adding a RegularExpression for email (yuck!), and adding some required attributes.  However, I'd still be able to report my age as 10.75 years old, and my profile picture could still be any string.  Let's use Data Annotations along with this project, Data Annotations Extensions, and see what we can get:

      public class Customer
      {
          [Email]
          [Required]
          public string Email { get; set; }

          [Integer]
          [Min(1, ErrorMessage="Unless you are benjamin button you are lying.")]
          [Required]
          public int Age { get; set; }

          [FileExtensions("png|jpg|jpeg|gif")]
          public string ProfilePictureLocation { get; set; }
      }

    Now let's try to put in some invalid values and see what happens: that is very nice validation, all done on the client side (it will also be validated on the server).  Also, the Customer class validation attributes are very easy to read and understand.  Another bonus: since Data Annotations Extensions can integrate with MVC 3's unobtrusive validation, no additional scripts are required!  Now that we've seen our target, let's take a look at how to get there within a new MVC 3 project.

    Adding Data Annotations Extensions To Your Project

    First we will File->New Project and create an ASP.NET MVC 3 project.  I am going to use Razor for these examples, but any view engine can be used in practice.  Now go into the NuGet Extension Manager (right click on References and select Add Library Package Reference) and search for "DataAnnotationsExtensions."  You should see two packages: the first package is for server-side validation scenarios, but since we are using MVC 3 and would like comprehensive server and client validation support, click on the DataAnnotationsExtensions.MVC3 project and then click Install.  This will install the Data Annotations Extensions server and client validation DLLs along with David Ebbo's web activator (which enables the validation attributes to be registered with MVC 3).

    Now that Data Annotations Extensions is installed you have all you need to start doing advanced model validation.  If you are already using Data Annotations in your project, just making use of the additional validation attributes will provide client and server validation automatically.  However, assuming you are starting with a blank project I'll walk you through setting up a controller and model to test with.

    Creating Your Model

    In the Models folder, create a new User.cs file with a User class that you can use as a model.  To start with, I'll use the following class:

      public class User
      {
          public string Email { get; set; }
          public string Password { get; set; }
          public string PasswordConfirm { get; set; }
          public string HomePage { get; set; }
          public int Age { get; set; }
      }

    Next, create a simple controller with at least a Create method, and then a matching Create view (note, you can do all of this via the MVC built-in tooling).  Your files will look something like this:

    UserController.cs:

      public class UserController : Controller
      {
          public ActionResult Create()
          {
              return View(new User());
          }

          [HttpPost]
          public ActionResult Create(User user)
          {
              if (!ModelState.IsValid)
              {
                  return View(user);
              }

              return Content("User valid!");
          }
      }

    Create.cshtml:

      @model NuGetValidationTester.Models.User

      @{
          ViewBag.Title = "Create";
      }

      <h2>Create</h2>

      <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
      <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>

      @using (Html.BeginForm())
      {
          @Html.ValidationSummary(true)
          <fieldset>
              <legend>User</legend>
              @Html.EditorForModel()
              <p>
                  <input type="submit" value="Create" />
              </p>
          </fieldset>
      }

    In the Create.cshtml view, note that we are referencing jquery validation and jquery unobtrusive (jquery itself is referenced in the layout page).  These MVC 3 included scripts are the only ones you need to enjoy both the basic Data Annotations validation as well as the validation additions available in Data Annotations Extensions.  These references are added by default when you use the MVC 3 "Add View" dialog on a modification template type.

    Now when we go to /User/Create we should see a form for editing a User.  Since we haven't yet added any validation attributes, this form is valid as shown (including no password, no email and an age of 0).  With the built-in Data Annotations attributes we can make some of the fields required, and we could use a range validator of maybe 1 to 110 on Age (of course we don't want to leave out supercentenarians), but let's go further and validate our input comprehensively using Data Annotations Extensions.  The new and improved User.cs model class:

      public class User
      {
          [Required]
          [Email]
          public string Email { get; set; }

          [Required]
          public string Password { get; set; }

          [Required]
          [EqualTo("Password")]
          public string PasswordConfirm { get; set; }

          [Url]
          public string HomePage { get; set; }

          [Integer]
          [Min(1)]
          public int Age { get; set; }
      }

    Now let's re-run our form and try to use some invalid values: all of the validation errors occur on the client, without ever even hitting submit.  The validation is also checked on the server, which is a good practice since client validation is easily bypassed.  That's all you need to do to start a new project and include Data Annotations Extensions, and of course you can integrate it into an existing project just as easily.

    Nitpickers Corner

    ASP.NET MVC 3 Futures defines four new data annotations attributes which this project has as well: CreditCard, Email, Url and EqualTo.  Unfortunately, referencing MVC 3 Futures necessitates taking a dependency on MVC 3 in your model layer, which may be inadvisable in a multi-tiered project.  Data Annotations Extensions keeps the server and client side libraries separate, so using the project's validation attributes doesn't require you to take any additional dependencies in your model layer, while still allowing for the rich client validation experience if you are using MVC 3.

    Custom Error Message and Globalization: since the Data Annotations Extensions are built on top of Data Annotations, you have the ability to define your own static error messages and even to use resource files for very customizable error messages.

    Available Validators: please see the project site at http://dataannotationsextensions.org/ for an up-to-date list of the new validators included in this project.  As of this post, the following validators are available: CreditCard, Date, Digits, Email, EqualTo, FileExtensions, Integer, Max, Min, Numeric, Url.

    Conclusion

    Hopefully I've illustrated how easy it is to add server and client validation to your MVC 3 projects, and how easily you can extend the available validation options to meet real-world needs.  The Data Annotations Extensions project is fully open source under the BSD license.  Any feedback would be greatly appreciated.  More information than you require, along with links to the source code, is available at http://dataannotationsextensions.org/.  Enjoy!

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence at the BSS layer, product co-creation and increasing audit and security concerns for service providers. This document discusses how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the telecommunications industry.

    1.0 Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe and the ability to change price plans quickly without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based product management solutions developed on industry-wide standards. The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0 An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements. In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., from siloed networks to converged networks. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0 Transformation of Next Generation Product Management

    Companies must focus on their product management transformation journey in the areas of:
    · Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    · Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system
    · Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    · Management of effective operational separation to comply with regulatory bodies
    · Reuse of existing designs and addition of relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler towards product management transformation and rapid product launch.

    4.0 PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    · Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    · PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    · They maintain relationships/links between different elements of the entire product definition
    · Telco PLM packages are specialized in next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    · They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional
    · A new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    · Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules to be managed and published. Complete product configuration validation will be done in the enterprise/central product catalogue, and the final configuration will be sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) will be centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) will be created in the respective downstream OSS systems. For example, basic product definitions such as attributes, product hierarchy and basic price plans will be created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master that advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario – Before EPC deployment

    A leading global telecommunications service provider wanted to launch a quad play and triple play service offering in the shortest possible lead time. The service provider was offering broadband and VoIP services to customers. The company wanted to reuse a majority of the broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    · Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    · Product managers spent a lot of time performing tasks involving duplication or re-keying of data. Manual effort caused errors, cost and time over-runs.
    · There was no effective product and price data governance mechanism. Price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue.
    · Product data had re-usability issues and was not in a structured format. This resulted in uncontrolled product portfolio creation and product management issues.
    · The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch.
    · Designers were constrained by existing legacy product management solutions when modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling.

    5.1 Customer Scenario - After EPC deployment

    Figure 9: SOA-based end-to-end EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for the new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data such as routers and set-top boxes was loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enable the transformation of product management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.

    References:

    1. Preston G. Smith and Donald G. Reinertsen, "Developing Products in Half the Time", Van Nostrand Reinhold.
    2. John G. Innes, "Achieving Successful Product Change", Pitman Publishing.
D T Pham and R M Setchi (16th Jan, 2001) "Authoring environment for documentation development" University of Wales Cardiff, U.K., Proceedings on Institution of Mechanical Engineers, Vol. 215, Part B.   4.     Oracle Product Hub for Communications:   http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html  

    Read the article

  • SharePoint OCR image files indexing

Introduction

This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint, but can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.

Background

One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was then responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.

OCR

The first search I fired was for Open Source OCR products. Pretty quickly, I narrowed it down to TESSERACT (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brain child of HP, which worked on it from 1985 to 1995. Then it was moved to Open Source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores one of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only for BMP files.

Image file conversion

So now that I have an OCR that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful Open Source utility that can convert any file into an image. It did work out of the box, converting any TIFF files into bitmaps, but to get PDF files converted, it requires Ghostscript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).

Dealing with text PDFs

With that utility installed, I was cooking - I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to somehow treat PDF files containing text differently - after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for Open Source! With these guys, I can pull text out of a PDF in an eye blink. However, I would get nothing for pure image PDFs, but I already have a solution for that!

Batch process

It took another 15 minutes to set up a batch script to automate the process:

Check the file extension.
If the file is a PDF file, try to extract text out of it:
   - if there is more than a certain amount of text in the file - done!
   - if there is no text, convert the first page into a bitmap and run OCR on the bitmap.
For any other file type, convert the file into a bitmap and run OCR on the bitmap.

Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name + the '.txt' extension.

    Read the article

  • Auto-cancel reason not found (6, 13906)

    - by Rajesh Sharma
There are many errors in the application which are never raised, thanks to the appropriate application configuration done at implementation time by the solution architects. So typically, as an application end user, you would never stumble upon such errors. But what if the application administrator inadvertently changes the configuration/setup in the development, test, QA, or production environment? This is when you, as an end user, are introduced to a brand-new error for which you may have no clue what it means, and no access/privilege to rectify it.

In this post we'll focus on one such error: '6, 13906 - Auto-cancel reason not found'.

You get this error if you have not defined a Bill (Segment) Cancel Reason code (Admin Menu, B, Bill Cancel Reason) with a System Default value of Turn off auto-cancel.

Consider a scenario where you are about to final bill an 'Account' for which the bill period's cut-off date you selected falls on or after the Service Agreement's (SA) end/stop date (basically, the SA is Stopped with a date earlier than it was previously billed to). And, for the same 'Account', either:

- Bill segments exist that end after the SA's end date, OR
- Non-closing bill segments exist that end on the SA's end date (OR closing bill segments that do not end on the SA's end date or do not exist at all - remember, the closing/final bill segment is generated if the SA is in Stopped status).

CC&B detects such a scenario and attempts to cancel all such violating bill segments automatically, but NOT if you are generating the bill online. If online, the system assumes that you know what you are doing, and prompts you with error 2, 13716 - Bill segments that violate the SA (%1) End Date (%2) exist, so that you can take the necessary action. In batch, the system automatically cancels these kinds of bill segments.

Since this happens in the background, you have to define within the application which Bill (Segment) Cancel Reason code, identified as the System Default Turn off auto-cancel, should be used by the process when it attempts to cancel any such violating bill segments. (You already know that you cannot cancel a bill segment without giving a reason for cancellation.)

So what exactly happens during batch billing?

The bill segment generation routine first determines the billing eligibility of the service agreement being billed. One of the billing eligibility criteria is to check the SA's previous bill segments for end dates greater than the current cut-off date/end date. Technically, the routine retrieves a count of such violating bill segments:

SELECT COUNT (*)
  FROM CI_BSEG
 WHERE SA_ID = :SA-ID
   AND BSEG_STAT_FLG = '50'            -- Frozen
   AND END_DT IS NOT NULL
   AND (END_DT > '03-JUN-2010'         -- Bill segment ends after SA's end date, OR
        OR (END_DT = '03-JUN-2010'
            AND CLOSING_BSEG_SW = 'N'))  -- Non-closing bill segment ending on SA's end date

If the count is greater than zero, the bill segment generation routine executes another program to auto-cancel such bill segments. The auto-cancel program retrieves the 'Bill Cancel Reason' code which is identified as Turn off auto-cancel. The retrieved cancel reason code is then placed on the bill segments that are being cancelled automatically.

During this process, if the routine fails to determine the bill cancel reason code having the System Default Turn off auto-cancel because it has not been configured, you get the bill exception 6, 13906 - Auto-cancel reason not found.

Also note that duplicate or multiple System Default codes identified as Turn off auto-cancel are not allowed; CC&B would complain with error 2, 54201. The duplicate validation/check is also performed within the auto-cancel routine, in case, say for test purposes, you executed a DML statement updating CI_BILL_CAN_RSN.BSCAN_SYS_DFLT_FLG with a value of 'T'.
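If you need to verify the configuration quickly, the check below is a minimal sketch against the cancel reason table. Only CI_BILL_CAN_RSN and BSCAN_SYS_DFLT_FLG appear in the text above; BILL_CAN_RSN_CD is an assumed column name for the cancel reason code itself.

-- Hedged sketch: list cancel reason codes flagged as the system default
-- 'Turn off auto-cancel'. Zero rows would explain error 6, 13906;
-- more than one row is the duplicate condition behind error 2, 54201.
-- BILL_CAN_RSN_CD is an assumed column name.
SELECT BILL_CAN_RSN_CD, BSCAN_SYS_DFLT_FLG
  FROM CI_BILL_CAN_RSN
 WHERE BSCAN_SYS_DFLT_FLG = 'T';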

    Read the article

  • When searching in Outlook, encountered this error "Instant Search encountered a problem while trying

    - by Imagineer
Sometimes when searching for a certain keyword, I get this error: "Instant Search encountered a problem while trying to display search results. Modifying your query may resolve this problem." I have enabled Outlook logging to determine what the error is, as suggested by someone in another forum, but I don't have a clue how to decipher it.

2010.05.11 09:38:10 <<<< Logging Started (level is LTF_TRACE) >>>>
2010.05.11 09:38:10 HELPER::Initialize called
2010.05.11 09:38:10 Initializing: Finding a Transport
2010.05.11 09:38:10 MAPI XP Call: XPProviderInit in EMSMDB.DLL, hr = 0x00000000
2010.05.11 09:38:10 MAPI XP Call: TransportLogon, hr = 0x8004011d
2010.05.11 09:38:10 MAPI XP Call: Shutdown, hr = 0x00000000
2010.05.11 09:38:10 MAPI XP Call: XPProviderInit in EMSMDB.DLL, hr = 0x00000000
2010.05.11 09:38:10 MAPI Status: (-- -- ---/--- -- ---)
2010.05.11 09:38:10 MAPI XP Call: TransportLogon, hr = 0x00000000
2010.05.11 09:38:10 Initializing: Found a transport, Error code = 0x00000000
2010.05.11 09:38:10 MAPI XP Call: AddressTypes, hr = 0x00000000, cAddrs = 4, cUids = 1
2010.05.11 09:38:10 MAPI XP Call: RegisterOptions, hr = 0x00000000, cOptions = 2
2010.05.11 09:38:10 MAPI Status: (IN -- ---/OUT -- ---)
2010.05.11 09:38:10 MAPI XP Call: TransportNotify(BEGIN_IN|BEGIN_OUT), hr = 0x00000000
2010.05.11 09:38:10 HELPER::Initialize done, Error code = 0x00000000
2010.05.11 09:38:10 HELPER::GetCapabilities called, Error code = 0x00000000
2010.05.11 09:38:10 Microsoft Exchange: Synch operation started (flags = 00000031)
2010.05.11 09:38:10 Microsoft Exchange: StartImport(flags = 00000000, max msg = ffffffff): full items
2010.05.11 09:38:10 Microsoft Exchange: UploadItems: 0 messages to send
2010.05.11 09:38:11 Starting the Spooling Cycle
2010.05.11 09:38:11 MAPI Status: (IN fl ---/OUT -- ---)
2010.05.11 09:38:11 MAPI XP Call: FlushQueues, hr = 0x00000000, ulFlushFlags = 0x0000001c
2010.05.11 09:38:11 MAPI XP Call: Poll, hr = 0x00000000, cPollCount = 855
2010.05.11 09:38:11 Progress: Receiving message (message 1 out of 856, size unknown)
2010.05.11 09:38:11 Downloading one message
2010.05.11 09:38:11 Transport tightly coupled with store, download is NOOP
2010.05.11 09:38:11 Downloading done, Error code = 0x8004010f
2010.05.11 09:38:11 MAPI Status: (IN -- ---/OUT -- ---)
2010.05.11 09:38:11 FINISHED MAPI TASK
2010.05.11 09:38:11 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000
2010.05.11 09:38:11 Finishing the Spooling Cycle, Error code = 0x00000000
2010.05.11 09:38:11 EXECUTING EndSession MAPI TASK
2010.05.11 09:38:11 Starting the Simplified Transfer Cycle
2010.05.11 09:38:11 MAPI XP Call: Poll, hr = 0x00000000, iMsgsReceived = 0, cPollCount = 855
2010.05.11 09:38:11 Progress: Receiving message (message 1 out of 856, size unknown)
2010.05.11 09:38:11 Downloading one message
2010.05.11 09:38:11 MAPI Status: (IN -- act/OUT -- ---)
2010.05.11 09:38:11 MAPI Status: (IN -- ---/OUT -- ---)
2010.05.11 09:38:11 Downloading done, Error code = 0x8004010f
2010.05.11 09:38:11 Finishing the Spooling Cycle, Error code = 0x00000000
2010.05.11 09:38:11 FINISHED MAPI TASK
2010.05.11 09:38:11 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000
2010.05.11 09:38:11 Microsoft Exchange: Synch operation completed
2010.05.11 10:08:15 Microsoft Exchange: Synch operation started (flags = 00000031)
2010.05.11 10:08:15 Microsoft Exchange: StartImport(flags = 00000000, max msg = ffffffff): full items
2010.05.11 10:08:15 Microsoft Exchange: UploadItems: 0 messages to send
2010.05.11 10:08:16 Starting the Spooling Cycle
2010.05.11 10:08:16 MAPI Status: (IN fl ---/OUT -- ---)
2010.05.11 10:08:16 MAPI XP Call: FlushQueues, hr = 0x00000000, ulFlushFlags = 0x0000001c
2010.05.11 10:08:16 MAPI XP Call: Poll, hr = 0x00000000, cPollCount = 858
2010.05.11 10:08:16 Progress: Receiving message (message 1 out of 859, size unknown)
2010.05.11 10:08:16 Downloading one message
2010.05.11 10:08:16 Transport tightly coupled with store, download is NOOP
2010.05.11 10:08:16 Downloading done, Error code = 0x8004010f
2010.05.11 10:08:16 MAPI Status: (IN -- ---/OUT -- ---)
2010.05.11 10:08:16 FINISHED MAPI TASK
2010.05.11 10:08:16 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000
2010.05.11 10:08:16 Finishing the Spooling Cycle, Error code = 0x00000000
2010.05.11 10:08:16 EXECUTING EndSession MAPI TASK
2010.05.11 10:08:16 Starting the Simplified Transfer Cycle
2010.05.11 10:08:16 MAPI XP Call: Poll, hr = 0x00000000, iMsgsReceived = 0, cPollCount = 858
2010.05.11 10:08:16 Progress: Receiving message (message 1 out of 859, size unknown)
2010.05.11 10:08:16 Downloading one message
2010.05.11 10:08:16 MAPI Status: (IN -- act/OUT -- ---)
2010.05.11 10:08:16 MAPI Status: (IN -- ---/OUT -- ---)
2010.05.11 10:08:16 Downloading done, Error code = 0x8004010f
2010.05.11 10:08:16 Finishing the Spooling Cycle, Error code = 0x00000000
2010.05.11 10:08:16 FINISHED MAPI TASK
2010.05.11 10:08:16 Microsoft Exchange: ReportStatus: RSF_COMPLETED, hr = 0x00000000
2010.05.11 10:08:16 Microsoft Exchange: Synch operation completed
2010.05.11 10:09:48 HELPER::Uninitialize called
2010.05.11 10:09:48 MAPI Status: (-- -- ---/--- -- ---)
2010.05.11 10:09:48 MAPI XP Call: TransportNotify(END_IN|END_OUT), hr = 0x00000000
2010.05.11 10:09:48 MAPI XP Call: TransportLogoff in EMSMDB.DLL, hr = 0x00000000
2010.05.11 10:09:48 MAPI XP Call: Shutdown, hr = 0x00000000
2010.05.11 10:09:48 Resource manager terminated

I'm running Outlook 2007 SP1 in a Citrix environment and should be running in Cache Mode. In my Outlook Tools > Options > Search Options, there is nothing under Indexing. Any help is greatly appreciated! Thank you.

    Read the article

  • How do I achieve lossless JPEG joining without truncation of partial MCUs?

    - by Karan
I am working on a project for which I need to join thousands of JPEG images losslessly (I'm not talking about the Lossless JPEG/JPEG 2000/JPEG-LS formats here). Aforementioned images have varying levels of chroma subsampling (1x1, 1x2, 2x1, 2x2), resulting in varying MCU sizes (8x8, 8x16, 16x8, 16x16 px). However, in any given set of images to be joined together, each image has identical characteristics. For now, let's assume I only have 2 images. Image #1 (I1) is 256x256px in size and #2 (I2) is 239x256px in size. 2x2 subsampling is used such that MCU size is 16x16px. I2 thus obviously has partial MCUs at the right edge, since its width is not evenly divisible by 16. (I've read that so-called 'partial' MCUs actually contain the data for a complete MCU, but the image dimensions instruct the renderer to only display the relevant pixels and ignore/hide the extra ones.) Looking around for tools that could help me accomplish this, I came across a modified version of JpegTran, that contains an experimental lossless crop 'n' drop (cut & paste) feature. All the other apps I encountered that support lossless JPEG editing seem to utilise IJG's (JpegTran) code, so this seemed to be the logical choice. Also, given the sheer number of images, I wanted something that could preferably be run from the command-line so that I could automate the process with a script. Unfortunately, while everything else worked fine, it seems JpegTran truncates the partial MCUs instead of retaining them. Thus in the example above, the final joined image contains all of I1, but only 224x256px of I2. Why 224? Because 239 = 14x16+15, which means there are 14 full MCUs along the width, and 1 partial MCU (just 1px short of the complete 16px). The last 15px is what is getting blanked, leading to a 495x256px image with 15px of blank (grey) pixels at the right edge. See images below (shame that imgur re-compresses them): [images: I1 (left) + I2 (right) = joined result] As you can clearly see, the red portion (15px) of I2 has been truncated by JpegTran. If the MCUs were 8px in width, the lost portion would have been the right-most 7px of I2. Similarly, joining I3 (256x239px) below I1 would cause the loss of 7 or 15px, depending on the MCU height of course: [images: I1 (top) + I3 (bottom) = joined result] If this is better suited to some other StackExchange (or even non-SE) site/forum where JPEG/image encoding experts hang out, do let me know. Can what I am attempting even be done, or is the so-called 'lossless' JPEG crop 'n' drop only valid for images with no partial MCUs? (Maybe that is why the feature is still in an "experimental state" more than a decade after being introduced...) Until I know for sure that it is impossible, I am not interested in suggestions for lossy joining. Avoiding any generational loss whatsoever is the sole reason why I'm breaking my head over this, else I'd have had this done and dusted ages ago. Also, I am not interested in suggestions related to switching image formats. I do not control the source of the images. If it can be done, how? Please keep in mind that any alternate apps suggested must ideally be capable of automation, given the requirements stated above. (But given how it's unlikely I'm even going to receive a useful answer given the constraints, I would be happy with any app suggestion just as long as it actually works. I can always look into an AutoIT/AHK script or something later to automate it.)
I understand that an odd-sized final image might cause issues, so I am fully prepared to accept any solution, even if it results in blank (preferably black) padding pixels to the right/bottom. What I mean is, I don't care if I1 + I2 is 496x256px (1px padding) or even 512x256px (17px padding) in size, as long as the final image contains all the actual image data from both source images, and the entire process is lossless. Obviously the lesser the padding (if any), the better, but at this point any solution will do. A Windows-based solution would be perfect, but a Linux-based one would be entirely acceptable.

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone, and that a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integrator (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes, Identity Management for managing users, Web Services Manager for managing web services, Business Intelligence Enterprise Edition for analytics, and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But, even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.

For example, a basic MDM solution will need:

·       a set of data access methods that support master data as a service, direct real-time access, and batch loads and extracts;
·       a data migration service for initial loads and periodic updates;
·       a metadata management capability for items such as business entity matrixed relationships and hierarchies;
·       a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
·       a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
·       a set of data quality functions that can manage structured and unstructured data;
·       a data quality interface to assist with preventing new errors from entering the system even when data entry is outside the MDM application itself;
·       a continuing data cleansing function to keep the data up to date;
·       an internal triggering mechanism to create and deploy change information to all connected systems;
·       a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history;
·       a flexible business rules engine for managing master data processes such as privacy and data movement;
·       a user interface to support casual users and data stewards;
·       a business intelligence structure to support profiling, compliance, and business performance indicators;
·       and an analytical foundation for directly analyzing master data.

Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.
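To make the "build" starting point above a little more concrete, here is a minimal, hypothetical sketch of an Oracle data model for a product master with source-system cross-references, the pattern behind the "source system management capability" in the list above. All table and column names are invented for the example and are not part of any Oracle product.

-- Hypothetical core master table: one golden record per product.
CREATE TABLE product_master (
  product_id     NUMBER        PRIMARY KEY,
  product_name   VARCHAR2(200) NOT NULL,
  product_type   VARCHAR2(50),
  status         VARCHAR2(20)  DEFAULT 'ACTIVE',
  last_update_dt DATE          DEFAULT SYSDATE
);

-- Hypothetical cross-reference table: maps each golden record to the
-- identifiers used by the contributing source systems (CRM, billing, ...).
CREATE TABLE product_source_xref (
  product_id    NUMBER        NOT NULL REFERENCES product_master (product_id),
  source_system VARCHAR2(30)  NOT NULL,
  source_id     VARCHAR2(100) NOT NULL,
  CONSTRAINT pk_product_source_xref PRIMARY KEY (source_system, source_id)
);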

    Read the article

  • To sample or not to sample...

    - by [email protected]
Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell). Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us answer these questions with a certain degree of confidence. Sampling focuses on how we collect data. In data mining, we focus on the use of data, that is, data that has already been collected. In some cases, we may have all the data (all purchases made by all customers); in others, the data may have been collected using sampling (voters, their demographics and candidate choice).

Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with choice of algorithm, algorithm settings, data quality, and need for further data preparation. Rather than waiting for a model to build on a large dataset only to find that the results don't meet expectations, start with the initial sample; once you are satisfied with its results, you can take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size.

Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing. Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value!
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
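To make the ideas above concrete before those posts, here is a minimal Oracle SQL sketch of a small exploratory sample and a roughly 60/40 build/test split. The customers table and cust_id column are assumptions for the example; the proportions are approximate, not exact.

-- Quick random sample (roughly 1% of rows) for initial experimentation.
SELECT * FROM customers SAMPLE (1);

-- Deterministic ~60/40 split using ORA_HASH: ORA_HASH(cust_id, 99)
-- assigns each row a bucket from 0 to 99.
CREATE TABLE build_data AS
  SELECT * FROM customers
   WHERE ORA_HASH(cust_id, 99) < 60;    -- about 60% of rows for model building

CREATE TABLE test_data AS
  SELECT * FROM customers
   WHERE ORA_HASH(cust_id, 99) >= 60;   -- remaining ~40% held aside for testing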

    Read the article

  • A temporary disagreement

    - by Tony Davis
Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables) but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers.

The problem with temp tables is that, because they are scoped either in the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface then, an ideal way to deal with issues related to tempdb memory hogging.

So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally, by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved. In the meantime, what is the right approach?
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.
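For readers who haven't used both constructs, a minimal T-SQL sketch of the two alternatives discussed above follows; dbo.Orders and its columns are hypothetical, and the point is the difference in scoping and statistics, not a recommendation either way.

-- Table variable: batch-scoped, cleaned up automatically when the batch ends,
-- but carries no distribution statistics for the optimizer.
DECLARE @RecentOrders TABLE (OrderID INT PRIMARY KEY, OrderDate DATETIME);
INSERT INTO @RecentOrders (OrderID, OrderDate)
SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate >= '20100101';

-- Local temporary table: lives in tempdb and has statistics, but persists
-- until it is dropped or the creating session/procedure goes away.
CREATE TABLE #RecentOrders (OrderID INT PRIMARY KEY, OrderDate DATETIME);
INSERT INTO #RecentOrders (OrderID, OrderDate)
SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate >= '20100101';
DROP TABLE #RecentOrders;  -- explicit cleanup avoids it lingering in tempdb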

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes. So one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU. Nope, it's SQL Server. I compared this to the rg-sql02 server: you can see that there is a more consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes.

I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Here are rg-sql01 and rg-sql02: of the two, clearly rg-sql01 has a lot of activity. Remember though, that's all this is a measure of, activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, where the number of requests drops as the CPU spikes? See, it's useful. Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server.

One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs then drops due to a data load or something along those lines. That's not the case here: those spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so then dropping, but the rg-sql02 PLE looks like it might be all over the map. Instead of continuing to look at this high-level data-gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week. And now we begin to see where we might have an issue. Memory on this system is getting flushed every 1/2 hour or so. I'm going to check another metric, scans: whoa!

I'm going back to the system real quick to look at some disk information again for rg-sql02, specifically the average disk queue length on the server and the transfers. Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes. I'm going back to the server overview for rg-sql02 to check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results. OK. I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads? Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor.
What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down it is) and then start seeing what's up with these queries. Part 1 of the Series Part 2 of the Series
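For anyone following along without SQL Monitor to hand, roughly equivalent raw numbers can be pulled straight from the performance counter DMV; a minimal sketch is below. The counter and object names are as commonly exposed by sys.dm_os_performance_counters, though object_name prefixes vary between default and named instances, and per-second counters such as Batch Requests/sec are cumulative, so you need two samples over an interval to get a rate.

-- Hedged sketch: current Page Life Expectancy and the cumulative
-- Batch Requests counter from SQL Server's performance counter DMV.
SELECT object_name, counter_name, cntr_value
  FROM sys.dm_os_performance_counters
 WHERE (counter_name = 'Page life expectancy' AND object_name LIKE '%Buffer Manager%')
    OR (counter_name = 'Batch Requests/sec'   AND object_name LIKE '%SQL Statistics%');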

    Read the article

< Previous Page | 242 243 244 245 246 247 248 249 250 251 252 253  | Next Page >