Search Results

Search found 2628 results on 106 pages for 'lexical analysis'.

Page 16 of 106

  • Balanced Search Tree Query, Asymptotic Analysis

    - by AGeek
    Hi, the situation is as follows: we have n numbers and we have to print them in sorted order. We have access to a balanced dictionary data structure, which supports the operations search, insert, delete, minimum and maximum, each in O(log n) time. We want to retrieve the numbers in sorted order in O(n log n) time using only insert and in-order traversal. The given answer is: Sort() { initialize(t); while (not EOF) { read(x); insert(x, t); } Traverse(t); } My question is this: if we read the elements in time "n" and then traverse the elements in "log n" (in-order traversal) time, then the total time for this algorithm is (n + log n), according to me. Please explain the time calculation for this algorithm: how does it sort the list in O(n log n) time? Thanks.
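    The gap in the reasoning above is where the log n belongs: the final in-order traversal is O(n) because it visits every node exactly once, but each of the n insertions into the balanced tree costs O(log n), so the insert loop alone is O(n log n) and the whole algorithm is O(n log n) + O(n) = O(n log n). A minimal C# sketch of the same idea, using SortedDictionary as the balanced dictionary (its value is used to count duplicates, since keys cannot repeat), with the costs annotated:

        using System.Collections.Generic;
        using System.IO;

        static class TreeSortSketch
        {
            // Reads integers (one per line) and prints them in sorted order.
            public static void Sort(TextReader input, TextWriter output)
            {
                // Balanced search tree; insert and lookup are O(log n).
                var tree = new SortedDictionary<int, int>();

                string line;
                while ((line = input.ReadLine()) != null)      // n iterations
                {
                    int x = int.Parse(line);
                    tree.TryGetValue(x, out int count);
                    tree[x] = count + 1;                       // insert: O(log n) each, O(n log n) total
                }

                foreach (var entry in tree)                    // in-order walk: O(n) total
                    for (int i = 0; i < entry.Value; i++)
                        output.WriteLine(entry.Key);
            }
        }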

    Read the article

  • Code Analysis Warning CA1004 with generic method

    - by Vaccano
    I have the following generic method:

        // Load an object from the disk
        public static T DeserializeObject<T>(String filename) where T : class
        {
            XmlSerializer xmlSerializer = new XmlSerializer(typeof(T));
            try
            {
                TextReader textReader = new StreamReader(filename);
                var result = (T)xmlSerializer.Deserialize(textReader);
                textReader.Close();
                return result;
            }
            catch (FileNotFoundException) { }
            return null;
        }

    When I compile I get the following warning: CA1004 : Microsoft.Design : Consider a design where 'MiscHelpers.DeserializeObject(string)' doesn't require explicit type parameter 'T' in any call to it. I have considered this and I don't know a way to do what it requests without limiting the types that can be deserialized. I freely admit that I might be missing an easy way to fix this. But if I am not, is my only recourse to suppress this warning? I have a clean project with no warnings or messages, and I would like to keep it that way. I guess what I am really asking is: why is this a warning? At best it seems like it should be a message, and even that seems a bit much. Either it can be fixed or it can't; if it can't, you are just stuck with the warning and have no recourse but to suppress it. Am I wrong?
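    For what it's worth, CA1004 fires because T appears only in the return type, so callers can never let the compiler infer it; for a deserializer keyed on a file name there is no obvious way around that short of changing the design (for example, returning object and casting at the call site). If the design is right as it is, a targeted suppression keeps the build clean without disabling the rule project-wide. A sketch of what that could look like (the rule-name string is the standard one for CA1004 but worth checking against your FxCop version, the justification text is just an example, and the using statement is an incidental tidy-up so the reader is closed even if Deserialize throws):

        using System.Diagnostics.CodeAnalysis;
        using System.IO;
        using System.Xml.Serialization;

        public static class MiscHelpers
        {
            // SuppressMessageAttribute is conditional on the CODE_ANALYSIS symbol,
            // so it is only emitted into builds that define it.
            [SuppressMessage("Microsoft.Design",
                "CA1004:GenericMethodsShouldProvideTypeParameter",
                Justification = "T cannot be inferred from a file name; callers must state the target type.")]
            public static T DeserializeObject<T>(string filename) where T : class
            {
                var xmlSerializer = new XmlSerializer(typeof(T));
                try
                {
                    using (TextReader textReader = new StreamReader(filename))
                        return (T)xmlSerializer.Deserialize(textReader);
                }
                catch (FileNotFoundException)
                {
                    return null;
                }
            }
        }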

    Read the article

  • How to create a conditional measure in SQL Server 2008 Analysis Services

    - by Jonathan
    I am not sure the title uses the correct terms; I am a developer and am very new to cubes. I have a cube with data for materials that are broken down into chemical compounds: for example, a rock material is 10% of this chemical, 10% of that chemical, and so on. Samples are taken daily, and sample is a dimension with a date, etc. So the measure needs to average over the sample dimension but sum across the chemical compound dimension (to add up to 100%, for example). Is this at all possible?
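    This should be expressible in the cube (typically as a calculated measure that sums over the compound attribute and averages over samples); the exact MDX depends on how the dimensions are modelled, so purely to pin down the arithmetic being asked for, here is a small C# sketch over made-up data rather than cube code:

        using System;
        using System.Linq;

        class MixedAggregationSketch
        {
            // Hypothetical fact rows: (sample id, chemical compound, percentage of material).
            record Fact(int SampleId, string Compound, double Percentage);

            static void Main()
            {
                var facts = new[]
                {
                    new Fact(1, "SiO2", 60), new Fact(1, "Fe2O3", 40),
                    new Fact(2, "SiO2", 55), new Fact(2, "Fe2O3", 45),
                };

                // Sum across the compound dimension within each sample
                // (each per-sample total should come to 100), then average
                // those totals across the sample dimension.
                double result = facts
                    .GroupBy(f => f.SampleId)
                    .Select(g => g.Sum(f => f.Percentage))
                    .Average();

                Console.WriteLine(result);   // 100
            }
        }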

    Read the article

  • MySQL Non-Indexed Query Analysis

    - by Markii
    I'm using the "log queries not using indexes" option, but it also logs queries that do use indexes and are just more complex or use IFs. Is there a parser or a program out there that can analyze the log and give me a literal output saying "table.column should be an index"? Thanks

    Read the article

  • Ignore designer and generated files in ReSharper analysis

    - by RaYell
    I've been using ReSharper for a few days and I really like the tool, but there's one thing that annoys me about it and I wonder if it can be changed. I'm getting lots of issue notifications from generated code (almost 1400 in my project). I want to set those files as ignored so they won't be checked, as you can do with StyleCop and Code Analysis. Unfortunately it looks like ReSharper ignores the Generated Code settings in its options, because I'm still getting the same notifications. I've tried setting a file mask (e.g. for *.resx) and adding files manually to the generated-code list, but it doesn't change anything. I don't know if it matters, but I'm using VS 2010.

    Read the article

  • Image analysis on the iPhone

    - by user362808
    Hi, guys! Hopefully a quick one. I'm working on a devious little algorithm that will automatically scan a user's iPhone photos for one of some emotional poignancy (e.g. go back in time a ways, look for one of a group of photos that are proximally timestamped, etc.). I need a quick-and-dirty way of picking out a photo containing two people in close proximity to each other. I know that the OpenCV Python library [for image processing] works on the iPhone; I'm just curious whether it's actually the best way of doing this (from scratch or otherwise). Cheers!

    Read the article

  • Sum and average over null values in SQL Server 2008 Analysis Services

    - by Jonathan
    I have a simple problem, I think, but I have googled and can't find the solution. I have a cube that has MeasureA, MeasureB and MeasureC. Not all three measures have values for each record; sometimes they can be null, depending on whether they were applicable. Now, for my totals I need to average, but the average must not take the nulls into account. Any help will be much appreciated. When I view the measures, the null values show as zeros.
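    The question hinges on whether a missing value participates in the denominator of the average, and the zeros shown in the browser are a hint that it currently may. This is not cube-side MDX, but a tiny C# sketch makes the two semantics concrete (LINQ's Average over a nullable sequence skips nulls, while coalescing them to zero drags the average down):

        using System;
        using System.Linq;

        class AverageOverNullsSketch
        {
            static void Main()
            {
                // MeasureA was not applicable for the second record.
                double?[] measureA = { 10.0, null, 20.0 };

                // Nulls excluded from the denominator: (10 + 20) / 2 = 15.
                double skippingNulls = measureA.Average() ?? 0;

                // Nulls treated as zeros: (10 + 0 + 20) / 3 = 10.
                double nullsAsZeros = measureA.Select(v => v ?? 0).Average();

                Console.WriteLine($"{skippingNulls} vs {nullsAsZeros}");   // 15 vs 10
            }
        }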

    Read the article

  • Performance analysis for a Java application

    - by user1827614
    I want to do performance measurement of my application and would like to be able to configure the stats per module (enable them for some modules and disable them for others). I want to measure things like memory usage, threads, average bandwidth, etc. Can anyone suggest something, please? I am new to this. I think VisualVM is good, but it does not support configuring things per module. Would Perf4j or Admin4j work here? Has anyone used these before?

    Read the article

  • C# string analysis

    - by Sergei
    I have a string, for example " :)text :)text:) :-) word :-( ". I need to append it to a textbox (or somewhere else), with one condition: instead of ':)', ':-(', etc., I need to call a function which inserts a specific symbol. I think a solution exists using a finite-state machine, but I don't know how to implement it. Waiting for advice.
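    A full finite-state machine is more than this needs: a single left-to-right scan that tries the known tokens at each position, longest first (so ":-)" is not read as ":" followed by "-)"), does the job. A minimal C# sketch, where the symbol table is a placeholder for whatever your symbol-inserting function produces:

        using System.Text;

        static class EmoticonScanner
        {
            // Token table, longest tokens first; the symbols are placeholders.
            static readonly (string Token, char Symbol)[] Tokens =
            {
                (":-)", '\u263A'),   // white smiling face
                (":-(", '\u2639'),   // white frowning face
                (":)",  '\u263A'),
                (":(",  '\u2639'),
            };

            public static string Replace(string input)
            {
                var sb = new StringBuilder();
                int i = 0;
                while (i < input.Length)
                {
                    bool matched = false;
                    foreach (var (token, symbol) in Tokens)
                    {
                        if (i + token.Length <= input.Length &&
                            string.CompareOrdinal(input, i, token, 0, token.Length) == 0)
                        {
                            sb.Append(symbol);   // or call your own function here instead
                            i += token.Length;
                            matched = true;
                            break;
                        }
                    }
                    if (!matched) sb.Append(input[i++]);
                }
                return sb.ToString();
            }
        }

    In WinForms, for example, textBox1.AppendText(EmoticonScanner.Replace(raw)) would then put the converted text into the textbox.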

    Read the article

  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, CRUD). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.
    Requirements: there are a couple of things we want to test, in the following priority. First, are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile? Second, code quality: do we have code flaws like duplicates, overly complex methods or other violations of a defined set of rules? Also, the tool must run headless (command line, Ant, ...) and we want to be able to analyze a partial code base (changed sources only).
    Tools: we did a little research and found the following tools that could potentially help.
    Cast Application Intelligence Platform (AIP): seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format.
    Toad for Oracle: the Professional version is said to include something called Xpert that validates a set of rules against a code base.
    Sonar + PL/SQL plugin: uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base.
    Semantic Designs DMSToolkit: quite general analysis of a source code base. Command line available?
    Semantic Designs Clones Detector: detects clones. But also via command line?
    Fortify Source Code Analyzer: seems to be focused on security issues. But maybe it is extensible?
    more...
    So far, Toad for Oracle together with Sonar seems to be an elegant solution, but maybe we are missing something here? Any ideas? Other products? Experiences?
    Related questions on SO:
    http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures
    http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql
    http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql

    Read the article

  • SSAS – Synchronisation performance

    - by ACALVETT
    I've always thought of SSAS synchronisation as a clever file-mirroring utility built into SSAS, and I had never considered the technology as bringing any performance gains to the table. So it's a good job I like to revisit areas... :) I decided to compare the performance of Robocopy and SSAS synchronisation between two Windows 2003 servers running SSAS 2008 SP1 CU7 with 1 Gb network links. For the Robocopy of the data directory I used the SQLCat Robocopy script. The results are shown below. SSAS Sync...(read more)

    Read the article

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from a book called Algorithms by Robert Sedgewick, and I came across a problem in MergeSort that I am, sad to say, having difficulty solving. The problem is below: Sublinear Extra Space. Develop a merge implementation that reduces the extra space requirement to max(M, N/M), based on the following idea: divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth. My problem with this is that, based on the idea Sedgewick recommends, the following set of blocks will not be sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following: divide the full array into subarrays of size M; run selection sort on each of the subarrays; merge each of the subarrays using the method Sedgewick recommends in (ii). (This is where I run into the problem of where to store the results after the merge.) This leads to wanting to increase the auxiliary space to hold at least two subarrays at a time (for merging), but based on the specifications of the problem, that is not allowed. I have also considered using the original array as space for one subarray and the auxiliary space for the second subarray, but I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done? NOTE: If this is supposed to be on StackOverflow.com, please let me know how I can move it. I posted it here because the question is academic.
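    For what it's worth, the counterexample looks right: a single left-to-right pass of pairwise merges does not finish the sort for that input, so either the pass is meant to be repeated or the exercise is only bounding the extra space of the merge step itself. What the max(M, N/M) budget does buy is an answer to "where do the merge results go": to merge two adjacent sorted blocks of size M in place, you only need to stash the left block in an M-sized buffer and then merge the buffer with the right block back into the original positions. A sketch of that single step in C# (not the book's code), assuming both blocks hold exactly M elements:

        using System;

        static class BlockMerge
        {
            // Merges two adjacent sorted blocks a[lo .. lo+m-1] and a[lo+m .. lo+2m-1]
            // in place, using an auxiliary buffer of only m elements.
            public static void MergeAdjacentBlocks(int[] a, int lo, int m, int[] aux)
            {
                Array.Copy(a, lo, aux, 0, m);       // stash the left block: the only extra space used

                int i = 0;                          // next item in the buffer (old left block)
                int j = lo + m;                     // next item in the right block
                int k = lo;                         // next slot to fill

                while (i < m && j < lo + 2 * m)
                    a[k++] = aux[i] <= a[j] ? aux[i++] : a[j++];

                while (i < m)                       // drain the buffer; whatever remains of the
                    a[k++] = aux[i++];              // right block is already in its final place
            }
        }

    The same m-sized buffer can be reused for every pairwise merge in the pass, so the merging stage never needs more than M extra slots.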

    Read the article

  • SSAS 2008 R2 – Little Gems

    - by ACALVETT
    I have spent the last few days working with SSAS 2008 R2 and noticed a few small enhancements which many people probably won't notice, but I will list them here and explain why they are important to me. New profiler events. Commit: this is a new subclass event for "Progress Report End". It represents the elapsed time taken for the server to commit your data. It is important because, for the duration of this event, a server-level lock will be in place, blocking all incoming connections and causing time out...(read more)

    Read the article

  • Which is Better? The Start Screen in Windows 8 or the Old Start Menu? [Analysis]

    - by Asian Angel
    There has been quite a bit of controversy surrounding Microsoft's emphasis on the new Metro UI Start Screen in Windows 8, but when it comes down to it, which is really better: the Start Screen in Windows 8 or the old Start Menu? Tech blog 7 Tutorials has done a quick analysis to see which one actually works better (and faster) when launching applications and doing searches. Images courtesy of 7 Tutorials. You can view the results and a comparison table by visiting the blog post linked below. Windows 8 Analysis: Is the Start Screen an Improvement vs. the Start Menu? [7 Tutorials]

    Read the article

  • SSAS Multithreaded sync with Windows 2008 R2

    - by ACALVETT
    We have been happily running some of our systems on Windows 2003, and an upgrade to W2K8 R2 had been on the list for quite some time. The upgrade has now completed and we can start taking advantage of some of the new features, which is the reason for this post. For a long time we have used the sample Robocopy script from the SQLCat team to synchronize some of our larger SSAS databases. If you're wondering what I mean by large: around 5 TB with a good few thousand partitions. The script works like a dream...(read more)

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in DAX #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you have started using DAX as a query language, you might have encountered some performance issues with SUMMARIZE. The problem is related to the calculations you put in the SUMMARIZE by adding what are called extension columns, which compute their value within a filter context defined by the rows considered in the group that SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from the SUMMARIZE and adding them with an ADDCOLUMNS function. In practice, instead of writing
        SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> )
    you can write:
        ADDCOLUMNS(
            SUMMARIZE( <table>, <group_by_column> ),
            <column_name>, CALCULATE( <expression> )
        )
    The performance difference might be huge (orders of magnitude), but this optimization might produce different semantics, and in those cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as an existing measure? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!

    Read the article

  • Is there an open source code analyzer for Java that I can adapt my software metrics algorithm to?

    - by daneshkohan
    I am doing my master's dissertation and I have defined a set of software metrics. I need to implement my metrics on top of an open source tool. I have found PMD and Checkstyle on sourceforge.net, but there is no adequate explanation of their code, and I couldn't find how to get at their source code to customize it. I would appreciate it if you could suggest an open source tool for Java whose code I can customize.

    Read the article

  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    Comparing sales and budget, or costs and budget, is a very common operation. However, it is often the case that the budget table and the table you compare it with have different granularities. There are two ways to handle that: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it is not defined. For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to do the manual job of allocating the data (usually in an Excel worksheet!). If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns. If you come from an MDX/OLAP background, at first you might find it hard to work without the attribute hierarchies that help you propagate budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to "debug", even for more complex allocation formulas. You just have to be careful when writing the DAX formula, but the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table! This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.

    Read the article

  • SSAS Native vs .NET Provider

    - by ACALVETT
    Recently I was investigating why a new server, which is in its parallel-running phase, was taking significantly longer to process the daily data than the server it is due to replace. The server has SQL Server and SSAS installed, so the problem was not likely to be in the network transfer, as it is using shared memory. As I dug around the SQL DMVs, I noticed in sys.dm_exec_connections that the SSAS connection had a packet size of 8000 bytes instead of the usual 4096 bytes, and from there I found that the datasource...(read more)

    Read the article

  • Help with algorithmic complexity in custom merge sort implementation

    - by bitcycle
    I've got an implementation of merge sort in C++ using a custom doubly linked list. I'm coming up with a big-O complexity of n^2, based on the merge_sort() slice operation. But from what I've read, this algorithm should be n*log(n), where the log has base two. Can someone help me determine whether I'm just working out the complexity incorrectly, or whether the implementation can/should be improved to achieve n*log(n) complexity? If you would like some background on my goals for this project, see my blog. I've added comments in the code outlining what I understand the complexity of each method to be. Clarification: I'm focusing on the C++ implementation with this question. I've got another implementation written in Python, but that was something added in addition to my original goal(s).
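    Without seeing the code it is hard to say where the n^2 crept in, but the usual place to check is the split and merge at each level: as long as slicing the list is O(n) per call (one walk with slow and fast pointers) and merging is O(n), the recurrence is T(n) = 2T(n/2) + O(n), which solves to O(n log n); the linear work per level does not square anything because there are only about log2(n) levels. For comparison, here is a minimal singly linked merge sort sketch in C# (not the original C++ list, just the shape of the algorithm):

        class Node
        {
            public int Value;
            public Node Next;
            public Node(int v) { Value = v; }
        }

        static class LinkedMergeSort
        {
            public static Node Sort(Node head)
            {
                if (head == null || head.Next == null) return head;

                // Split: one O(n) walk; the fast pointer moves two steps per slow step.
                Node slow = head, fast = head.Next;
                while (fast != null && fast.Next != null)
                {
                    slow = slow.Next;
                    fast = fast.Next.Next;
                }
                Node right = slow.Next;
                slow.Next = null;                      // detach the two halves

                return Merge(Sort(head), Sort(right)); // O(n) merge per level
            }

            static Node Merge(Node a, Node b)
            {
                var dummy = new Node(0);
                Node tail = dummy;
                while (a != null && b != null)
                {
                    if (a.Value <= b.Value) { tail.Next = a; a = a.Next; }
                    else { tail.Next = b; b = b.Next; }
                    tail = tail.Next;
                }
                tail.Next = a ?? b;
                return dummy.Next;
            }
        }

    A common way a linked-list merge sort degrades to n^2 is appending each merged node by walking the output list from its head; keeping a tail pointer, as in the sketch, keeps the merge linear.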

    Read the article

  • What skills should a developer/tester learn in order to move into a permanent Systems Analysis role?

    - by shenaz
    I have been with a software services firm for 5 years and have fallen into a "jack of all trades" role, which I am looking to move out of. I've spent about 1 year each in programming (VB/VB.NET), application support, systems analysis, and most recently, software testing, which in my current position is all manual. I've really lost interest in the programming and testing roles; I would prefer a position where I get to work more with people, such as systems analysis. I even got a chance to be a trainer at the same company for a few months, a temporary position which I enjoyed very much. Given that most of my real experience is with software, support, and testing, what knowledge areas and skills should I focus on learning and mastering in order to make myself an attractive candidate for a permanent position as a business/systems analyst?

    Read the article

  • New licensing for SQL Server 2012 and #BISM #Tabular usage

    - by Marco Russo (SQLBI)
    Last week Microsoft announced a new licensing scheme for SQL Server 2012. If you are interested in an extensive discussion of the new licensing scheme, Denny Cherry wrote a great blog post about it. I'd like to comment on the new BI Edition license. Teo Lachev has already commented on the numbers, and I agree with him. I generally like the new licensing model of SQL Server 2012. It maintains a very low entry barrier for SSRS/SSAS/SSIS (Standard Edition). It has a reasonable licensing scheme for 20-50...(read more)

    Read the article

  • Strategy to find bottleneck in a network

    - by Simone
    Our enterprise is having some problems when the number of incoming requests goes beyond a certain amount. To make things simpler: we have N websites that use, among other things, a local web service. This service is hosted by IIS and is a .NET 4.0 (C#) application executed in a farm. It is REST-oriented, built around OpenRasta. As already mentioned, by stress testing it with JMeter we've found that beyond a certain number of requests the service's performance drops. This service is, in turn, itself a client of 3 other distinct web services and also a client of a DB server, so it's not very clear what the real culprit of this abrupt decay is. Those 3 other web services are installed in our farm too, and are clients of other DB servers (and, possibly, of services that are out of my team's control). What strategy do you suggest to try to locate where the bottleneck(s) are? Do you have any high-level suggestions?

    Read the article
