Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.


  • Windows Azure Table Storage LINQ Operators

    - by Ryan Elkins
    Currently Table Storage supports From, Where, Take, and First. Are there plans to support any of the other 29 operators? If we have to code for these ourselves, how much of a performance difference are we looking at compared to doing something similar via SQL on SQL Server? Do you see it being somewhat comparable, or will it be far, far slower if I need to do a Count, Sum, or Group By over a gigantic dataset? I like the Azure platform and the idea of cloud-based storage. I like Windows Azure for the amount of data it can store and the schema-less nature of table storage. SQL Azure just won't work due to its high cost relative to storage space.

    Read the article

  • Must .aspx files have a page directive?

    - by Keith Bloom
    Around 90% of the pages for our websites have no .Net code embedded in them, yet they are published as .aspx files. I want these to render as fast as possible, so I'm removing as much as I can. Does the .Net page directive have an impact on performance? I am thinking about two factors: the page speed for each GET, and what happens when the file changes. The CMS re-creates each page daily, and I'm wondering if this triggers the ASP.Net compilation process.

    Read the article

  • Python threads all executing on a single core

    - by Rob Lourens
    I have a Python program that spawns many threads, runs 4 at a time, and each performs an expensive operation. Pseudocode: for object in list: t = Thread(target=process, args=(object,)) # if fewer than 4 threads are currently running, t.start(). Otherwise, add t to queue But when the program is run, Activity Monitor in OS X shows that 1 of the 4 logical cores is at 100% and the others are at nearly 0. Obviously I can't force the OS to do anything, but I've never had to pay attention to performance in multi-threaded code like this before, so I was wondering if I'm just missing or misunderstanding something. Thanks.
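    This is the classic symptom of CPython's global interpreter lock (GIL): only one thread executes Python bytecode at a time, so CPU-bound threads serialize onto a single core. Below is a minimal sketch of the usual workaround, swapping threads for processes with the standard multiprocessing module (process and the input list are hypothetical stand-ins for the real work):

        from multiprocessing import Pool

        def process(obj):
            # Hypothetical stand-in for the expensive operation.
            return obj * obj

        if __name__ == '__main__':
            objects = list(range(100))
            # 4 worker processes, mirroring the 4 concurrent threads.
            with Pool(processes=4) as pool:
                results = pool.map(process, objects)

    If the expensive operation spends its time in C extensions or I/O that release the GIL, threads can still use all cores; otherwise processes are the practical route.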

    Read the article

  • MySQL: Storage of multiple text fields for a record

    - by Tom
    An inexperienced question: I need to store about 10 unknown-length text fields per record in a MySQL table. I expect no more than 50K rows in total for this table, but speed is important. The database actions will be solely SELECTs for all practical purposes. I'm using InnoDB. In other words: id | text1 | text2 | text3 | .... | text10 As I understand it, MySQL will store the text elsewhere and keep its own pointers in the table itself, so I'm wondering whether there are any fundamental performance implications I should be worrying about given the way the data is stored (i.e. several "sub-fetches" from the table). Thank you.

    Read the article

  • Best data-structure to use for two ended sorted list

    - by fmark
    I need a collection data-structure that can do the following: Be sorted Allow me to quickly pop values off the front and back of the list Remain sorted after I insert a new value Allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value Thread-safety is not required Optionally allow efficient has_key() lookups (I'm happy to maintain a separate hash-table for this though) My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate less than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Are there any existing implementations of this in Python? I would really like to avoid writing this code myself.
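    One stdlib sketch of a structure meeting these requirements: keep a list sorted with the bisect module on precomputed keys and pop from either end. pop(0) is O(n), which may still be acceptable at ~200,000 items; the third-party sortedcontainers package provides a SortedKeyList with the same operations and better scaling. The class name below is illustrative.

        import bisect

        class TwoEndedSortedList:
            """Stays sorted under insert; pops from both ends; custom key."""

            def __init__(self, key=lambda item: item):
                self._key = key
                self._keys = []    # sorted keys, kept parallel to _items
                self._items = []

            def insert(self, item):
                k = self._key(item)
                i = bisect.bisect_right(self._keys, k)   # insertion point
                self._keys.insert(i, k)
                self._items.insert(i, item)

            def pop_front(self):
                self._keys.pop(0)                        # O(n) shift
                return self._items.pop(0)

            def pop_back(self):
                self._keys.pop()                         # O(1)
                return self._items.pop()

    For tuples sorted on, say, the third field: TwoEndedSortedList(key=lambda t: t[2]).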

    Read the article

  • Oracle - timed sampling from v$session_longops

    - by FrustratedWithFormsDesigner
    I am trying to track performance on some procedures that run too slow (and seem to keep getting slower). I am using v$session_longops to track how much work has been done, and I have a query (sofar/((v$session_longops.LAST_UPDATE_TIME-v$session_longops.start_time)*24*60*60)) that tells me the rate at which work is being done. What I'd like to be able to do is capture the rate at which work is being done and how it changes over time. Right now, I just re-execute the query manually, and then copy/paste to Excel. Not very optimal, especially when the phone rings or something else happens to interrupt my sampling frequency. Is there a way to have a script in SQL*Plus run a query every n seconds, spool the results to a file, and then continue doing this until the job ends? (Oracle 10g)
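    If an external client is acceptable, the sampling loop can also live outside SQL*Plus. Here is a minimal Python sketch, assuming the cx_Oracle driver; the connection string, 5-second interval, and output file name are all placeholders:

        import csv
        import time

        import cx_Oracle  # assumption: the cx_Oracle driver is installed

        conn = cx_Oracle.connect("user/password@host/service")  # placeholder
        cur = conn.cursor()

        RATE_SQL = """
            SELECT sid, sofar, totalwork,
                   sofar / ((last_update_time - start_time) * 24 * 60 * 60)
            FROM v$session_longops
            WHERE time_remaining > 0 AND last_update_time > start_time
        """

        with open("longops_samples.csv", "w", newline="") as f:
            out = csv.writer(f)
            out.writerow(["ts", "sid", "sofar", "totalwork", "rate"])
            while True:
                cur.execute(RATE_SQL)
                rows = cur.fetchall()
                if not rows:           # nothing in flight: the job ended
                    break
                for row in rows:
                    out.writerow([time.time()] + list(row))
                f.flush()              # keep the file current between samples
                time.sleep(5)          # sampling interval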

    Read the article

  • Strange profiling results: definitely non-bottleneck method pops up

    - by jkff
    I'm profiling a program with the sampling profilers in YourKit and JProfiler, and also "manually" (I launch it and press Ctrl-Break several times to get thread dumps). All three methods give me extremely strange results: tens of percent of the time are spent in a 3-line method that does no allocation or synchronization and has no loops, etc. Moreover, after I made this method into a NOP and even removed its invocation completely, the observable program performance didn't change at all (although it got a negligible memory leak, since it was a method for freeing a cheap resource). I'm thinking that this might be because of the constraints the JVM puts on the moments at which a thread's stacktrace may be taken, and it somehow turns out that in my program those are exactly the moments at which this method is invoked, although there is absolutely nothing special about it or the context in which it is invoked. What can be the explanation for this phenomenon? What are the aforementioned constraints? What further measurements can I take to clarify the situation?

    Read the article

  • What to monitor on MSSQL Server

    - by user361434
    Hi all, I have been asked to monitor MSSQL Server (2005 & 2008) and am wondering what good metrics to look at are. I can access WMI counters but am slightly lost as to how much depth is going to be useful. Currently I have on my list: user connections, logins per second, latch waits per second, total latch wait time, deadlocks per second, errors per second, and log and data file sizes. I am looking to monitor values that will indicate a degradation of performance on the machine or a potential serious issue. To this end I am also wondering at what values some of these things would be considered normal vs. problematic. As I reckon it would probably be a really good question to have answered for the general community, I thought I'd court some of you DBA experts out there (I am certainly not one of them!). Apologies if this is a rather open-ended question. Ry

    Read the article

  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    On Ruby on Rails, say, if the Actor model object is Tom Hanks, and the "has_many" fans is 20,000 Fan objects, then actor.fans gives an Array with 20,000 elements. Presumably, the elements are not pre-populated with values? Otherwise, getting each Actor object from the DB would be extremely time consuming. So is it on a "need to know" basis? Does it pull data when I access actor.fans[500], and pull again when I access actor.fans[0]? If it jumps from record to record, it won't be able to optimize performance by doing a sequential read, which can be faster on the hard disk because those records could be in nearby sectors. For example, if the program touches 2 random elements, it will be faster to read just those 2 records; but if it touches all elements in random order, it may be faster to read all records sequentially and then process the random elements from memory. How will RoR know whether I am touching only a few random elements or all elements in random order?

    Read the article

  • Avoiding string copying in Lua

    - by Matt Sheppard
    Say I have a C program which wants to call a very simple Lua function with two strings (let's say two comma-separated lists, returning true if the lists intersect at all, false if not). The obvious way to do this is to push them onto the stack with lua_pushstring, which works fine; however, from the docs it looks like lua_pushstring makes a copy of the string for Lua to work with. That means that crossing over to the Lua function is going to require two string copies, which I could avoid by rewriting the Lua function in C. Is there any way to arrange things so that the existing C strings can be reused on the Lua side for the sake of performance (or would the strcpy cost pale into insignificance anyway)? From my investigation so far (my first couple of hours looking seriously at Lua), light userdata seems like the sort of thing I want, but in the form of a string.

    Read the article

  • Avoid IF statement after condition has been met

    - by greye
    I have a division operation inside a loop that repeats many times. It so happens that in the first few passes through the loop (more or less the first 10 iterations) the divisor is zero. Once it gains a value, a divide-by-zero error is no longer possible. I have an if condition that tests the divisor in order to avoid the divide by zero, but I am wondering what performance impact evaluating this if has on every subsequent iteration, especially since I know it's of no use anymore. How should this be coded in Python?
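    A common idiom is to split the loop: consume the iterator until the divisor becomes non-zero, then continue without the test. A minimal sketch (values and divisor_for are hypothetical stand-ins for the real loop data):

        def divide_all(values, divisor_for):
            it = iter(values)
            for v in it:                      # phase 1: divisor may be zero
                d = divisor_for(v)
                if d != 0:
                    yield v / d
                    break                     # divisor is live from here on
                # zero-divisor passes are skipped here; substitute whatever
                # the real code does for them
            for v in it:                      # phase 2: no check needed
                yield v / divisor_for(v)

    The other standard answer is try/except ZeroDivisionError around the division: in CPython a try block costs essentially nothing when no exception is raised, so after the first few iterations it is free.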

    Read the article

  • COMET chat application - IIS7 slows down over time

    - by Yaron
    I have built a chat application which uses this code to push messages to clients (web pages) and to monitor online users and their information. Basically, the code creates and manages a custom thread pool for maintaining the list of connected users and their state. The application was hosted on a shared hosting account (IIS6) and worked fine. After moving the site (an ASP.Net app) to a dedicated virtual server, it seems I have a problem where IIS7 gets slower and slower as time passes, and my only "solution" is to restart IIS. I am trying to look at the performance counters but have no idea which ones to look at.

    Read the article

  • How to calculate an operation's time with microsecond precision

    - by Sanjeet Daga
    I want to measure the performance of a function with microsecond precision on Windows. Windows itself only has millisecond granularity, so how can I achieve this? I tried the following sample, but I am not getting correct results.

        LARGE_INTEGER ticksPerSecond = {0};
        LARGE_INTEGER tick_1 = {0};
        LARGE_INTEGER tick_2 = {0};
        double uSec = 1000000;

        // Get the frequency
        QueryPerformanceFrequency(&ticksPerSecond);

        // Calculate ticks per microsecond
        double uFreq = ticksPerSecond.QuadPart / uSec;

        // Get counter before the operation starts
        QueryPerformanceCounter(&tick_1);

        // The operation itself
        Sleep(10);

        // Get counter after the operation finished
        QueryPerformanceCounter(&tick_2);

        // And now the operation time in uSec
        double diff = (tick_2.QuadPart / uFreq) - (tick_1.QuadPart / uFreq);

    Read the article

  • Will PHP Die In Web Page Development World?

    - by Morgan Cheng
    I know that PHP is still the most popular web programming language in the world. This question just raises some of my concerns about PHP. PHP is naturally bound to the C10K problem: since PHP (generally run under Apache) cannot be event-driven or asynchronous, each HTTP request occupies at least one thread or process. This makes it harder to scale. Currently, a lot of web sites (like Facebook) with high performance and scalability still depend on PHP in their front-end servers; I suppose that is for legacy reasons. Is it possible that PHP will be replaced by a language more suitable for C10K?

    Read the article

  • Is str.replace(..).replace(..) ad nauseam a standard idiom in Python?

    - by meeselet
    For instance, say I wanted a function to escape a string for use in HTML (as in Django's escape filter):

        def escape(string):
            """
            Returns the given string with ampersands, quotes and angle
            brackets encoded.
            """
            return string.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;').replace("'", '&#39;').replace('"', '&quot;')

    This works, but it gets ugly quickly and appears to have poor algorithmic performance (in this example, the string is repeatedly traversed 5 times). What would be better is something like this:

        def escape(string):
            """
            Returns the given string with ampersands, quotes and angle
            brackets encoded.
            """
            # Note that ampersands must be escaped first; the rest can be
            # escaped in any order.
            return replace_multi(string.replace('&', '&amp;'),
                                 {'<': '&lt;', '>': '&gt;',
                                  "'": '&#39;', '"': '&quot;'})

    Does such a function exist, or is the standard Python idiom to use what I wrote before?
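    In Python 3 (which postdates this question; translate worked differently in Python 2), str.translate with a table from str.maketrans does this in a single pass, so no replace_multi is needed and no ordering constraint applies; the stdlib also ships html.escape for this exact case. A sketch:

        # Single-pass escaping via str.translate (Python 3 semantics).
        _ESCAPES = str.maketrans({
            '&': '&amp;',
            '<': '&lt;',
            '>': '&gt;',
            "'": '&#39;',
            '"': '&quot;',
        })

        def escape(string):
            return string.translate(_ESCAPES)

    Because every character is looked up in the table during one traversal, the '&' inside a replacement string is never re-escaped.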

    Read the article

  • Reading from a file not line-by-line

    - by MadH
    Assigning a QTextStream to a QFile and reading it line-by-line is easy and works fine, but I wonder if performance can be increased by first storing the file in memory and then processing it line-by-line. Using FileMon from Sysinternals, I've observed that the file is read in chunks of 16KB, and since the files I have to process are not that big (~2MB, but many!), loading them into memory would be a nice thing to try. Any ideas on how I can do so? QFile inherits from QIODevice, which lets me readAll() it into a QByteArray, but how do I then proceed and divide it into lines?

    Read the article

  • Should I use integer primary IDs?

    - by arthurprs
    For example, I always generate an auto-increment field for the users table, but I also specify a UNIQUE index on their usernames. There are situations where I first need to get the userId for a given username and then execute the desired query, or else use a JOIN in the desired query. It's 2 trips to the database or a JOIN vs. a varchar index. Should I use integer primary IDs? Is there a real performance benefit of INT over small VARCHAR indexes?

    Read the article

  • Under what circumstances does Groovy use AbstractConcurrentMap?

    - by Electrons_Ahoy
    (Specifically, org.codehaus.groovy.util.AbstractConcurrentMap) While profiling our mixed Java/Groovy application, I'm seeing a lot of references to the AbstractConcurrentMap class, none of which are explicit in the code base. Does Groovy use this class when maps are instantiated in the dynamic def myMap = [:] style? Are there rules somewhere about when Groovy chooses to use this as opposed to, say, java.util.HashMap? And does anyone have any performance information comparing the two? My rough "eyeball check" says that AbstractConcurrentMap seems to be much slower - anyone know if I'm right?

    Read the article

  • C++ Asymptotic Profiling

    - by Travis
    I have a performance issue where I suspect one standard C library function is taking too long and causing my entire system (a suite of processes) to basically "hiccup". Sure enough, if I comment out the library function call, the hiccup goes away. This prompted me to investigate what standard methods there are to prove this type of thing. What would be the best practice for testing a function to see if it causes an entire system to hang for a second (causing other processes to be momentarily starved)? I would at least like to definitively correlate the function being called with the visible freeze. Thanks

    Read the article

  • Simple Self Join Query Bad Performance

    - by user1514042
    Could anyone advise on how to improve the performance of the following query? Note: the problem seems to be caused by the WHERE clause. The table contains a huge set of rows (500K+); the set of parameters the query is called with assumes the return of 2-5K records per query, which currently takes 8-10 minutes. The table:

        USE [SomeDb]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Data](
            [x] [money] NOT NULL,
            [y] [money] NOT NULL,
            CONSTRAINT [PK_Data] PRIMARY KEY CLUSTERED ( [x] ASC )
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

    The query:

        select top 10000
            s.x as sx,
            e.x as ex,
            s.y as sy,
            e.y as ey,
            e.y - s.y as y_delta,
            e.x - s.x as x_delta
        from Data s
        inner join Data e
            on e.x > s.x and e.x - s.x between xFrom and xTo
        --where e.y - s.y > @yDelta -- when uncommented causes a huge delay

    Update 1 - Execution Plan

    <?xml version="1.0" encoding="utf-16"?> <ShowPlanXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Version="1.2" Build="11.0.2100.60" xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"> <BatchSequence> <Batch> <Statements> <StmtSimple StatementCompId="1" StatementEstRows="100" StatementId="1" StatementOptmLevel="FULL" StatementOptmEarlyAbortReason="GoodEnoughPlanFound" StatementSubTreeCost="0.0263655" StatementText="select top 100&#xD;&#xA;s.x as sx,&#xD;&#xA;e.x as ex,&#xD;&#xA;s.y as sy,&#xD;&#xA;e.y as ey,&#xD;&#xA;e.y - s.y as y_delta,&#xD;&#xA;e.x - s.x as x_delta&#xD;&#xA;from Data s &#xD;&#xA; inner join Data e&#xD;&#xA; on e.x &gt; s.x and e.x - s.x between 100 and 105&#xD;&#xA;where e.y - s.y &gt; 0.01&#xD;&#xA;" StatementType="SELECT" QueryHash="0xAAAC02AC2D78CB56" QueryPlanHash="0x747994153CB2D637" RetrievedFromCache="true"> <StatementSetOptions ANSI_NULLS="true" ANSI_PADDING="true" ANSI_WARNINGS="true" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="true" NUMERIC_ROUNDABORT="false" QUOTED_IDENTIFIER="true" /> <QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="NoParallelPlansInDesktopOrExpressEdition" CachedPlanSize="24" CompileTime="13" CompileCPU="13" CompileMemory="424"> <MemoryGrantInfo SerialRequiredMemory="0" SerialDesiredMemory="0" /> <OptimizerHardwareDependentProperties EstimatedAvailableMemoryGrant="52199" EstimatedPagesCached="14561" EstimatedAvailableDegreeOfParallelism="4" /> <RelOp AvgRowSize="55" EstimateCPU="1E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Compute Scalar" NodeId="0" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="0.0263655"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> <ColumnReference Column="Expr1004" /> <ColumnReference Column="Expr1005" /> </OutputList> <ComputeScalar> <DefinedValues> <DefinedValue> <ColumnReference Column="Expr1004" /> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[y] as [e].[y]-[SomeDb].[dbo].[Data].[y] as [s].[y]"> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </Identifier> </ScalarOperator> 
<ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> </DefinedValue> <DefinedValue> <ColumnReference Column="Expr1005" /> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x]"> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> </DefinedValue> </DefinedValues> <RelOp AvgRowSize="39" EstimateCPU="1E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Top" NodeId="1" Parallel="false" PhysicalOp="Top" EstimatedTotalSubtreeCost="0.0263555"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="1" ActualExecutions="1" /> </RunTimeInformation> <Top RowCount="false" IsPercent="false" WithTies="false"> <TopExpression> <ScalarOperator ScalarString="(100)"> <Const ConstValue="(100)" /> </ScalarOperator> </TopExpression> <RelOp AvgRowSize="39" EstimateCPU="151828" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Inner Join" NodeId="2" Parallel="false" PhysicalOp="Nested Loops" EstimatedTotalSubtreeCost="0.0263455"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="0" ActualExecutions="1" /> </RunTimeInformation> <NestedLoops Optimized="false"> <OuterReferences> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OuterReferences> <RelOp AvgRowSize="23" EstimateCPU="1.80448" EstimateIO="3.76461" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="1" LogicalOp="Clustered Index Scan" NodeId="3" Parallel="false" PhysicalOp="Clustered Index Scan" EstimatedTotalSubtreeCost="0.0032831" TableCardinality="1640290"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="15225" ActualEndOfScans="0" ActualExecutions="1" /> </RunTimeInformation> <IndexScan Ordered="false" ForcedIndex="false" ForceScan="false" NoExpandHint="false"> 
<DefinedValues> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </DefinedValue> </DefinedValues> <Object Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Index="[PK_Data]" Alias="[e]" IndexKind="Clustered" /> </IndexScan> </RelOp> <RelOp AvgRowSize="23" EstimateCPU="0.902317" EstimateIO="1.88387" EstimateRebinds="1" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Clustered Index Seek" NodeId="4" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0263655" TableCardinality="1640290"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="15224" ActualExecutions="15225" /> </RunTimeInformation> <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false" Storage="RowStore"> <DefinedValues> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </DefinedValue> </DefinedValues> <Object Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Index="[PK_Data]" Alias="[s]" IndexKind="Clustered" /> <SeekPredicates> <SeekPredicateNew> <SeekKeys> <EndRange ScanType="LT"> <RangeColumns> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[x] as [e].[x]"> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> </RangeExpressions> </EndRange> </SeekKeys> </SeekPredicateNew> </SeekPredicates> <Predicate> <ScalarOperator ScalarString="([SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x])&gt;=($100.0000) AND ([SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x])&lt;=($105.0000) AND ([SomeDb].[dbo].[Data].[y] as [e].[y]-[SomeDb].[dbo].[Data].[y] as [s].[y])&gt;(0.01)"> <Logical Operation="AND"> <ScalarOperator> <Compare CompareOp="GE"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="($100.0000)" /> </ScalarOperator> </Compare> </ScalarOperator> <ScalarOperator> <Compare CompareOp="LE"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="($105.0000)" /> </ScalarOperator> </Compare> </ScalarOperator> 
<ScalarOperator> <Compare CompareOp="GT"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="(0.01)" /> </ScalarOperator> </Compare> </ScalarOperator> </Logical> </ScalarOperator> </Predicate> </IndexScan> </RelOp> </NestedLoops> </RelOp> </Top> </RelOp> </ComputeScalar> </RelOp> </QueryPlan> </StmtSimple> </Statements> </Batch> </BatchSequence> </ShowPlanXML>

    Read the article

  • StringBuilder/StringBuffer vs. "+" Operator

    - by matt.seil
    I'm reading "Better, Faster, Lighter Java" (by Bruce Tate and Justin Gehtland) and am familiar with the readability requirements in agile type teams, such as what Robert Martin discusses in his clean coding books. On the team I'm on now, I've been told explicitly not to use the "+" operator because it creates extra (and unnecessary) string objects during runtime. But this article: http://www.ibm.com/developerworks/java/library/j-jtp01274.html Written back in '04 talks about how object allocation is about 10 machine instructions. (essentially free) It also talks about how the GC also helps to reduce costs in this environment. What is the actual performance tradeoffs between using "+," "StringBuilder," or "StringBuffer?" (In my case it is StringBuffer only as we are limited to Java 1.4.2.) StringBuffer to me results in ugly, less readable code, as a couple of examples in Tate's book demonstrates. And StringBuffer is thread-synchronized which seems to have its own costs that outweigh the "danger" in using the "+" operator. Thoughts/Opinions?

    Read the article

  • Pros and cons of sorting data in DB?

    - by Roman
    Let's assume I have a table with a field of type VARCHAR, and I need to get data from that table sorted alphabetically by that field. What is the best way (for performance): adding an ORDER BY on that field to the SQL query, or sorting the data after it's already fetched? I'm using Java (with Hibernate), but I can't say anything about the DB engine. It could be any popular relational database (like MySQL, MS SQL Server, Oracle, HSQLDB, or any other). The amount of records in the table can vary greatly, but let's assume there are 5k records.

    Read the article

  • Managing StringBuilder Resources in C#

    - by Jim Fell
    Hello. My C# (.NET 2.0) application has a StringBuilder variable with a capacity of 2.5MB. Obviously, I do not want to copy such a large buffer to a larger buffer space every time it fills. By that point, there is so much data in the buffer anyways, removing the older data is a viable option. Can anyone see any obvious problems with how I'm doing this (i.e. am I introducing more performance problems than I'm solving), or does it look okay?

        tText_c = new StringBuilder(2500000, 2500000);

        private void AppendToText(string text)
        {
            if (tText_c.Length * 100 / tText_c.Capacity > 95)
            {
                tText_c.Remove(0, tText_c.Length / 2);
            }
            tText_c.Append(text);
        }

    Thanks.

    Read the article

  • Is this a valid benefit of using embedded SQL over stored procedures?

    - by George
    Here's an argument for SPs that I haven't heard. Flamers, be gentle with the down tick. Since there is overhead associated with each trip to the database server, I would suggest that a POSSIBLE reason for placing your SQL in SPs over embedded code is that you are more insulated from change without taking a performance hit. For example, let's say you need to perform Query A, which returns a scalar integer. Then, later, the requirements change and you decide that if the result of the scalar is X, then, and only then, you need to perform another query. If you performed the first query in a SP, you could easily check the result of the first query and conditionally execute the 2nd SQL in the same SP. How would you do this efficiently in embedded SQL without performing a separate query or an unnecessary query? Here's an example:

        -- This SP may return one or two result sets.
        SELECT @CustCount = COUNT(*) FROM CUSTOMER
        IF @CustCount > 10
            SELECT * FROM PRODUCT

    Can this be done, and what is the best way to do it, in embedded SQL?

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query:

        SELECT *
        FROM SAMPLE SAMPLE
        INNER JOIN TEST TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables hold about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records; it executes in 300ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by the index), then everything becomes fast in the first case too. This points to HDD I/O issues, but it doesn't explain the change of plan. So, any ideas?

    Read the article
