Search Results

Search found 754 results on 31 pages for 'aggregate'.

Page 13/31 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • file read performance degrades as number of files increases

    - by bfallik-bamboom
    We're observing poor file read IO results that we'd like to better understand. We can use fio to write 100 files with a sustained aggregate throughput of ~700MB/s. When we switch the test to read instead of write, the aggregate throughput is only ~55MB/s. The drop seems related to the number of files since the throughput for read and write are comparable for a single file then diverge proportionally as we increase the number of files. The test server has 24 CPU cores, 48GB of memory, and is running CentOS 6.0. The disk hardware is a RAID 6 array with 12 disks and a Dell H800 controller. This device is partitioned with ext4 using the default settings. Increasing the readahead (using blockdev) improves the read throughput significantly but it still doesn't match write speed. For instance, increasing the readahead from 128KB to 1M improved the read throughput to ~145MB/s. Is this a known performance issue in our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to further isolate the issue? Thanks.
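
    For reference, the readahead change mentioned above is applied per block device; a minimal sketch of the commands (the device path is a placeholder for the H800 volume, and the fio parameters are only illustrative):
        # current readahead, in 512-byte sectors (256 sectors = 128KB)
        blockdev --getra /dev/sdX
        # raise it to 1MB (2048 sectors)
        blockdev --setra 2048 /dev/sdX
        # re-run the read test, e.g. with fio
        fio --name=readtest --directory=/mnt/raid --rw=read --bs=1M --size=1G \
            --numjobs=100 --group_reporting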

    Read the article

  • building median for each string in IList<IDictionary<string, double>>

    - by Oliver
    Actually I have a little brain bender and just can't find the right direction to get it to work. Given is an IList<IDictionary<string, double>> and it is filled as follows: Name|Value ----+----- "x" | 3.8 "y" | 4.2 "z" | 1.5 ----+----- "x" | 7.2 "y" | 2.9 "z" | 1.3 ----+----- ... | ... To fill this up with some random data I used the following methods: var list = CreateRandomPoints(new string[] { "x", "y", "z" }, 20); This works as follows: private IList<IDictionary<string, double>> CreateRandomPoints(string[] variableNames, int count) { var list = new List<IDictionary<string, double>>(count); list.AddRange(CreateRandomPoints(variableNames).Take(count)); return list; } private IEnumerable<IDictionary<string, double>> CreateRandomPoints(string[] variableNames) { while (true) yield return CreateRandomLine(variableNames); } private IDictionary<string, double> CreateRandomLine(string[] variableNames) { var dict = new Dictionary<string, double>(variableNames.Length); foreach (var variable in variableNames) { dict.Add(variable, _Rand.NextDouble() * 10); } return dict; } Also I can say that it is already ensured that every Dictionary within the list contains the same keys (but from list to list the names and count of the keys can change). So that's what I have. Now to the things I need: I'd like to get the median (or any other math aggregate operation) of each key within all the dictionaries, so that the function to call would look something like: IDictionary<string, double> GetMedianOfRows(this IList<IDictionary<string, double>> list) The best would be to pass some kind of aggregate operation as a parameter to the function to make it more generic (I don't know if the func has the correct parameters, but it should illustrate what I'd like to do): private IDictionary<string, double> Aggregate(this IList<IDictionary<string, double>> list, Func<IEnumerable<double>, double> aggregator) Also, my actual biggest problem is to do the job with a single iteration over the list, because if the list holds 20 variables with 1000 values I don't want to iterate 20 times over the list. Instead I would go over the list once and compute all twenty variables at once (the biggest advantage of doing it that way would be to be able to use this also as an IEnumerable<T> over any part of the list in a later step). So here is the code I already have: public static IDictionary<string, double> GetMedianOfRows(this IList<IDictionary<string, double>> list) { //Check of parameters is left out! //Get first item for initialization of result dictionary var firstItem = list[0]; //Create result dictionary and fill in variable names var dict = new Dictionary<string, double>(firstItem.Count); //Iterate over the whole list foreach (IDictionary<string, double> row in list) { //Iterate over each key/value pair within the list foreach (var kvp in row) { //How to determine median of all values? } } return dict; } Just to be sure, here is a little explanation about the Median.
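
    A possible single-pass sketch of that generic Aggregate extension (the Median implementation itself is left out; any Func<IEnumerable<double>, double> such as a median or average can be plugged in, and the per-key buckets are needed anyway because a median cannot be computed without seeing all values of a key):
        public static IDictionary<string, double> Aggregate(
            this IList<IDictionary<string, double>> list,
            Func<IEnumerable<double>, double> aggregator)
        {
            // One pass over the list: collect the values of every key into a bucket...
            var buckets = new Dictionary<string, List<double>>(list[0].Count);
            foreach (var row in list)
            {
                foreach (var kvp in row)
                {
                    List<double> values;
                    if (!buckets.TryGetValue(kvp.Key, out values))
                        buckets[kvp.Key] = values = new List<double>(list.Count);
                    values.Add(kvp.Value);
                }
            }
            // ...then apply the aggregate (median, average, ...) once per key.
            return buckets.ToDictionary(b => b.Key, b => aggregator(b.Value));
        }
        // usage, assuming some Median(IEnumerable<double>) helper exists:
        // var medians = list.Aggregate(values => Median(values));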

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component: // Add the Data Flow Task package.Executables.Add("STOCK:PipelineTask"); // Get the task host wrapper, and the Data Flow task TaskHost taskHost = package.Executables[0] as TaskHost; MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject; // Add OLE-DB source component - ** This is where we need the creation name ** IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New(); componentSource.Name = "OLEDBSource"; componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2"; So as you can see the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2. CreationName Reference ADO NET Destination Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 ADO NET Source Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Aggregate DTSTransform.Aggregate.2 Audit DTSTransform.Lineage.2 Cache Transform DTSTransform.Cache.1 Character Map DTSTransform.CharacterMap.2 Checksum Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b Conditional Split DTSTransform.ConditionalSplit.2 Copy Column DTSTransform.CopyMap.2 Data Conversion DTSTransform.DataConvert.2 Data Mining Model Training MSMDPP.PXPipelineProcessDM.2 Data Mining Query MSMDPP.PXPipelineDMQuery.2 DataReader Destination Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Derived Column DTSTransform.DerivedColumn.2 Dimension Processing MSMDPP.PXPipelineProcessDimension.2 Excel Destination DTSAdapter.ExcelDestination.2 Excel Source DTSAdapter.ExcelSource.2 Export Column TxFileExtractor.Extractor.2 Flat File Destination DTSAdapter.FlatFileDestination.2 Flat File Source DTSAdapter.FlatFileSource.2 Fuzzy Grouping DTSTransform.GroupDups.2 Fuzzy Lookup DTSTransform.BestMatch.2 Import Column TxFileInserter.Inserter.2 Lookup DTSTransform.Lookup.2 Merge DTSTransform.Merge.2 Merge Join DTSTransform.MergeJoin.2 Multicast DTSTransform.Multicast.2 OLE DB Command DTSTransform.OLEDBCommand.2 OLE DB Destination DTSAdapter.OLEDBDestination.2 OLE DB Source DTSAdapter.OLEDBSource.2 Partition Processing MSMDPP.PXPipelineProcessPartition.2 Percentage Sampling DTSTransform.PctSampling.2 Performance Counters Source DataCollectorTransform.TxPerfCounters.1 Pivot DTSTransform.Pivot.2 Raw File Destination DTSAdapter.RawDestination.2 Raw File Source DTSAdapter.RawSource.2 Recordset Destination DTSAdapter.RecordsetDestination.2 RegexClean Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e Row Count DTSTransform.RowCount.2 Row Count Plus Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b Row Number Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, 
Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b Row Sampling DTSTransform.RowSampling.2 Script Component Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Slowly Changing Dimension DTSTransform.SCD.2 Sort DTSTransform.Sort.2 SQL Server Compact Destination Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 SQL Server Destination DTSAdapter.SQLServerDestination.2 Term Extraction DTSTransform.TermExtraction.2 Term Lookup DTSTransform.TermLookup.2 Trash Destination Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc TxTopQueries DataCollectorTransform.TxTopQueries.1 Union All DTSTransform.UnionAll.2 Unpivot DTSTransform.UnPivot.2 XML Source Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dumps out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly. using System; using System.Diagnostics; using Microsoft.SqlServer.Dts.Runtime; public class Program { static void Main(string[] args) { Application application = new Application(); PipelineComponentInfos componentInfos = application.PipelineComponentInfos; foreach (PipelineComponentInfo componentInfo in componentInfos) { Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName); } Console.Read(); } }

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter. Solution Summary: Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 people. The customer had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition, and to send the invoices through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payables processing, increased productivity by 80%, and reduced invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average. Company Overview: Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest. Business Challenges: TXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were to: increase the efficiency of core business processes, such as invoice processing, to support the organization’s desire to maintain its role as the Southwest’s leader in delivering high-quality, low-cost products to the construction industry; meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance; and streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities. Solution Deployed: TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle Partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing. 
Business Results: Significantly lowered resource needs for payables processing through centralization and improved efficiency; enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts; tracked to increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average; achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally; and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals. “Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%—a very important efficiency gain.” Terry Marshall, Vice President of Information Services, Texas Industries, Inc. Additional Information: TXI Customer Snapshot, Oracle WebCenter Content, Oracle WebCenter Capture, Oracle WebCenter Forms Recognition

    Read the article

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering what 'Web Safe Area' I should optimize the app layout and design for. By Web Safe Area I mean the actual area available to display the website in the browser (which is influenced by monitor resolution as well as the space taken up by the browser and OS). I did some investigation and thinking on my own but wanted to share this to see what the general opinion is. Here is what I found: Optimal Display Resolution: w3schools web stats seems to be the most referenced source (however they state that these are results from their site and are biased towards tech-savvy users) http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services) StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month collected from across the StatCounter network of more than 3 million websites) NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month) Display Resolution Summary: There is a bit of variation between the above sources, but in general, as of Jan 2011, it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N.American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers (the app I'm designing is flash based; Apple mobile devices don't support flash, so iPad support isn't a concern). My recommendation would be to optimize around the 1280x768 constraint (*note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution). Browser + OS Constraints: To further add to the constraints we have to subtract the space taken up by the browser (assuming IE, which is the most space consuming) and the OS (assuming WinXP-Win7): Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px). The default IE8 view uses up 25px at the bottom of the screen with the status bar and a further 120px at the top of the screen with the windows title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar). Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline. This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583. Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app. 
Any help on this would be greatly appreciated! Thanks. EDIT Another caveat to my line of thinking above is that different browsers actually take up different amounts of pixels based on the OS they're running on. For example, under WinXP IE8 takes up 142px on top of the screen (instead the aforementioned 120px for Win7) because the file menu shows up by default on XP while in Win7 the file menu is hidden by default. So it looks like on WinXP + IE8 the Web Safe Area would be a mere 572px (768px-142-30-24=572)

    Read the article

  • Parameterized StreamInsight Queries

    - by Roman Schindlauer
    The changes in our APIs enable a set of scenarios that were either not possible before or could only be achieved through workarounds. One such use case that people ask about frequently is the ability to parameterize a query and instantiate it with different values instead of re-deploying the entire statement. I’ll demonstrate how to do this in StreamInsight 2.1 and combine it with a method of using subjects for dynamic query composition in a mini-series of (at least) two blog articles. Let’s start with something really simple: I want to deploy a windowed aggregate to a StreamInsight server, and later use it with different window sizes. The LINQ statement for such an aggregate is very straightforward and familiar: var result = from win in stream.TumblingWindow(TimeSpan.FromSeconds(5))               select win.Avg(e => e.Value); Obviously, we had to use an existing input stream object as well as a concrete TimeSpan value. If we want to be able to re-use this construct, we can define it as a IQStreamable: var avg = myApp     .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>         from win in s.TumblingWindow(w)         select win.Avg(e => e.Value)); The DefineStreamable API lets us define a function, in our case from a IQStreamable (the input stream) and a TimeSpan (the window length) to an IQStreamable (the result). We can then use it like a function, with the input stream and the window length as parameters: var result = avg(stream, TimeSpan.FromSeconds(5)); Nice, but you might ask: what does this save me, except from writing my own extension method? Well, in addition to defining the IQStreamable function, you can actually deploy it to the server, to make it re-usable by another process! When we deploy an artifact in V2.1, we give it a name: var avg = myApp     .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>         from win in s.TumblingWindow(w)         select win.Avg(e => e.Value))     .Deploy("AverageQuery"); When connected to the same server, we can now use that name to retrieve the IQStreamable and use it with our own parameters: var averageQuery = myApp     .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery"); var result = averageQuery(stream, TimeSpan.FromSeconds(5)); Convenient, isn’t it? Keep in mind that, even though the function “AverageQuery” is deployed to the server, its logic will still be instantiated into each process when the process is created. The advantage here is being able to deploy that function, so another client who wants to use it doesn’t need to ask the author for the code or assembly, but just needs to know the name of deployed entity. A few words on the function signature of GetStreamable: the last type parameter (here: double) is the payload type of the result, not the actual result stream’s type itself. The returned object is a function from IQStreamable<SourcePayload> and TimeSpan to IQStreamable<double>. In the next article we will integrate this usage of IQStreamables with Subjects in StreamInsight, so stay tuned! Regards, The StreamInsight Team

    Read the article

  • ASP MVC2 model binding issue on POST

    - by Brandon Linton
    So I'm looking at moving from MVC 1.0 to MVC 2.0 RTM. One of the conventions I'd like to start following is using the strongly-typed HTML helpers for generating controls like text boxes. However, it looks like it won't be an easy jump. I tried migrating my first form, replacing lines like this: <%= Html.TextBox("FirstName", Model.Data.FirstName, new {maxlength = 30}) %> ...for lines like this: <%= Html.TextBoxFor(x => x.Data.FirstName, new {maxlength = 30}) %> Previously, this would map into its appropriate view model on a POST, using the following method signature: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Registration(AccountViewInfo viewInfo) Instead, it currently gets an empty object back. I believe the disconnect is in the fact that we pass the view model into a larger aggregate object that has some page metadata and other fun stuff along with it (hence x.Data.FirstName instead of x.FirstName). So my question is: what is the best way to use the strongly-typed helpers while still allowing the MVC framework to appropriately cast the form collection to my view-model as it does in the original line? Is there any way to do it without changing the aggregate type we pass to the view? Thanks!
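
    For what it's worth, one approach that is often suggested (not necessarily the only one): tell the default binder about the prefix the helper now emits. TextBoxFor(x => x.Data.FirstName) renders a field named "Data.FirstName", so the POST action can keep its AccountViewInfo parameter if it declares that prefix, roughly like this:
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Registration([Bind(Prefix = "Data")] AccountViewInfo viewInfo)
        {
            // "Data" matches the x => x.Data.* expressions, so the posted Data.* fields
            // are bound back onto the view model without changing the aggregate view type
            return View();
        }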

    Read the article

  • Formula parsing / evaluation routine or library with generic DLookup functionality

    - by tbone
    I am writing a .NET application where I must support user-defined formulas that can perform basic mathematics as well as access data from any arbitrary table in the database. I have the math part working, using JScript Eval(). What I haven't decided on is a nice way to do the generic table lookups. For example, I may have a formula something like: Column: BonusAmount Formula: {CurrentSalary} * 1.5 * {[SystemSettings][Value][SettingName=CorpBonus AND Year={Year}]} So, in this example I would replace {xxx} and {Year} with the value of column xxx from the current table, and I would replace the second part with the value of (select Value from SystemSettings WHERE SettingName='CorpBonus' AND Year=2008). So, basically, I am looking for something very much like the MS Access DLookup function: DLookup ( expression, domain, [criteria] ) DLookup("[UnitPrice]", "Order Details", "OrderID = 10248") But I also need an overall parsing routine that can tell whether to just look up in the current row or to look into another table. It would also be nice to support aggregate functions (i.e. DAvg, DMax, etc.), as well as to have all the weird edge cases handled. So I wonder if anyone knows of any sort of existing library, or has a nice routine, that can handle these formula parsing and database lookup / aggregate function resolution requirements.
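
    For illustration only (not an existing library, and the names below are made up): the DLookup part can be reduced to a small helper that builds a scalar query, which the formula parser calls whenever it sees the {[Table][Column][Criteria]} pattern. Note that the criteria string is concatenated as-is, so this is only safe for trusted formula authors:
        public static object DLookup(string expression, string domain, string criteria, IDbConnection connection)
        {
            // e.g. DLookup("Value", "SystemSettings", "SettingName='CorpBonus' AND Year=2008")
            // or an aggregate: DLookup("AVG(Value)", "SystemSettings", "Year=2008")
            var sql = string.Format("SELECT {0} FROM [{1}]", expression, domain);
            if (!string.IsNullOrEmpty(criteria))
                sql += " WHERE " + criteria;
            using (var command = connection.CreateCommand())
            {
                command.CommandText = sql;
                return command.ExecuteScalar();
            }
        }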

    Read the article

  • How can I serialize and communicate ActiveRecord instances across identical Rails apps?

    - by Blaine LaFreniere
    The main idea is that I have several worker instances of a Rails app, and then a main aggregate app. I want to do something like the following pseudo-code: posts = Post.all.to_json( :include => { :comments => { :include => :blah } }) # send data to another, identical, exactly the same Rails app # ... # Fast forward to the separate but identical Rails app: # ... # remote_posts is the posts result from the first Rails app posts = JSON.parse(remote_posts) posts.each do |post| p = Post.new p = post p.save end I'm shying away from Active Resource because I have thousands of records to create, which would mean thousands of requests, one for each record. Unless there is a way to do it all in one request with Active Resource that is simple, I'd like to avoid it. Format doesn't matter; whatever makes it convenient. The IDs don't need to be sent, because the other app will just be creating records and assigning new IDs in the "aggregate" system. The hierarchy would need to be preserved (e.g. "Hey other Rails app, I have genres, and each genre has an artist, and each artist has an album, and each album has songs", etc.).
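
    In case it helps, a rough sketch of the receiving side that creates the whole batch in one transaction (model names and associations are assumed to mirror the sending app, and it assumes include_root_in_json is off; not tested):
        posts = JSON.parse(remote_posts)
        Post.transaction do
          posts.each do |attrs|
            comments = attrs.delete("comments") || []
            post = Post.create!(attrs.except("id"))
            comments.each do |c|
              # any nested includes (the :blah level) would need the same delete/create treatment
              post.comments.create!(c.except("id", "blah"))
            end
          end
        end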

    Read the article

  • sqlite3 JOIN, GROUP_CONCAT using distinct with custom separator

    - by aiwilliams
    Given a table of "events" where each event may be associated with zero or more "speakers" and zero or more "terms", those records associated with the events through join tables, I need to produce a table of all events with a column in each row which represents the list of "speaker_names" and "term_names" associated with each event. However, when I run my query, I have duplication in the speaker_names and term_names values, since the join tables produce a row per association for each of the speakers and terms of the events: 1|Soccer|Bobby|Ball 2|Baseball|Bobby - Bobby - Bobby|Ball - Bat - Helmets 3|Football|Bobby - Jane - Bobby - Jane|Ball - Ball - Helmets - Helmets The group_concat aggregate function has the ability to use 'distinct', which removes the duplication, though sadly it does not support that alongside the custom separator, which I really need. I am left with these results: 1|Soccer|Bobby|Ball 2|Baseball|Bobby|Ball,Bat,Helmets 3|Football|Bobby,Jane|Ball,Helmets My question is this: Is there a way I can form the query or change the data structures in order to get my desired results? Keep in mind this is a sqlite3 query I need, and I cannot add custom C aggregate functions, as this is for an Android deployment. I have created a gist which makes it easy for you to test a possible solution: https://gist.github.com/4072840

    Read the article

  • correct way to create a pivot table in postgresql using CASE WHEN

    - by mojones
    I am trying to create a pivot table type view in postgresql and am nearly there! Here is the basic query: select acc2tax_node.acc, tax_node.name, tax_node.rank from tax_node, acc2tax_node where tax_node.taxid=acc2tax_node.taxid and acc2tax_node.acc='AJ012531'; And the data: acc | name | rank ----------+-------------------------+-------------- AJ012531 | Paromalostomum fusculum | species AJ012531 | Paromalostomum | genus AJ012531 | Macrostomidae | family AJ012531 | Macrostomida | order AJ012531 | Macrostomorpha | no rank AJ012531 | Turbellaria | class AJ012531 | Platyhelminthes | phylum AJ012531 | Acoelomata | no rank AJ012531 | Bilateria | no rank AJ012531 | Eumetazoa | no rank AJ012531 | Metazoa | kingdom AJ012531 | Fungi/Metazoa group | no rank AJ012531 | Eukaryota | superkingdom AJ012531 | cellular organisms | no rank What I am trying to get is the following: acc | species | phylum AJ012531 | Paromalostomum fusculum | Platyhelminthes I am trying to do this with CASE WHEN, so I've got as far as the following: select acc2tax_node.acc, CASE tax_node.rank WHEN 'species' THEN tax_node.name ELSE NULL END as species, CASE tax_node.rank WHEN 'phylum' THEN tax_node.name ELSE NULL END as phylum from tax_node, acc2tax_node where tax_node.taxid=acc2tax_node.taxid and acc2tax_node.acc='AJ012531'; Which gives me the output: acc | species | phylum ----------+-------------------------+----------------- AJ012531 | Paromalostomum fusculum | AJ012531 | | AJ012531 | | AJ012531 | | AJ012531 | | AJ012531 | | AJ012531 | | Platyhelminthes AJ012531 | | AJ012531 | | AJ012531 | | AJ012531 | | AJ012531 | | AJ012531 | | AJ012531 | | Now I know that I have to group by acc at some point, so I try select acc2tax_node.acc, CASE tax_node.rank WHEN 'species' THEN tax_node.name ELSE NULL END as sp, CASE tax_node.rank WHEN 'phylum' THEN tax_node.name ELSE NULL END as ph from tax_node, acc2tax_node where tax_node.taxid=acc2tax_node.taxid and acc2tax_node.acc='AJ012531' group by acc2tax_node.acc; But I get the dreaded ERROR: column "tax_node.rank" must appear in the GROUP BY clause or be used in an aggregate function All the previous examples I've been able to find use something like SUM() around the CASE statements, so I guess that is the aggregate function. I have tried using FIRST(): select acc2tax_node.acc, FIRST(CASE tax_node.rank WHEN 'species' THEN tax_node.name ELSE NULL END) as sp, FIRST(CASE tax_node.rank WHEN 'phylum' THEN tax_node.name ELSE NULL END) as ph from tax_node, acc2tax_node where tax_node.taxid=acc2tax_node.taxid and acc2tax_node.acc='AJ012531' group by acc2tax_node.acc; but get the error: ERROR: function first(character varying) does not exist Can anyone offer any hints?
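
    For reference, the trick most answers to this kind of pivot rely on (no guarantee it is the accepted one here) is to wrap the CASE expressions in MAX(): every non-matching row yields NULL, MAX() keeps the single non-NULL name per group, and GROUP BY acc is then enough:
        select acc2tax_node.acc,
               MAX(CASE tax_node.rank WHEN 'species' THEN tax_node.name END) as species,
               MAX(CASE tax_node.rank WHEN 'phylum'  THEN tax_node.name END) as phylum
        from tax_node, acc2tax_node
        where tax_node.taxid=acc2tax_node.taxid
          and acc2tax_node.acc='AJ012531'
        group by acc2tax_node.acc;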

    Read the article

  • subtotals in columns using reshape2 in R

    - by user1043144
    I have spent some time now learning reshape2 and plyr but I still do not get it. This time I have a problem with (a) subtotals and (b) passing different aggregate functions. Here is an example using data from the excellent tutorial on the blog of mrdwab, http://news.mrdwab.com/ # libraries library(plyr) library(reshape2) # get data and add few more variables book.sales = read.csv("http://news.mrdwab.com/data-booksales") book.sales$Stock = book.sales$Quantity + 10 book.sales$SubjCat[(book.sales$Subject == 'Economics') | (book.sales$Subject == 'Management') ] <- '1_EconSciences' book.sales$SubjCat[book.sales$Subject %in% c('Anthropology', 'Politics', 'Sociology') ] <- '2_SocSciences' book.sales$SubjCat[book.sales$Subject %in% c('Communication', 'Fiction', 'History', 'Research', 'Statistics') ] <- '3_other' # to get to my starting dataframe (close to the project I am working on) book.sales1 <- ddply(book.sales, c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'), summarize, Stock = sum(Stock), Sold = sum(Quantity), Ratio = round((100 * sum(Quantity)/ sum(Stock)), digits = 1)) #melt it m.book.sales = melt(data = book.sales1, id.vars = c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'), measure.vars = c('Stock', 'Sold', 'Ratio')) # cast it Tab1 <- dcast(data = m.book.sales, formula = Region + Representative ~ Publisher + variable, fun.aggregate = sum, margins = c('Region', 'Representative')) Now my questions: I have been able to add the subtotals in rows, but is it possible also to add margins in the columns? Say, for example, the total sold across all publishers. There is also a problem with the “ratio” columns: how can I get “mean” instead of “sum” for this variable? P.S.: I have seen some examples using reshape. Would you recommend using it instead of reshape2 (which does not seem to include the functionality of the two functions)?
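
    One possible workaround for the ratio columns (a sketch, not a complete answer to the margins question): cast only the additive measures with sum, and recompute the ratio from the casted totals afterwards instead of summing the precomputed Ratio values; the publisher name in the last line is a placeholder for whatever column names dcast produces:
        m.sums <- melt(data = book.sales1,
                       id.vars = c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'),
                       measure.vars = c('Stock', 'Sold'))
        Tab2 <- dcast(data = m.sums, formula = Region + Representative ~ Publisher + variable,
                      fun.aggregate = sum, margins = c('Region', 'Representative'))
        # then, per publisher column pair:
        # Tab2$SomePublisher_Ratio <- round(100 * Tab2$SomePublisher_Sold / Tab2$SomePublisher_Stock, 1)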

    Read the article

  • How do I sign requests reliably for the Last.fm api in C#?

    - by Arda Xi
    I'm trying to implement authorization through Last.fm. I'm submitting my arguments as a Dictionary to make the signing easier. This is the code I'm using to sign my calls: public static string SignCall(Dictionary<string, string> args) { IOrderedEnumerable<KeyValuePair<string, string>> sortedArgs = args.OrderBy(arg => arg.Key); string signature = sortedArgs.Select(pair => pair.Key + pair.Value). Aggregate((first, second) => first + second); return MD5(signature + SecretKey); } I've checked the output in the debugger, it's exactly how it should be, however, I'm still getting WebExceptions every time I try. Here's my code I use to generate the URL in case it'll help: public static string GetSignedURI(Dictionary<string, string> args, bool get) { var stringBuilder = new StringBuilder(); if (get) stringBuilder.Append("http://ws.audioscrobbler.com/2.0/?"); foreach (var kvp in args) stringBuilder.AppendFormat("{0}={1}&", kvp.Key, kvp.Value); stringBuilder.Append("api_sig="+SignCall(args)); return stringBuilder.ToString(); } And sample usage to get a SessionKey: var args = new Dictionary<string, string> { {"method", "auth.getSession"}, {"api_key", ApiKey}, {"token", token} }; string url = GetSignedURI(args, true); EDIT: Oh, and the code references an MD5 function implemented like this: public static string MD5(string toHash) { byte[] textBytes = Encoding.UTF8.GetBytes(toHash); var cryptHandler = new System.Security.Cryptography.MD5CryptoServiceProvider(); byte[] hash = cryptHandler.ComputeHash(textBytes); return hash.Aggregate("", (current, a) => current + a.ToString("x2")); }

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design: +---------+---------+------------+---------------------+---------------------+--------+------+ | User ID | Work ID | Machine ID | Event Start Time | Event End Time | Output | Time | +---------+---------+------------+---------------------+---------------------+--------+------+ | 080025 | ABC123 | M01 | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120 | 930 | +---------+---------+------------+---------------------+---------------------+--------+------+ Reprocessing dis-aggregation that we would like to do would be to transform table content based on a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like: +---------+---------+------------+---------------------+--------+ | User ID | Work ID | Machine ID | Production Minute | Output | +---------+---------+------------+---------------------+--------+ | 080025 | ABC123 | M01 | 2010-01-24 16:19 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:20 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:21 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:22 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:23 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:24 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:25 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:26 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:27 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:28 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:29 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:30 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:31 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:22 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:33 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:34 | 133 | +---------+---------+------------+---------------------+--------+ So the reprocessing would take an existing row of data created at the granularity of production event and modify the granularity to minutes, eliminating redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code...but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of a INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
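
    A sketch of the INSERT ... SELECT direction (table and column names are placeholders, and it assumes a small helper table ints holding the integers 0..N, used to generate one row per elapsed minute; since the join runs over every source row, all machines are covered in one statement):
        INSERT INTO production_by_minute (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               DATE_FORMAT(e.event_start_time + INTERVAL i.n MINUTE, '%Y-%m-%d %H:%i') AS production_minute,
               ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1)) AS output
        FROM production_events e
        JOIN ints i
          ON i.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);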

    Read the article

  • Collapsing data frame by selecting one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column. In other words, the first row from each group. For example, I'd like to convert this > d = data.frame(x=c(1,1,2,4),y=c(10,11,12,13),z=c(20,19,18,17)) > d x y z 1 1 10 20 2 1 11 19 3 2 12 18 4 4 13 17 Into this: x y z 1 1 11 19 2 2 12 18 3 4 13 17 I'm using aggregate to do this currently, but the performance is unacceptable with more data: > d.ordered = d[order(-d$y),] > aggregate(d.ordered,by=list(key=d.ordered$x),FUN=function(x){x[1]}) I've tried split/unsplit with the same function argument as here, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's length vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?
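
    For what it's worth, a common idiom for "first row per group" that avoids aggregate() is duplicated(); applied to the example above (assuming the frame is first ordered so the preferred row comes first in each group):
        d.ordered <- d[order(d$x, -d$y), ]
        d.ordered[!duplicated(d.ordered$x), ]
        #   x  y  z
        #   1 11 19
        #   2 12 18
        #   4 13 17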

    Read the article

  • Use LINQ to group a sequence by date with no gaps

    - by Codesleuth
    I'm trying to select a subgroup of a list where items have contiguous dates, e.g. ID StaffID Type Title ActivityDate -- ------- ---- ----------------- ------------ 1 41 1 Doctors 07/06/2010 2 41 0 Meeting with John 08/06/2010 3 41 0 Meeting Continues 09/06/2010 4 41 0 Meeting Continues 10/06/2010 5 41 3 Annual Leave 11/06/2010 6 41 0 Meeting Continues 14/06/2010 I'm using a pivot point each time, so take the example pivot item as 3, I'd like to get the following resulting contiguous events around the pivot: ID StaffID Type Title ActivityDate -- ------- ---- ----------------- ------------ 2 41 0 Meeting with John 08/06/2010 3 41 0 Meeting Continues 09/06/2010 4 41 0 Meeting Continues 10/06/2010 My current implementation is a laborious "walk" into the past, then into the future, to build the list: var orderedEvents = activities.OrderBy(a => a.ActivityDate).ToArray(); // Walk into the past until a gap is found var preceedingEvents = orderedEvents.TakeWhile(a => a.ID != activity.ID); DateTime dayBefore; var previousEvent = activity; while (previousEvent != null) { dayBefore = previousEvent.ActivityDate.AddDays(-1).Date; previousEvent = preceedingEvents.TakeWhile(a => a.ID != previousEvent.ID).LastOrDefault(); if (previousEvent != null) { if (previousEvent.ActivityDate.Date == dayBefore) relatedActivities.Insert(0, previousEvent); else previousEvent = null; } } // Walk into the future until a gap is found var followingEvents = orderedEvents.SkipWhile(a => a.ID != activity.ID); DateTime dayAfter; var nextEvent = activity; while (nextEvent != null) { dayAfter = nextEvent.ActivityDate.AddDays(1).Date; nextEvent = followingEvents.SkipWhile(a => a.ID != nextEvent.ID).Skip(1).FirstOrDefault(); if (nextEvent != null) { if (nextEvent.ActivityDate.Date == dayAfter) relatedActivities.Add(nextEvent); else nextEvent = null; } } The list relatedActivities should then contain the contiguous events, in order. Is there a better way (maybe using LINQ) for this? I had an idea of using .Aggregate() but couldn't think how to get the aggregate to break out when it finds a gap in the sequence.
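
    One LINQ-based sketch (it assumes at most one activity per calendar day, as in the sample data): subtracting a running index from each date gives a key that stays constant within a contiguous run, so a plain GroupBy does the walking and the pivot just selects its own group:
        var related = activities
            .OrderBy(a => a.ActivityDate)
            .Select((a, i) => new { Activity = a, Key = a.ActivityDate.Date.AddDays(-i) })
            .GroupBy(x => x.Key, x => x.Activity)
            .Single(g => g.Any(a => a.ID == activity.ID)) // the run containing the pivot
            .ToList();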

    Read the article

  • MySQL: Get average of time differences?

    - by Nebs
    I have a table called Sessions with two datetime columns: start and end. For each day (YYYY-MM-DD) there can be many different start and end times (HH:ii:ss). I need to find a daily average of all the differences between these start and end times. An example of a few rows would be: start: 2010-04-10 12:30:00 end: 2010-04-10 12:30:50 start: 2010-04-10 13:20:00 end: 2010-04-10 13:21:00 start: 2010-04-10 14:10:00 end: 2010-04-10 14:15:00 start: 2010-04-10 15:45:00 end: 2010-04-10 15:45:05 start: 2010-05-10 09:12:00 end: 2010-05-10 09:13:12 ... The time differences (in seconds) for 2010-04-10 would be: 50 60 300 5 The average for 2010-04-10 would be 103.75 seconds. I would like my query to return something like: day: 2010-04-10 ave: 103.75 day: 2010-05-10 ave: 72 ... I can get the time difference grouped by start date but I'm not sure how to get the average. I tried using the AVG function but I think it only works directly on column values (rather than the result of another aggregate function). This is what I have: SELECT TIME_TO_SEC(TIMEDIFF(end,start)) AS timediff FROM Sessions GROUP BY DATE(start) Is there a way to get the average of timediff for each start date group? I'm new to aggregate functions so maybe I'm misunderstanding something. If you know of an alternate solution please share. I could always do it ad hoc and compute the average manually in PHP but I'm wondering if there's a way to do it in MySQL so I can avoid running a bunch of loops. Thanks.
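
    In case it helps: aggregate functions can wrap expressions, not just plain columns, so the average can be taken directly over the computed difference and grouped by day (a sketch against the table as described):
        SELECT DATE(start) AS day,
               AVG(TIME_TO_SEC(TIMEDIFF(end, start))) AS ave
        FROM Sessions
        GROUP BY DATE(start);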

    Read the article

  • compare two text files using linq?

    - by bala3569
    I have 4 text files in one folder and a pattern.txt to compare these text files against. In pattern.txt I have: insert update delete drop I need to compare this pattern file with those four text files, and if any of these patterns matches a line in those text files I have to write those lines to another log file. I read those files using LINQ; now I need to compare them against the patterns and write the matching lines, with their line numbers, to a text file. Here is my code: var foldercontent = Directory.GetFiles(pathA) .Select(filename => File.ReadAllText(filename)) .Aggregate(new StringBuilder(), (sb, s) => sb.Append(s).Append(Environment.NewLine), sb => sb.ToString()); var pattern = File.ReadAllLines(pathB).Aggregate(new StringBuilder(), (sb, s) => sb.Append(s).Append(Environment.NewLine), sb => sb.ToString()); using (var dest = File.AppendText(Path.Combine(_logFolderPath, "log.txt"))) { //dest.WriteLine("LineNo : " + counter.ToString() + " : " + "" + line); } EDIT I have already done this in plain C# for two text files, but I need it in LINQ: while ((line = file.ReadLine()) != null) { if (line.IndexOf(line2, StringComparison.CurrentCultureIgnoreCase) != -1) { dest.WriteLine("LineNo : " + counter.ToString() + " : " + " " + line.TrimStart()); } counter++; } file.BaseStream.Seek(0, SeekOrigin.Begin); counter = 1;
    Read the article

  • Modify passed, nested dict/list

    - by Gerenuk
    I was thinking of writing a function to normalize some data. A simple approach is def normalize(l, aggregate=sum, norm_by=operator.truediv): aggregated=aggregate(l) for i in range(len(l)): l[i]=norm_by(l[i], aggregated) l=[1,2,3,4] normalize(l) l -> [0.1, 0.2, 0.3, 0.4] However, for nested lists and dicts where I want to normalize over an inner index this doesn't work. I mean I'd like to get l=[[1,100],[2,100],[3,100],[4,100]] normalize(l, ?? ) l -> [[0.1,100],[0.2,100],[0.3,100],[0.4,100]] Any ideas how I could implement such a normalize function? Maybe it would be crazy cool to write normalize(l[...][0]). Is it possible to make this work? Or any other ideas? Also, not only lists but also dicts could be nested. Hmm... EDIT: I just found out that numpy offers such a syntax (for lists however). Anyone know how I would implement the ellipsis trick myself?
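
    One way to keep a single normalize function without the ellipsis syntax is to pass accessor functions for nested structures; a small sketch (the get/put parameter names are made up):
        import operator

        def normalize(items, get=lambda item: item, put=None,
                      aggregate=sum, norm_by=operator.truediv):
            total = aggregate(get(item) for item in items)
            for i, item in enumerate(items):
                value = norm_by(get(item), total)
                if put is None:
                    items[i] = value      # flat list of numbers
                else:
                    put(item, value)      # nested list/dict: caller says where it goes

        l = [1, 2, 3, 4]
        normalize(l)  # l -> [0.1, 0.2, 0.3, 0.4]

        nested = [[1, 100], [2, 100], [3, 100], [4, 100]]
        normalize(nested, get=lambda row: row[0],
                  put=lambda row, v: row.__setitem__(0, v))
        # nested -> [[0.1, 100], [0.2, 100], [0.3, 100], [0.4, 100]]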

    Read the article

  • How to write a JOIN statement to combine data from disparate tables

    - by Amarundo
    I have the following 2 procedures that I use as my source for a report. As of now, I'm presenting 2 different tables in my SQL Server Reporting Services 2008 R2 report, because it doesn't let me put them together as they belong to 2 different data sets. I want to present them in a single table, but I have not been successful trying to use JOIN here. How do I do that? NOTE: cName in IAgentQueueStats corresponds to UserId in AgentActivityLog. /*** Aggregate values for Call Center Agents for calls, talk and hold time ***/ /*** The detail/row values is per 30-minute interval ***/ ALTER PROCEDURE [dbo].[sp_IAgentQueueStats_OnlyCalls_Grouped] @p_StartDate datetime, @p_EndDate datetime, @p_Agents varchar(8000) AS SELECT [cName] ,sum([nAnswered]) SumNAnswered ,sum([nAnsweredAcd]) SumNAnsweredAcd ,sum([tTalkAcd]) SumTTalkAcd ,sum([nHoldAcd]) SumNHoldAcd ,sum([tHoldAcd]) SumTHoldAcd ,sum([tAcw]) SumTAcw FROM [I3_IC].[dbo].[IAgentQueueStats] WHERE dIntervalStart between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate) AND CHARINDEX ( cName ,@p_Agents)> 0 AND cReportGroup <> '*' AND cHKey3 = '*' and cHKey4 ='*' AND nEnteredAcd > 0 AND cReportGroup <> 'CCFax Email' GROUP BY cName And here is the second one: /*** Aggregate values for Call Center Agents for status/activity time ***/ /*** The detail/row values is per start-time/end-time ***/ ALTER PROCEDURE [dbo].[sp_AgentActivity_Grouped] @p_StartDate datetime, @p_EndDate datetime, @p_Agents varchar(8000) AS SELECT [UserId],[StatusCategory],SUM([StateDuration]) [StatusDuration] FROM ( SELECT [UserId] ,[StatusGroup] ,[StatusKey] , CASE [StatusKey] WHEN 'Available' THEN 'Productive' WHEN 'Follow Up' THEN 'Productive' WHEN 'Campaign Call' THEN 'Productive' WHEN 'Awaiting Callback' THEN 'Productive' WHEN 'In a Meeting' THEN 'Not Your Fault' WHEN 'Project Work' THEN 'Not Your Fault' WHEN 'At a Training Session'THEN 'Not Your Fault' WHEN 'System Issues' THEN 'Not Your Fault' WHEN 'Test' THEN 'Not Your Fault' WHEN 'At Lunch' THEN 'Non Productive' WHEN 'Available, Forward' THEN 'Non Productive' WHEN 'Available, Follow-Me' THEN 'Non Productive' WHEN 'At Play' THEN 'Non Productive' WHEN 'AcdAgentNotAnswering' THEN 'Non Productive' WHEN 'Do Not Disturb' THEN 'Non Productive' WHEN 'Available, No ACD' THEN 'Non Productive' WHEN 'Away from desk' THEN 'Non Productive' ELSE [StatusKey] END StatusCategory ,stateduration FROM [I3_IC].[dbo].[AgentActivityLog] WHERE [StatusDateTime] between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate) AND CHARINDEX ( [UserId] ,@p_Agents)> 0 AND [StatusKey] not in ('Gone Home','Out of the Office','On Vacation','Out of Town') ) a GROUP BY [UserId],[StatusCategory] ORDER BY [UserId], [StatusCategory] desc BTW, if I take some time to comment/reply on your posts, it's not lack of interest, but of understanding...

    Read the article

  • Task Parallel Library exception handling

    - by user1680766
    When handling exceptions in TPL tasks I have come across two ways to handle exceptions. The first catches the exception within the task and returns it within the result like so: var task = Task<Exception>.Factory.StartNew( () => { try { // Do Something return null; } catch (System.Exception e) { return e; } }); task.ContinueWith( r => { if (r.Result != null) { // Handle Exception } }); The second is the one shown within the documentation and I guess the proper way to do things: var task = Task.Factory.StartNew( () => { // Do Something }); task.ContinueWith( r => { if (r.Exception != null) { // Handle Aggregate Exception r.Exception.Handle(y => true); } }); I am wondering if there is anything wrong with the first approach? I have received 'unhandled aggregate exception' exceptions every now and again using this technique and was wondering how this can happen?
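
    For comparison, a variant of the second approach that only runs the continuation when the task actually faulted, so the null check disappears (a sketch, not necessarily the answer to the question above):
        var task = Task.Factory.StartNew(() =>
        {
            // Do Something
        });
        task.ContinueWith(
            t => t.Exception.Handle(e => true), // t.Exception is the AggregateException
            TaskContinuationOptions.OnlyOnFaulted);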

    Read the article

  • How to reduce a data frame keeping the order for other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve other columns but keeping the values from the same rows where each maximum value was selected. An example will make this explanation easier. Let us assume we have the following data frame: dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)), CFG=rep(1:4, 4), VALUE=runif(4 * 4) )) This gives me: BENCH CFG VALUE 1 a 1 0.98828096 2 a 2 0.19630597 3 a 3 0.83539540 4 a 4 0.90988296 5 b 1 0.01191147 6 b 2 0.35164194 7 b 3 0.55094787 8 b 4 0.20744004 9 c 1 0.49864470 10 c 2 0.77845408 11 c 3 0.25278871 12 c 4 0.23440847 13 d 1 0.29795494 14 d 2 0.91766057 15 d 3 0.68044728 16 d 4 0.18448748 Now, I want to reduce the data in order to select the maximum VALUE for each different BENCH: aggregate(VALUE ~ BENCH, dframe, FUN=max) This gives me the expected result: BENCH VALUE 1 a 0.9882810 2 b 0.5509479 3 c 0.7784541 4 d 0.9176606 Next, I tried to preserve other columns: aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max) This reduction returns: BENCH VALUE CFG 1 a 0.9882810 4 2 b 0.5509479 4 3 c 0.7784541 4 4 d 0.9176606 4 Both VALUE and CFG are reduced using max function. But this is not what I want. For instance, in this example I would like to obtain: BENCH VALUE CFG 1 a 0.9882810 1 2 b 0.5509479 3 3 c 0.7784541 2 4 d 0.9176606 2 where CFG is not reduced, but it just keeps the value associated to the maximum VALUE for each different BENCH. How could I change my reduction in order to obtain the last result shown?
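
    A base-R sketch that keeps whole rows instead of aggregating each column separately: split by BENCH, take the row holding the maximal VALUE in each piece, and bind the pieces back together:
        reduced <- do.call(rbind, lapply(split(dframe, dframe$BENCH),
                                         function(d) d[which.max(d$VALUE), ]))
        reduced   # one row per BENCH, with the CFG belonging to the maximal VALUE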

    Read the article

  • MDX: Filtering a member set by a measure's table values

    - by oyvinro
    I have some numbers in a fact table, and have generated a measure which use the SUM aggregator to summarize the numbers. But the problem is that I only want to sum the numbers that are higher than, say 10. I tried using a generic expression in the measure definition, and that works of course, but the problem is that I need to be able to dynamically set that value, because it's not always 10, meaning users should be able to select it themselves. More specifically, my current MDX looks like this: WITH SET [Email Measures] AS '{[Measures].[Number Of Answered Cases], [Measures].[Max Expedition Time First In Case], [Measures].[Avg Expedition Times First In Case], [Measures].[Number Of Incoming Email Requests], [Measures].[Avg Number Of Emails In Cases], [Measures].[Avg Expedition Times Total],[Measures].[Number Of Answered Incoming Emails]}' SET [Organizations] AS '{[Organization.Id].[860]}' SET [Operators] AS '{[Operator.Id].[3379],[Operator.Id].[3181]}' SET [Email Accounts] AS '{[Email Account.Id].[6]}' MEMBER [Time.Date].[Date Period] AS Aggregate ({[Time.Date].[2008].[11].[11] :[Time.Date].[2009].[1].[2] }) MEMBER [Email.Type].[Email Types] AS Aggregate ({[Email.Type].[0]}) SELECT {[Email Measures]} ON columns, [Operators] ON rows FROM [Email_Fact] WHERE ( [Time.Date].[Date Period] ) Now, the member in question is the calculated member [Avg Expedition Times Total]. This member takes in two measures; [Sum Expedition Times] and [Nr of Expedition Times] and splits one on the other to get the average, all this presently works. However, I want [Sum Expedition Times] to only summarize values over or under a parameter of my/the user's wish. How do I filter the numbers [Sum Expedition Times] iterates through, rather than filtering on the sum that the measure gives me in the end?

    Read the article

  • What makes my code DDD (domain-driven design) qualified?

    - by oykuo
    Hi All, I'm new to DDD and am thinking about using this design technique in my project. However, what strikes me about DDD is how basic the idea is. Unlike other design techniques such as MVC and TDD, it doesn't seem to contain any groundbreaking ideas. For example, I'm sure some of you will have the same feeling that the idea of root aggregates and repositories is nothing new, because when you were writing MVC web applications you had to have one single master object (i.e. the root aggregate) that contains other minor objects (i.e. value objects and entities) in the model layer in order to send data to a strongly typed view. To me, the only new ideas in DDD are probably the "Smart" entities (i.e. you are supposed to have business rules on root aggregates) and the separation between value objects, root aggregates and entities. Can anyone tell me if I have missed anything here? If that's all there is to DDD, and I update one of my existing MVC applications with the above 2 new ideas, can I claim it's a TDD, MVC and DDD application?

    Read the article

  • MS SQL: How to get the newest date in a table with several equal keys

    - by Qohelet
    Unfortunately my knowledge related to statements like "group by" and "having" is quite limited, so hopefully you can help me: I have a view -here's an excerpt- (if we have some Europeans here - it's v021 of Winline/Mesonic): ID | Artikelbezeichnung1 | Bez2 | mesoyear _____________________________________________________________________ 1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi | 1344 1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi | 1356 1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi | 1356 1401MA71 | Marga ,Saracena beige,1S,33,3/33,3 | Marazzi | 1344 1401MA71 | Marga ,Saracena beige,1S,33,3/33,3 | Marazzi | 1356 1401MA71 | Marga ,Saracena beige,1S,33,3/33,3 | Marazzi | 1356 2401CR13 | Crista,Mahon rojo,1S,33,3/33,3 | Cristacer | 1332 2401CR13 | Crista,Mahon rojo,1S,33,3/33,3 | Cristacer | 1344 So the ID is not unique and I just need the one with the highest val in "mesoyear". My fist solution was: Select c015 as ID, c003 as Artikelbezeichnung1, c074 as Bez2, mesoyear from CWLDATEN_91.dbo.v021 group by c015 having mesoyear = max(mesoyear) But this doesn't work at all... Msg 8121, Level 16, State 1, Line 8 Column 'CWLDATEN_91.dbo.v021.mesoyear' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause. So I just removed the "having" statement and it went "better": Msg 8120, Level 16, State 1, Line 2 Column 'CWLDATEN_91.dbo.v021.c003' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. So I tried to remove the error just by adding things to the "group by". And it worked. Select c015 as ID, c003 as Artikelbezeichnung1, c074 as Bez2, max(mesoyear) from CWLDATEN_91.dbo.v021 group by c015,c003,c074 gives me exactly what I want. But the correct Select contains about 24 columns and some calculations as well. The problem can't be solved just by adding all the columns to the "group by"...? Can someone please help me to find a proper command? Thank you!
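
    One common pattern for "newest row per key" on SQL Server 2005 and later, which avoids listing every column in the GROUP BY, is ROW_NUMBER(); a sketch with only the four columns from the excerpt (the real query would carry the other columns and calculations through the inner SELECT):
        SELECT ID, Artikelbezeichnung1, Bez2, mesoyear
        FROM (
            SELECT c015 AS ID, c003 AS Artikelbezeichnung1, c074 AS Bez2, mesoyear,
                   ROW_NUMBER() OVER (PARTITION BY c015 ORDER BY mesoyear DESC) AS rn
            FROM CWLDATEN_91.dbo.v021
        ) AS numbered
        WHERE rn = 1;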

    Read the article
