Search Results

Search found 754 results on 31 pages for 'aggregate'.


  • How to keep your unit test Arrange step simple and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects should be in a valid state at all times. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to greatly complicate the task of creating simple, isolated unit tests.

    Let's assume we have a BookRepository that contains Books. A Book has: an Author, a Category, and a list of Bookstores you can find the book in. These are required attributes: a book has to have an author, a category, and at least one bookstore you can buy the book from. There's likely to be a BookFactory, since a Book is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes.

    Now we want to unit test a method of the BookRepository that returns all the Books. To test that the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal for creating Book objects is the Factory, the unit test now also uses and depends on the Factory, and indirectly on Category, Author and Store, since we need those objects to build up a Book and then place it in the test context.

    Would you consider this a dependency in the same way that, in a Service unit test, we would depend on, say, a Repository that the Service calls? How would you solve the problem of having to re-create a whole cluster of objects in order to test a simple thing? How would you break that dependency and get rid of all the attributes we don't need in the test? By using mocks or stubs? If you mock up the things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?

    Read the article

  • R: remove columns from a data frame where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve the question myself. The data frame has some properties as columns, and each row represents one data set. I've done some sanitizing on this data frame (e.g. getting rid of data sets which are not to be included in the evaluation).

    (For whoever might be interested: beforehand I aggregate around 5000 single text files and put them in a TSV. Some of the properties carry a sequence number, like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high values for n; all the sets that are left have much smaller values, but the property "button.pressed.50" is still there, and every remaining set has an NA in that column. It's actually a different property, but the example should clarify my intention.)

    So the question is quite simple (for some sophisticated R pro): I need to get rid of the columns where the value is NA for ALL rows. Could someone please help me out? (All I have managed so far is to get rid of columns where at least one NA exists, which dropped about half my columns.)

    Read the article

  • Counting problem in C#

    - by MadBoy
    Hello, I've got a bit of a problem. I'm adding numbers like 156 and 340 to an ArrayList (when the operation is TransferIn or Buy), and then I remove them the same way, 156, 340 (when it's TransferOut or Sell). The following solution works for that without a problem. The problem I have is that for some old data, employees entered sums like 1500 instead of 500+400+100+500. How would I change it so that when there's a Sell/TransferOut with no match inside the ArrayList, it tries to add up multiple items from that ArrayList and find the elements that combine into the aggregate?

        ArrayList alNew = new ArrayList();
        ArrayList alNewPoIle = new ArrayList();
        ArrayList alNewCo = new ArrayList();
        string tempAkcjeCzynnosc = (string) alInstrumentCzynnoscBezNumerow[i];
        string tempAkcjeInId = (string) alInstrumentNazwaBezNumerow[i];
        decimal varAkcjeCena = (decimal) alInstrumentCenaBezNumerow[i];
        decimal varAkcjeIlosc = (decimal) alInstrumentIloscBezNumerow[i];
        int index;
        switch (tempAkcjeCzynnosc)
        {
            case "Sell":
            case "TransferOut":
                index = alNew.IndexOf(varAkcjeIlosc);
                if (index != -1)
                {
                    alNew.RemoveAt(index);
                    alNewPoIle.RemoveAt(index);
                    alNewCo.RemoveAt(index);
                }
                else
                {
                    // Number without match encountered
                }
                break;
            case "Buy":
            case "TransferIn":
                alNew.Add(varAkcjeIlosc);
                alNewPoIle.Add(varAkcjeCena);
                alNewCo.Add(tempAkcjeInId);
                break;
        }

    Read the article

  • FluentNHibernate mapping of composite foreign keys

    - by Faron
    I have an existing database schema and wish to replace the custom data access code with Fluent NHibernate. The database schema cannot be changed, since it already exists in a shipping product, and it is preferable that the domain objects do not change, or change only minimally. I am having trouble mapping one unusual schema construct, illustrated by the following table structure:

        CREATE TABLE [Container] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Container] PRIMARY KEY ( [ContainerId] ASC )
        )

        CREATE TABLE [Item] (
            [ItemId] [uniqueidentifier] NOT NULL,
            [ContainerId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Item] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC )
        )

        CREATE TABLE [Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC )
        )

        CREATE TABLE [Item_Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [ItemId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Item_Property] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC, [PropertyId] ASC )
        )

        CREATE TABLE [Container_Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Container_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC )
        )

    The existing domain model has the following class structure: the Property class contains other members representing the property's name and value. The ContainerProperty and ItemProperty classes have no additional members; they exist only to identify the owner of the Property. The Container and Item classes have methods that return collections of ContainerProperty and ItemProperty respectively. Additionally, the Container class has a method that returns a collection of all of the Property objects in the object graph. My best guess is that this was either a convenience method or a legacy method that was never removed. The business logic mainly works with Item (as the aggregate root) and only works with a Container when adding or removing Items. I have tried several techniques for mapping this, but none work, so I won't include them here unless someone asks for them. How would you map this?

    Read the article

  • Which database design should I use, and what are the relationships in it?

    - by mimo-hamad
    My project has me confused: I couldn't find anything clear enough to help me understand the required database and the relationships in it. So, would someone help me work through it? This is what is required:

    1) Model the data stored in the database (identify the entities, roles, relationships, constraints, etc.).

    2) Write the Oracle commands to create the database, find appropriate data, and populate the database.

    3) Write five different queries on your database, using the SELECT/FROM/WHERE construct provided in SQL. Your five queries should illustrate several different aspects of database querying, such as:
       a. Queries over more than one relation (by listing more than one relation in the FROM clause)
       b. Queries involving aggregate functions, such as SUM, COUNT, and AVG
       c. Queries involving complicated selects and joins
       d. Queries involving GROUP BY, HAVING or other similar functions
       e. Queries that require the use of the DISTINCT keyword

    And this is the scenario we need to work from to solve the requirements above:

    5) It is desired to develop an Internet membership club to buy products at special prices online. To join, new members must be referred by another existing member of the club. The system will keep the following information for each member: the member ID, referring member, birth date, member name, address, phone, mobile, credit card type, number and expiration date. The items are always shipped to the member's address noted in the membership application. The shipping fees will differ for each order. For each item to be requested, the member will select an item from a long list of possible items. For each item in the database, we store an item ID, an item name, description, and list price. The list price will be different from the actual sale price. The available quantity and the back-ordered quantity (the back-ordered quantity is the quantity on-order by the club from its suppliers) are also noted.
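    As a starting point for requirements 3a, 3b and 3d, a query of this shape would qualify (a minimal sketch only; the member and club_order table and column names are hypothetical, invented to match the scenario above):

        -- Orders placed and money spent per member: exercises aggregate
        -- functions (COUNT, SUM, AVG), a join over two relations, and GROUP BY.
        SELECT m.member_id,
               m.member_name,
               COUNT(o.order_id)  AS orders_placed,
               SUM(o.order_total) AS total_spent,
               AVG(o.order_total) AS avg_order_value
        FROM member m
        JOIN club_order o ON o.member_id = m.member_id
        GROUP BY m.member_id, m.member_name;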

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a dataflow task, I can slip a rowcount into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the rowcount was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a COUNT(*) aggregate, test whether that returns a number > 0, and if so run the query again without the count. That's ugly, because I have the same query in two places that I need to keep in sync. Is there a better way?
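    One way to sidestep the two-query sync problem is to make the lookup itself always return exactly one row by wrapping the selected columns in aggregates (a minimal sketch; the table and column names are hypothetical):

        -- COUNT(*) is 0 and the MAX(...) columns are NULL when the lookup
        -- misses, so the variable assignment in the SQL task never fails and
        -- a downstream precedence constraint can branch on row_found.
        SELECT COUNT(*)        AS row_found,
               MAX(order_id)   AS order_id,
               MAX(order_date) AS order_date
        FROM dbo.orders
        WHERE customer_id = ?;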

    Read the article

  • How do I deconstruct COUNT()?

    - by user151841
    I have a view with some joins in it. I'm doing a select from that view with COUNT(*) as one of the columns of the select, and I'm surprised by the number it returns. Note that there is no GROUP BY nor aggregate column statement in the source view that the query draws from. I have three columns in the GROUP BY clause:

        SELECT column1, column2, column3, COUNT(*)
        FROM View
        GROUP BY column1, column2, column3

    I get a result like:

        +---------+---------+---------+----------+
        | column1 | column2 | column3 | COUNT(*) |
        +---------+---------+---------+----------+
        | value1  | valueA  | value_a |      103 |
        | value2  | valueB  | value_b |       56 |
        +---------+---------+---------+----------+
        etc.

    I'd like to see how it arrives at that 103, 56, etc. In other words, I want to run a query that returns 103 rows of something, so that I know I've expressed the query properly. I'm double-checking my work. I'm not saying that I think COUNT(*) doesn't work (I know that "SELECT is not broken"); what I want to double-check is exactly what I'm expressing in my query, because I think I've expressed the wrong thing, which would be why I'm getting unexpected values. I need to see more of what I'm actually directing MySQL to count. So should I take them one by one, and try out each value in a WHERE clause? In other words, should I do:

        SELECT column1 FROM View WHERE column1 = 'first_grouped_value'
        SELECT column1 FROM View WHERE column1 = 'second_grouped_value'
        SELECT column2 FROM View WHERE column1 = 'first_grouped_value'
        SELECT column2 FROM View WHERE column1 = 'second_grouped_value'

    and see whether the row count returned matches the COUNT(*) value in the grouped results? Because of confidentiality, I won't be able to post any of the query or database structure. All I'm asking for is a general technique to see what COUNT(*) is actually counting.
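    One general technique (a minimal sketch, reusing the placeholder names above): rather than probing one column at a time, filter on all of the grouped columns at once, since each COUNT(*) belongs to the combination of the three values, not to any single one:

        -- Should return exactly 103 rows if the grouped COUNT(*) of 103 is
        -- correct; inspecting those rows shows what was actually counted.
        SELECT *
        FROM View
        WHERE column1 = 'value1'
          AND column2 = 'valueA'
          AND column3 = 'value_a';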

    Read the article

  • Group vs role (Any real difference?)

    - by Ondrej
    Can anyone tell me what the real difference is between a group and a role? I've been trying to figure this out for some time now, and the more information I read, the more I get the sense that this is brought up just to confuse people, and that there is no proper difference: each can do the other one's job. I've always used a group to manage users and their access rights.

    Recently, I came across an administration software package with a bunch of users. Each user can be assigned a module (the whole system is split into a few parts called modules, i.e. an Administration module, Survey module, Orders module, Customer module). On top of that, each module has a list of functionalities that can be allowed or denied for each user. So let's say a user, John Smith, can access the Orders module and can edit any order, but hasn't been given the right to delete any of them.

    If there were more users with the same competency, I would use a group to manage that: I would aggregate such users into the same group and assign the access rights to the modules and their functions to the group. All users in the same group would have the same access rights. Why call it a group and not a role? I don't know; I just feel it that way. It seems to me that it simply doesn't really matter. But I would still like to know the real difference. What about you guys? Any suggestions as to why this should rather be called a role than a group, or the other way round? Thanks to everyone.

    Read the article

  • What is the correct way to increment a field making up part of a composite key?

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). As a very cut-down version, the attributes might look like this:

        A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

    As data, this could look like:

        A[aPK, SomeFields]:
        1, Foo
        2, Bar

        B[bPK, aFK, SomeFields]:
        1, 1, FooData1
        2, 1, FooData2
        1, 2, BarData1
        2, 2, BarData2

        C[cPK, bFK, aFK, SomeFields]:
        1, 1, 1, FooData1More
        2, 1, 1, FooData1More
        1, 2, 1, FooData2More
        2, 2, 1, FooData2More
        1, 1, 2, BarData1More
        2, 1, 2, BarData1More
        1, 2, 2, BarData2More
        2, 2, 2, BarData2More

    I've got this running in an MSSQL DBMS, and I'm looking for the best way to increment the leftmost column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea that it is part of a composite key. I also don't want to use an aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer: I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
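    For reference, one common SQL Server pattern (a minimal sketch against table B from the example; the parameter names are hypothetical): take MAX(...)+1 inside the insert's own transaction while holding key-range locks, which closes the race that a plain MAX(field)+1 suffers from under concurrent inserts:

        BEGIN TRANSACTION;

        DECLARE @next int;
        -- UPDLOCK + HOLDLOCK keeps concurrent sessions from computing the
        -- same MAX for this aFK until the transaction commits.
        SELECT @next = ISNULL(MAX(bPK), 0) + 1
        FROM B WITH (UPDLOCK, HOLDLOCK)
        WHERE aFK = @aFK;

        INSERT INTO B (bPK, aFK, SomeFields)
        VALUES (@next, @aFK, @SomeFields);

        COMMIT TRANSACTION;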

    Read the article

  • Two radically different queries against 4 million records execute in the same time - one uses brute force

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

    Brute-force query: [execution plan image]

    Index-based exception query: [execution plan image]

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article

  • SQL - Updating records based on most recent date

    - by Remnant
    I am having difficulty updating records within a database based on the most recent date, and am looking for some guidance. By the way, I am new to SQL. As background, I have a Windows Forms application with SQL Express, and am using ADO.NET to interact with the database. The application is designed to let the user track employee attendance on various courses that must be attended on a periodic basis (e.g. every 6 months, every year, etc.). For example, they can pull back data to see the last time employees attended a given course, and also update attendance dates if an employee has recently completed a course. I have three data tables:

        EmployeeDetailsTable   - a simple list of employees' names, email addresses, etc., each with a unique ID
        CourseDetailsTable     - a simple list of courses, each with a unique ID (e.g. 1, 2, 3, etc.)
        AttendanceRecordsTable - columns { EmployeeID, CourseID, AttendanceDate, Comments }

    For any given course, an employee will have an attendance history; i.e. if the course needs to be attended each year, then they will have one record for as many years as they have been at the company. What I want to be able to do is update the Comments field for a given employee and given course, based on the most recent attendance date. What is the correct SQL syntax for this? I have tried many things (like the query below) but cannot get it to work:

        UPDATE AttendanceRecordsTable
        SET Comments = @Comments
        WHERE AttendanceRecordsTable.EmployeeID =
              (SELECT EmployeeDetailsTable.EmployeeID
               FROM EmployeeDetailsTable
               WHERE (EmployeeDetailsTable.LastName = @ParameterLastName
                      AND EmployeeDetailsTable.FirstName = @ParameterFirstName)
               AND AttendanceRecordsTable.CourseID =
                   (SELECT CourseDetailsTable.CourseID
                    FROM CourseDetailsTable
                    WHERE CourseDetailsTable.CourseName = @CourseName))
        GROUP BY MAX(AttendanceRecordsTable.LastDate)

    After much googling, I discovered that MAX is an aggregate function, and so I need to use GROUP BY. I have also tried using the HAVING keyword, but without success. Can anybody point me in the right direction? What is the 'conventional' syntax to update a database record based on the most recent date?
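    One conventional shape for this (a minimal sketch following the table layout described above; note it filters on the maximum AttendanceDate rather than grouping, since an UPDATE cannot take a GROUP BY):

        UPDATE ar
        SET ar.Comments = @Comments
        FROM AttendanceRecordsTable ar
        INNER JOIN EmployeeDetailsTable e ON e.EmployeeID = ar.EmployeeID
        INNER JOIN CourseDetailsTable  c ON c.CourseID   = ar.CourseID
        WHERE e.LastName   = @ParameterLastName
          AND e.FirstName  = @ParameterFirstName
          AND c.CourseName = @CourseName
          -- only the most recent attendance record for this employee/course:
          AND ar.AttendanceDate = (SELECT MAX(ar2.AttendanceDate)
                                   FROM AttendanceRecordsTable ar2
                                   WHERE ar2.EmployeeID = ar.EmployeeID
                                     AND ar2.CourseID   = ar.CourseID);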

    Read the article

  • Do all C compilers allow functions to return structures?

    - by Jordan S
    I am working on a program in C, using the SDCC compiler for an 8051-architecture device. I am trying to write a function called GetName that will read 8 characters from flash memory and return the character array in some form. I know that it is not possible to return an array in C, so I am trying to do it using a struct, like this:

        //********************FLASH.h file*******************************
        #define NAME_SIZE 8

        typedef struct
        {
            char Name[NAME_SIZE];
        } MyStruct;

        extern MyStruct GetName(int i); // Function prototype

        // *****************FLASH.c file***********************************
        #include "FLASH.h"

        MyStruct GetName(int i)
        {
            MyStruct newNameStruct;
            //...
            // Fill the array by reading data from Flash
            //...
            return newNameStruct;
        }

    I don't have any references to this function yet, but for some reason I get a compiler error that says "Function cannot return aggregate." Does this mean that my compiler does not support functions that return structs? Or am I just doing something wrong?

    Read the article

  • Getting the first of a GROUP BY clause in SQL

    - by Michael Bleigh
    I'm trying to implement single-column regionalization for a Rails application, and I'm running into some major headaches with a complex SQL need. For this system, a region can be represented by a country code (e.g. us), a continent code that is uppercase (e.g. NA), or it can be NULL, indicating the "default" information. I need to group these items by some relevant information such as a foreign key (we'll call it external_id). Given a country and its continent, I need to be able to select only the most specific region available. So if records exist with the country code, I select them. If not, I want records with the continent code. If not that, I want records with a NULL code, so I can receive the default values. So far I've figured that I may be able to use a generated CASE statement to get an arbitrary sort order. Something like this:

        SELECT *,
               CASE region
                   WHEN 'us' THEN 1
                   WHEN 'NA' THEN 2
                   ELSE 3
               END AS region_sort
        FROM my_table
        WHERE region IN ('us','NA') OR region IS NULL
        GROUP BY external_id
        ORDER BY region_sort

    The problem is that, without an aggregate function, the actual data returned by the GROUP BY for a given row seems to be untameable. How can I massage this query to make it return only the first record of the region_sort-ordered groups?
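    One workaround that avoids relying on GROUP BY's unspecified row choice (a minimal sketch, reusing my_table and the CASE expression above, and assuming MySQL-style SQL as in the question): compute the best (lowest) region_sort per external_id in a derived table, then join back so only the most specific row per group survives:

        SELECT t.*
        FROM (SELECT *,
                     CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END AS region_sort
              FROM my_table
              WHERE region IN ('us','NA') OR region IS NULL) AS t
        INNER JOIN
             (SELECT external_id,
                     MIN(CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END) AS best_sort
              FROM my_table
              WHERE region IN ('us','NA') OR region IS NULL
              GROUP BY external_id) AS best
          ON best.external_id = t.external_id
         AND best.best_sort   = t.region_sort;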

    Read the article

  • Entity SQL GROUP BY problem, please help

    - by Zviadi
    Hello, please help me with this simple Entity SQL query. I have a simple entity called OrdersIncomes, with ID, PaidMoney, DatePaid, and Order_ID properties. I want to select the month and the summed PaidMoney, like this:

        month  PaidMoney
        1      500
        2      700
        3      1200

    The T-SQL looks like this and works fine:

        select MONTH(o.DatePaid), SUM(o.PaidMoney)
        from OrdersIncomes as o
        group by MONTH(o.DatePaid)

    results:

        3   31.0000
        4  127.0000
        5   20.0000
        (3 row(s) affected)

    But the E-SQL does not work, and I don't know what to do. Here is my E-SQL, which needs refactoring:

        var qStr = "SELECT SqlServer.Month(o.DatePaid) as month, " +
                   "SqlServer.Sum(o.PaidMoney) as PaidMoney " +
                   "FROM XACCModel.OrdersIncomes as o " +
                   "group by SqlServer.Month(o.DatePaid)";

    It throws this exception:

        ErrorDescription = "The identifier 'o' is not valid because it is not
        contained either in an aggregate function or in the GROUP BY clause."

    If I include o in the group by clause, like FROM XACCModel.OrdersIncomes as o group by o, then I don't get summed and aggregated results. Is this some bug, or what am I doing wrong? Here is the LINQ to Entities query, which also works:

        var incomeResult = from ic in _context.OrdersIncomes
                           group ic by ic.DatePaid.Month into gr
                           select new { Month = gr.Key, PaidMoney = gr.Sum(i => i.PaidMoney) };

    Read the article

  • Need a workaround to filter on related model and aggregated fields in Django

    - by parxier
    I opened a ticket for this problem. In a nutshell, here is my model:

        class Plan(models.Model):
            cap = models.IntegerField()

        class Phone(models.Model):
            plan = models.ForeignKey(Plan, related_name='phones')

        class Call(models.Model):
            phone = models.ForeignKey(Phone, related_name='calls')
            cost = models.IntegerField()

    I want to run a query like this one:

        Phone.objects.annotate(total_cost=Sum('calls__cost')).filter(total_cost__gte=0.5*F('plan__cap'))

    Unfortunately, Django generates bad SQL:

        SELECT "app_phone"."id", "app_phone"."plan_id", SUM("app_call"."cost") AS "total_cost"
        FROM "app_phone"
        INNER JOIN "app_plan" ON ("app_phone"."plan_id" = "app_plan"."id")
        LEFT OUTER JOIN "app_call" ON ("app_phone"."id" = "app_call"."phone_id")
        GROUP BY "app_phone"."id", "app_phone"."plan_id"
        HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"."cap"

    and errors out with:

        ProgrammingError: column "app_plan.cap" must appear in the GROUP BY
        clause or be used in an aggregate function
        LINE 1: ...."plan_id" HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"....

    Is there any workaround, apart from running raw SQL?
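    If raw SQL does end up being the fallback, the fix at the SQL level is small (a minimal sketch of the corrected statement, derived from the query Django generated above): adding app_plan.cap to GROUP BY makes the HAVING comparison legal in PostgreSQL:

        SELECT app_phone.id,
               app_phone.plan_id,
               SUM(app_call.cost) AS total_cost
        FROM app_phone
        INNER JOIN app_plan ON app_phone.plan_id = app_plan.id
        LEFT OUTER JOIN app_call ON app_phone.id = app_call.phone_id
        GROUP BY app_phone.id, app_phone.plan_id, app_plan.cap
        HAVING SUM(app_call.cost) >= 0.5 * app_plan.cap;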

    Read the article

  • Slope requires a real as parameter 2?

    - by Dave Jarvis
    Question: how do you pass the correct value to udf_slope's second parameter type?

    Attempts:
    - CAST(Y.YEAR AS FLOAT), but that failed (SQL error).
    - Y.YEAR + 0.0, but that failed, too (see error message).
    - slope(D.AMOUNT, 1.0), which failed as well.

    Error message: using udf_slope fails due to:

        Can't initialize function 'slope'; slope() requires a real as parameter 2

    Code:

        SELECT D.AMOUNT, Y.YEAR,
               slope(D.AMOUNT, Y.YEAR + 0.0) as SLOPE,
               intercept(D.AMOUNT, Y.YEAR + 0.0) as INTERCEPT
        FROM YEAR_REF Y, DAILY D

    Here, D.AMOUNT is a FLOAT and Y.YEAR is an INTEGER.

    Create Function: the slope function was created as follows:

        CREATE AGGREGATE FUNCTION slope RETURNS REAL SONAME 'udf_slope.so';

    Function signature, from udf_slope.cc:

        double slope( UDF_INIT* initid, UDF_ARGS* args, char* is_null, char* is_error )

    Example usages: reading the fine manual reveals:

        UDF intercept(): calculates the intercept of the linear regression of two sets of variables.
            Function name: intercept
            Input parameter(s): 2 (dependent variable: REAL, independent variable: REAL)
            Example: SELECT intercept(income,age) FROM customers

        UDF slope(): calculates the slope of the linear regression of two sets of variables.
            Function name: slope
            Input parameter(s): 2 (dependent variable: REAL, independent variable: REAL)
            Example: SELECT slope(income,age) FROM customers

    Thoughts? Thank you!
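    A hedged guess at a coercion worth trying (not verified against udf_slope): in MySQL, a numeric literal written with an exponent, such as 0e0, is an approximate-value (DOUBLE) literal, whereas 0.0 is an exact-value DECIMAL literal, and INTEGER + DECIMAL stays DECIMAL. Since a DOUBLE argument reaches a UDF as REAL_RESULT, adding 0e0 instead of 0.0 may be all that's needed:

        SELECT D.AMOUNT, Y.YEAR,
               slope(D.AMOUNT, Y.YEAR + 0e0) AS SLOPE,        -- 0e0 forces DOUBLE,
               intercept(D.AMOUNT, Y.YEAR + 0e0) AS INTERCEPT -- i.e. REAL_RESULT
        FROM YEAR_REF Y, DAILY D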

    Read the article

  • What are good strategies for organizing single class per query service layer?

    - by KallDrexx
    Right now my ASP.NET MVC application is structured as Controllers - Services - Repositories. The services consist of aggregate root classes that contain methods, where each method is a specific operation that gets performed, such as retrieving a list of projects, adding a new project, or searching for a project, etc. The problem with this is that my service classes are becoming really fat, with a lot of methods. As of right now I am separating methods into categories with #region tags, but this is quickly getting out of control. I can definitely see it becoming hard to determine what functionality already exists and where modifications need to go. Since the methods in the service classes are isolated and don't really interact with each other, they really could be more standalone. After reading some articles, such as this one, I am thinking of following the single-query-per-class model, as it seems like a more organized solution. Instead of trying to figure out what class and method you need to call to perform an operation, you just have to figure out the class. My only reservation with the single-query-per-class method is that I need some way to organize the 50+ classes I will end up with. Does anyone have any suggestions for strategies to best organize this type of pattern?

    Read the article

  • SQL Server error handling: exceptions and the database-client contract

    - by gbn
    We're a team of SQL Server database developers. Our clients are a mixed bag of C#/ASP.NET, C# and Java web services, Java/Unix services, and some Excel. Our client developers only use stored procedures that we provide, and we expect that (where sensible, of course) they treat them like web service methods. Some of our client developers don't like SQL exceptions. They understand them in their languages, but they don't appreciate that SQL is limited in how we can communicate issues. I don't just mean SQL errors, such as trying to insert "bob" into an int column. I also mean exceptions such as telling them that a reference value is wrong, or that data has already changed, or that they can't do this because the aggregate is not zero. They don't really have any concrete alternatives: they've mentioned output parameters, but we assume an exception means "processing stopped/rolled back". How do folks here handle the database-client contract? Either generally, or where there is separation between the DB and client code monkeys.

    Edits:
    - We use SQL Server 2005 TRY/CATCH exclusively.
    - We already log all errors to an exception table after the rollback.
    - We're concerned that some of our clients won't check output parameters and will assume everything is OK. We need errors flagged up for support to look at.
    - Everything is an exception: the clients are expected to do some message parsing to separate information vs. errors. To separate our exceptions from DB engine and calling errors, they should use the error number (ours are all 50,000+, of course).
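    For reference, the overall shape of the pattern described above (a minimal sketch; the transaction body, log table, and message text are hypothetical):

        BEGIN TRY
            BEGIN TRANSACTION;
            -- ... the actual work ...
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0
                ROLLBACK TRANSACTION;

            -- log after the rollback, as described in the edits above
            INSERT INTO dbo.ExceptionLog (ErrorNumber, ErrorMessage, LoggedAt)
            VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), GETDATE());

            -- re-raise so the client still sees an exception; ad-hoc messages
            -- raised this way carry error number 50000, keeping user-defined
            -- errors distinguishable from engine errors
            RAISERROR('Aggregate is not zero for this account.', 16, 1);
        END CATCH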

    Read the article

  • MySQL to PostgreSQL and Named Scope

    - by Lowgain
    I've got a named scope for one of my models that works fine. The code is:

        named_scope :inbox_threads, lambda { |user| {
          :include => [:deletion_flags, :recipiences],
          :conditions => ["recipiences.user_id = ? AND deletion_flags.user_id IS NULL", user.id],
          :group => "msg_threads.id"
        }}

    This works fine on my local copy of the app with a MySQL database, but when I push my app to Heroku (which only uses PostgreSQL), I get the following error:

        ActiveRecord::StatementInvalid (PGError: ERROR: column "msg_threads.subject"
        must appear in the GROUP BY clause or be used in an aggregate function:
        SELECT "msg_threads"."id" AS t0_r0, "msg_threads"."subject" AS t0_r1,
        "msg_threads"."originator_id" AS t0_r2, "msg_threads"."created_at" AS t0_r3,
        "msg_threads"."updated_at" AS t0_r4, "msg_threads"."url_key" AS t0_r5,
        "deletion_flags"."id" AS t1_r0, "deletion_flags"."user_id" AS t1_r1,
        "deletion_flags"."msg_thread_id" AS t1_r2, "deletion_flags"."confirmed" AS t1_r3,
        "deletion_flags"."created_at" AS t1_r4, "deletion_flags"."updated_at" AS t1_r5,
        "recipiences"."id" AS t2_r0, "recipiences"."user_id" AS t2_r1,
        "recipiences"."msg_thread_id" AS t2_r2, "recipiences"."created_at" AS t2_r3,
        "recipiences"."updated_at" AS t2_r4
        FROM "msg_threads"
        LEFT OUTER JOIN "deletion_flags" ON deletion_flags.msg_thread_id = msg_threads.id
        LEFT OUTER JOIN "recipiences" ON recipiences.msg_thread_id = msg_threads.id
        WHERE (recipiences.user_id = 1 AND deletion_flags.user_id IS NULL)
        GROUP BY msg_threads.id)

    I'm not as familiar with the workings of Postgres, so what would I need to add here to get this working? Thanks!
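    One possible direction (a minimal sketch, inferring the join semantics from the generated SQL above): since the GROUP BY is only there to collapse the duplicate rows produced by the recipiences join, rewriting that join as an EXISTS test needs no grouping at all and is valid on both MySQL and PostgreSQL:

        SELECT msg_threads.*
        FROM msg_threads
        LEFT OUTER JOIN deletion_flags ON deletion_flags.msg_thread_id = msg_threads.id
        WHERE deletion_flags.user_id IS NULL
          AND EXISTS (SELECT 1
                      FROM recipiences
                      WHERE recipiences.msg_thread_id = msg_threads.id
                        AND recipiences.user_id = 1);  -- the user.id parameter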

    Read the article

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose. For example: remote ODBC connection one is inserting several thousand records, which, due to a slow connection speed, can take up to an hour. TCP connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check to make sure there are no pending inserts on any other connection on those specific tables, so it can instead wait until all data is available. If the table is currently being written to, I'd like to send back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, by table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here. Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose, if I have to, since it is on localhost, I could exec() a mysql command-line option that outputs this info, if that is the only way, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
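    For what it's worth, the closest built-in facility (a sketch; it exposes each connection's current statement and elapsed time, but not progress or rows-remaining estimates):

        SHOW FULL PROCESSLIST;

        -- On MySQL 5.1+, the same data is queryable and filterable:
        SELECT id, user, host, db, command, time, state, info
        FROM information_schema.PROCESSLIST
        WHERE command <> 'Sleep';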

    Read the article

  • How can I mock or test my deferred execution functionality?

    - by cottsak
    I have what could be seen as a bizarre hybrid of IQueryable<T> and IList<T> collections of domain objects passed up my application stack. I'm trying to maintain as much of the 'late querying' or 'lazy loading' as possible. I do this in two ways:

    1. By using a LinqToSql data layer and passing IQueryable<T>s through my repositories and on to my app layer.
    2. Past my app layer, by passing IList<T>s, but where certain elements in the object/aggregate graph are 'chained' with delegates so as to defer their loading. Sometimes the delegate contents even rely on IQueryable<T> sources, and the DataContext is injected.

    This works for me so far. What is blindingly difficult is proving that this design actually works. I.e. if I defeat the 'lazy' part somewhere and my execution happens early, then the whole thing is a waste of time. I'd like to be able to TDD this somehow. I don't know a lot about delegates, or about thread safety as it applies to delegates acting on the same source. I'd like to be able to mock the DataContext and somehow trace both methods of deferring the loading (IQueryable<T>'s SQL and the delegates), so that I can have tests proving that both functions work at different levels/layers of the app/stack. As it's crucial that the deferring works for the design to be of any value, I'd like to see tests fail when I break the design at a given level (separately from the live implementation). Is this possible?

    Read the article

  • How to handle duplicate values in d3.js

    - by Mario
    First, I'm a d3.js noob :) As you can see from the title, I've got a problem with duplicated data, and aggregating the values is not an option, because the names represent different bus stops. In this example, maybe the stops are on the front side and the back side of a building. And of course I'd like to show the names on the x-axis. I've created an example, and the result is a bloody mess; see the jsFiddle.

    x = index, name = bus stop name, n = value

    I've got a JSON like this:

        [{ "x": 0, "name": "Corniche St / Abu Dhabi Police GHQ", "n": 113 },
         { "x": 1, "name": "Corniche St / Nation Towers", "n": 116 },
         { "x": 2, "name": "Zayed 1st St / Al Khalidiya Public Garden", "n": 146 },
         ...
         { "x": 49, "name": "Hamdan St / Tariq Bin Zeyad Mosque", "n": 55 }]

    The problem: it is possible that a name appears more than once, e.g.:

        { "x": 1, "name": "Corniche St / Nation Towers", "n": 116 }
        { "x": 4, "name": "Corniche St / Nation Towers", "n": 105 }

    I'd like to know whether there is a way to tell d3.js not to use distinct names, and instead just show all the names in sequence with their values. Any ideas or suggestions are very welcome :) If you need more information, let me know. Thanks in advance, Mario

    Read the article

  • How to control virtual memory management in Linux?

    - by chmike
    I'm writing a program that uses an mmap'd file to hold a huge buffer, organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts through the network. As a consequence, the total data size written to each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be up to 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are no longer dirty once their data has been deleted. Should I use mmap and munmap as each block is used and released, to control the virtual memory? What would be the overhead of doing this? Also, some colleagues have expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.

    Read the article

  • Why won't EF4 generate a method to support my Function Import?

    - by Deane
    I have a stored proc in my database which returns an integer. I added a Function Import to my model. This appears in the EDMX file:

        <Function Name="GetTotalEntityCount" Aggregate="false" BuiltIn="false"
                  NiladicFunction="false" IsComposable="false"
                  ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo" />

    However, no method actually gets generated for this. It should be top level, right?

        using (MyContext context = new MyContext())
        {
            context.MyMethodShouldBeRightHere();
        }

    Nothing appears in IntelliSense, I've gone through the designer.cs file and there's nothing in there, and I've reflected the DLL... nothing. The code generator is just not generating any code to support this stored proc. I added another table to my database and updated the model, and that came in, so the model will update; it's just specifically ignoring this stored proc. I've tried everything I can think of, and consulted every resource I can find, and as near as I can tell, I'm doing everything right. I'm using EF4, database-first. (I'm pretty sure about the version, anyway; this shows up in the generated file: Runtime Version: 4.0.30319.1.)

    Read the article
