Search Results

Search found 17867 results on 715 pages for 'delete row'.


  • Update data table on ASMX Service side

    - by Paul
    Hi, I need advice. On the web service side I have this method: public DataSet GetDs(string id) { SqlConnection conn = null; SqlDataAdapter da = null; DataSet ds; try { string sql = "SELECT * FROM Tab1"; string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString; conn = new SqlConnection(connStr); conn.Open(); da = new SqlDataAdapter(sql, conn); ds = new DataSet(); da.Fill(ds, "Tab1"); return ds; } catch (Exception ex) { throw ex; } finally { if (conn != null) conn.Close(); if (da != null) da.Dispose(); } } It returns a DataSet to the client app. In the client application the DataSet is bound to a DataGridView, and the client can insert, update and delete rows in the table. When the client finishes, I want to apply those changes on the web service side. I could send the whole DataSet back and update the table on the service, but I want to send only the changed data. Any advice? Thank you.
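
    One possible approach (a sketch, not the service's actual API: the UpdateDs web method and command-builder update are assumptions) is to send back only the delta produced by DataSet.GetChanges() and apply it on the service with a SqlDataAdapter; this requires Tab1 to have a primary key so the command builder can generate the UPDATE/DELETE statements.

        // Client side: extract only inserted/updated/deleted rows before calling the service.
        DataSet changes = ds.GetChanges();
        if (changes != null)
        {
            proxy.UpdateDs(changes);   // hypothetical web method accepting the delta
        }

        // Service side: apply the delta against the original table.
        [WebMethod]
        public void UpdateDs(DataSet changes)
        {
            string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Tab1", conn))
            {
                SqlCommandBuilder builder = new SqlCommandBuilder(da);  // generates INSERT/UPDATE/DELETE commands
                da.Update(changes, "Tab1");
            }
        }

    After a successful update the client can call ds.AcceptChanges() (or re-fetch from the service) so the local change flags are cleared.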

    Read the article

  • Entity Framework and WCF

    - by Nihilist
    Hi, I am a little confused about designing WCF services with EF. When using WCF and EF, where do we draw the line on which properties to return with an entity and which not to? Here is my scenario. I have a User with these relations: User [1 to many] Address, User [1 to many] Email, User [1 to many] Phone. On the web form, page1 lets me edit user information: I can edit a few properties on the User entity and can also edit the Address, Phone and Email entities (add, delete and update any of them). On page2 I can only update User properties and nothing related to the navigation properties (Address, Email, Phone). So when I return the User entity (or DTO), should I be returning the navigation properties too, or should the client make multiple calls to get the navigation properties? Also, how does this work with Save? Should the client make multiple calls to save the user and the related entities, or just one call to save the graph? Let's say I just have Save(User user), where user has all the related entities too; both page1 and page2 will call Save and pass me the user, but page1 needs a lot more information while page2 only needs the user's primitive properties. So my question is: where do we draw this line, and how do we design these services? Is the WCF operation designed around the page and the fields it has? I hope I explained my problem well enough.

    Read the article

  • how do I deconstruct COUNT()?

    - by user151841
    I have a view with some joins in it. I'm doing a select from that view with COUNT(*) as one of the columns of the select, and I'm surprised by the number it's returning. Note that there is no GROUP BY nor aggregate column statement in the source view that the query is drawing from. How can I take it apart to see how it arrives at this number? I have three columns in the GROUP BY clause. SELECT column1, column2, column3, COUNT(*) FROM View GROUP BY column1, column2, column3 I get a result like +---------+---------+---------+----------+ | column1 | column2 | column3 | COUNT(*) | +---------+---------+---------+----------+ | value1 | valueA | value_a | 103 | +---------+---------+---------+----------+ | value2 | valueB | value_b | 56 | +---------+---------+---------+----------+ etc. I'd like to see how it arrives at that 103, 56, etc. In other words, I want to run a query that returns 103 rows of something, so that I know that I've expressed the query properly. I'm double-checking my work. I'm not saying that I think COUNT(*) doesn't work (I know that "SELECT is not broken"); what I want to double-check is exactly what I'm expressing in my query, because I think I've expressed the wrong thing, which would be why I'm getting unexpected values. I need to see more of what I'm actually directing MySQL to count. So should I take the grouped values one by one and try each in a WHERE clause? In other words, should I do SELECT column1 FROM View WHERE column1 = 'first_grouped_value' SELECT column1 FROM View WHERE column1 = 'second_grouped_value' SELECT column2 FROM View WHERE column1 = 'first_grouped_value' SELECT column2 FROM View WHERE column1 = 'second_grouped_value' and see whether the row count returned matches the COUNT(*) value in the grouped results? Because of confidentiality, I won't be able to post any of the query or database structure. All I'm asking for is a general technique to see what COUNT(*) is actually counting.
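
    One way to double-check a grouped count is to re-run the query against the view with the GROUP BY removed and the grouped values moved into the WHERE clause, so the individual rows behind a single COUNT(*) come back. A sketch using the placeholder names above:

        -- Should return exactly 103 rows if the grouped query expresses what you intend.
        SELECT *
        FROM View
        WHERE column1 = 'value1'
          AND column2 = 'valueA'
          AND column3 = 'value_a';

    If the row count does not match, comparing the returned rows against the view's join conditions usually shows which join is fanning out.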

    Read the article

  • [MySQL] Efficiently store last X records per item

    - by Saif Bechan
    I want to store the last X records in a MySQL database in an efficient way, so when the 4th record is stored the first should be deleted. The way I do this now is to first run a query getting the items, then check what I should do, then insert/delete. There has to be a better way to do this. Any suggestions? Edit: I should add that the stored records do not have a single unique number; they have a composite pair, for example article_id and user_id. I want to make a table with the last X items for each user. Just selecting the articles from the main table grouped by user and sorted by time is not an option for me: the table I would sort and group on has millions of records and gets hit a lot for no reason, so keeping an intermediate table with the last X records is far more efficient. PS: I am not actually using this for articles and users.
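
    For what it's worth, one way to avoid the read-then-decide round trip is to insert first and then delete everything older than the newest X rows for that pair in a single statement. A sketch, assuming a hypothetical last_items table with an auto-increment id and X = 3 (MySQL rejects LIMIT directly inside an IN subquery, hence the derived-table wrapper):

        INSERT INTO last_items (article_id, user_id, created_at) VALUES (123, 45, NOW());

        DELETE FROM last_items
        WHERE user_id = 45
          AND id NOT IN (
              SELECT id FROM (
                  SELECT id FROM last_items
                  WHERE user_id = 45
                  ORDER BY id DESC
                  LIMIT 3
              ) AS keep_newest
          );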

    Read the article

  • LDAP/AD Integrated Group/Membership Management Package suitable for embedding in an application

    - by Ernest
    In several web applications, it is often necessary to define groups of users for purposes of membership as well as role management. For example, in one of our applications we would like to use a group of "Network Engineers" and another group that consists of "Managers" of such Network Engineers. The information we need is the contact details of the members of each group. So far, we have written our own tools to allow the administrator of the application to add/delete/move groups and their memberships, and either store them in an XML file or a database. Increasingly, companies already have the groups we want defined in LDAP/AD, so it would be best to create a pointer in our application to the corresponding group in LDAP. Although there are a number of LDAP libraries and LDAP browsers available, and we could code this ourselves and provide a web front end to get a list of available groups and their members, we are wondering if there is already a "component framework" available that would readily provide this LDAP browsing functionality and that we could just embed in our application - something between a library and a full LDAP browser product? (To clarify, the use case is for an admin of our web application to create a locally relevant group name and then map it to an existing LDAP group. To enable this in the UI, we would like to present a way for the admin to browse the available groups in the company LDAP server, view their membership, and select the LDAP group they would like to map to the locally relevant group name. In a second step, we would then synchronize the members of that LDAP group and their contact details to a store in our application.) Appreciate any pointers.

    Read the article

  • XPath _relative_ to given element in HTMLUnit/Groovy?

    - by Misha Koshelev
    Dear All: I would like to evaluate an XPath expression relative to a given element. I have been reading here: http://www.w3schools.com/xpath/default.asp And it seems like one of the syntaxes below should work (esp no leading slash or descendant:) However, none seem to work in HTMLUnit. Any help much appreciated (oh this is a groovy script btw). Thank you! http://htmlunit.sourceforge.net/ http://groovy.codehaus.org/ Misha #!/usr/bin/env groovy import com.gargoylesoftware.htmlunit.WebClient def html=""" <html><head><title>Test</title></head> <body> <div class='levelone'> <div class='leveltwo'> <div class='levelthree' /> </div> <div class='leveltwo'> <div class='levelthree' /> <div class='levelthree' /> </div> </div> </body> </html> """ def f=new File('/tmp/test.html') if (f.exists()) { f.delete() } def fos=new FileOutputStream(f) fos<<html def webClient=new WebClient() def page=webClient.getPage('file:///tmp/test.html') def element=page.getByXPath("//div[@class='levelone']") assert element.size()==1 element=page.getByXPath("div[@class='levelone']") assert element.size()==0 element=page.getByXPath("/div[@class='levelone']") assert element.size()==0 element=page.getByXPath("descendant:div[@class='levelone']") // this gives namespace error assert element.size()==0 Thank you!!!
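
    For reference, HtmlUnit evaluates a relative XPath when getByXPath is called on the element itself rather than on the page, and the descendant axis needs a double colon. A sketch against the test page above (the expected counts assume the HTML shown):

        // Query relative to an element, not the page.
        def levelOne = page.getByXPath("//div[@class='levelone']")[0]

        // A relative path (or "./...") is scoped to levelOne's subtree.
        def levelTwo = levelOne.getByXPath("./div[@class='leveltwo']")
        assert levelTwo.size() == 2

        // The descendant axis is written with two colons.
        def levelThree = levelOne.getByXPath("descendant::div[@class='levelthree']")
        assert levelThree.size() == 3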

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with optimizing the structure of a database; I'll try to explain it exactly. I'm creating a project where we can add different values, but these values must be stored in columns of different types in the database (e.g. int, double, varchar). What is the best way to store values of different types in the database? In the project I'm using Propel 1.6. The point is to be able to add a value of type 'int', 'varchar' or another column type while keeping searches on the table efficient. In total, I have two ideas. The first is to create a "value" table with the columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types; depending on the type of the value, records will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc., each with the columns "id" and "value", where "value" has the relevant type (int, varchar, etc.). I must admit I don't really believe in either of the above solutions; originally I was thinking about one "value" table where the column would be of type "text", but that solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance. EDIT: For example, we have three tables: USER [table of users]: * id * name; FIELD [table of profile fields, where the column 'type' is the type of the field, e.g. int or varchar]: * id * type * name; VALUE: * id * User_id (FK user.id) * Field_id (FK field.id) * value. So each row in the USER table is a user, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I want the value to go into the appropriate column of the appropriate type.

    Read the article

  • JavaFX MouseEvent continues when I remove the object it happened on

    - by Kyle
    It took me a while to realize what was going on with mouse events going through my blocking dialog boxes when I closed them, but I finally figured out why. I still don't know any good way to fix it. I have a custom dialog box (that blocks the mouse) with a close button. When I click the close button, I remove the dialog box from the scene, but JavaFx is still processing the MouseEvent and now it finds that there is nothing blocking the screen behind where the cancel button was, so that component receives a MouseEvent. How do I make the mouseEvent stop processing when I see that they pressed cancel and remove the dialog box? Or, is there a way to make the removing of the dialog box not happen until after it is done processing the MouseEvent? Example Code for the problem: import javafx.stage.Stage; import javafx.scene.Scene; import javafx.scene.shape.Rectangle; import javafx.scene.input.MouseEvent; import javafx.scene.control.Button; var theScene:Scene; var btn:Button; Stage { title: "Application title" scene: theScene= Scene { width: 500 height: 200 content: [ Rectangle{ width: bind theScene.width height: bind theScene.height onMouseClicked: function(e:MouseEvent):Void{ println("Rectangle");} }, Button{ layoutX: 20 layoutY: 50 blocksMouse: true text: "JustPrint" action:function():Void{ println("JustPrint");} }, btn = Button{ layoutX: 20 layoutY: 20 blocksMouse: true text: "Cancel" action:function():Void{ println("Cancel"); delete btn from theScene.content;} }, ] } } When you press "JustPrint" you get: JustPrint When you press "Cancel" you get: Cancel Rectangle
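
    One possible workaround (a sketch only - FX.deferAction from javafx.lang is assumed to be available in this JavaFX Script version) is to defer the removal until the current event pulse has finished, so the dialog still blocks the mouse while the click is being delivered:

        btn = Button {
            layoutX: 20
            layoutY: 20
            blocksMouse: true
            text: "Cancel"
            action: function():Void {
                println("Cancel");
                // Queue the removal so it runs after the current MouseEvent
                // has finished propagating.
                FX.deferAction(function():Void {
                    delete btn from theScene.content;
                });
            }
        },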

    Read the article

  • What is the procedure for debugging a production-only error?

    - by Lord Torgamus
    Let me say upfront that I'm so ignorant on this topic that I don't even know whether this question has objective answers or not. If it ends up being "not," I'll delete or vote to close the post. Here's the scenario: I just wrote a little web service. It works on my machine. It works on my team lead's machine. It works, as far as I can tell, on every machine except for the production server. The exception that the production server spits out upon failure originates from a third-party JAR file, and is skimpy on information. I search the web for hours, but don't come up with anything useful. So what's the procedure for tracking down an issue that occurs only on production machines? Is there a standard methodology, or perhaps category/family of tools, for this? The error that inspired this question has already been fixed, but that was due more to good fortune than a solid approach to debugging. I'm asking this question for future reference. Some related questions: Test accounts and products in a production system Running test on Production Code/Server

    Read the article

  • jQuery append() and remove() element

    - by BandonRandon
    Hi, I have a form where I'm dynamically adding the ability to upload files with the append function, but I would also like to be able to remove unused fields. Here is the HTML markup: <span class="inputname">Project Images: <a href="#" class="add_project_file"><img src="images/add_small.gif" border="0"></a></span> <span class="project_images"> <input name="upload_project_images[]" type="file" /><br/> </span> Right now, if they click on the "add" gif a new row is added with this jQuery: $('a.add_project_file').click(function() { $(".project_images").append('<input name="upload_project_images[]" type="file" class="new_project_image" /> <a href="#" class="remove_project_file" border="2"><img src="images/delete.gif"></a><br/>'); return false; }); To remove the input box I've tried adding the class "remove_project_file" and then this function: $('a.remove_project_file').click(function() { $('.project_images').remove(); return false;}); I think there should be a much easier way to do this. Maybe I need to use $(this) for the remove. Another possible solution would be to expand the "add project file" handler to do both adding and removing fields. If any of you jQuery wizards have ideas, that would be great.
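
    A minimal sketch of one way to do this (the file_row wrapper class is made up here; .live() is used so the handler also applies to rows appended later):

        // Add: wrap each new row so it can be removed as a unit.
        $('a.add_project_file').click(function() {
            $('.project_images').append(
                '<span class="file_row">' +
                '<input name="upload_project_images[]" type="file" class="new_project_image" /> ' +
                '<a href="#" class="remove_project_file"><img src="images/delete.gif" border="0"></a><br/>' +
                '</span>');
            return false;
        });

        // Remove: delete only the wrapper the clicked link sits in, not .project_images itself.
        $('a.remove_project_file').live('click', function() {
            $(this).closest('.file_row').remove();
            return false;
        });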

    Read the article

  • How to disable a button based on grid row count using jQuery

    - by kumar
    Hello friends, I have a button on the view. I need to disable it based on the row count in the jqGrid; that is, if fewer than 100 rows come back, the button should be disabled. Any idea how to do this? Here is my code: <script type="text/javascript"> $(document).ready(function() { var RegisterGridEvents = function(excGrid) { //Register column chooser $(excGrid).jqGrid('navButtonAdd', excGrid + '_pager', { caption: "Columns", title: "Reorder Columns", onClickButton: function() { $(excGrid).jqGrid('columnChooser'); } }); $(".ui-pg-selbox").hide(); $('.ui-jqgrid-htable th:first').prepend('Select All').attr('style', 'font-size:7px'); //Register grid resize $(excGrid).jqGrid('gridResize', { minWidth: 350, maxWidth: 1500, minHeight: 400, maxHeight: 12000 }); }; $('#specialist-tab').tabs("option", "disabled", [2, 3, 4]); $('.button').button(); RegisterButtonEvents(); RegisterGridEvents("#ExceptionsGrid") }); var a = $("#ExceptionsGrid").jqGrid('getGridParam', 'records').toString(); alert(a); </script> Is this the right place to be doing this? Thanks.
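
    Note that getGridParam('records') is only meaningful once the grid has finished loading, so reading it right where the var a / alert(a) lines do runs too early. One sketch is to do the check in the grid's loadComplete event; the #myButton id is a placeholder for the real button:

        // Added to the grid definition (or set later via setGridParam):
        loadComplete: function() {
            var count = parseInt($("#ExceptionsGrid").jqGrid('getGridParam', 'records'), 10);
            if (count < 100) {
                $('#myButton').button('disable');   // jQuery UI button widget
            } else {
                $('#myButton').button('enable');
            }
        }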

    Read the article

  • Updating section in ConfigParser (or an alternative)

    - by lyrae
    I am making a plugin for another program, so I am trying to keep things as lightweight as possible. What I need to do is update the name of a section in the ConfigParser config file: [project name] author:john doe email: [email protected] year: 2010 I then have text fields where the user can edit the project's name, author, email and year. I don't think changing [project name] directly is possible, so I have thought of two solutions. 1 - Have my config file like this: [0] projectname: foobar author:john doe email: [email protected] year: 2010 That way I can change the project's name just like any other option. The problem is that I would need the section number to be auto-incremented, and to do this I would have to read every section and figure out what the next number should be. 2 - Delete the entire section and its values, and re-add it with the updated values; this requires a little more work as well, such as passing a variable that holds the old section name through functions, but I wouldn't mind if it's faster. Which of the two is best, or is there another way? I am willing to go with the fastest/most lightweight solution possible; it doesn't matter if it requires more work.
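
    For what it's worth, ConfigParser can effectively rename a section by copying its options into a new section and removing the old one, which avoids both the numeric-section scheme and most of the bookkeeping. A small sketch (Python 2 module name; the file name is a placeholder):

        import ConfigParser  # 'configparser' on Python 3

        def rename_section(parser, old_name, new_name):
            # Copy every option across, then drop the original section.
            parser.add_section(new_name)
            for option, value in parser.items(old_name):
                parser.set(new_name, option, value)
            parser.remove_section(old_name)

        config = ConfigParser.ConfigParser()
        config.read('project.cfg')
        rename_section(config, 'project name', 'my renamed project')
        with open('project.cfg', 'w') as f:
            config.write(f)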

    Read the article

  • How can I cluster short messages [Tweets] based on topic ? [Topic Based Clustering]

    - by Jagira
    Hello, I am planning an application which will cluster short messages/tweets based on topic. The number of topics will be limited, e.g. Sports [NBA, NFL, Cricket, Soccer], Entertainment [movies, music] and so on. I can think of two approaches to this. 1) Ask users to tag questions like Stack Overflow does: users select tags from a predefined list, and on the server side I cluster messages based on those tags. Pros: simple design, less complexity in code. Cons: choices for users are restricted, and clusters will not be dynamic; if a new event occurs, the predefined tags will miss it. 2) Take the message, delete the stopwords [predefined in a dictionary], apply some clustering algorithm to form a cluster, and display the cluster depending on its popularity; the cluster is maintained according to its sustained popularity, and new messages are skimmed and assigned to the corresponding clusters. Pros: dynamic clustering based on the popularity of the event/accident. Cons: increased complexity and more server resources required. I would like to know whether there are any other approaches to this problem, or any ways of improving the above methods. Also, please suggest some good clustering algorithms; I think a "K-Nearest" style clustering algorithm is apt for this situation.

    Read the article

  • MySQL random rows

    - by n00b
    Please read the whole question... 90% of you don't seem to do that, and some of you obviously only read the title. If you don't know the solution, don't answer. I'm entertaining the idea of getting random rows directly from MySQL. What I found was SELECT * FROM tablename WHERE somefield='something' ORDER BY RAND() LIMIT 5, but even I can see how slow that would be. Is the only way to do this something like SELECT * FROM tablename WHERE somefield='something' LIMIT RAND(aincrementvalue-5), 1 run 5 times, or is there a way that I, with my limited knowledge of databases, can't come up with? (No, I don't want random indexes; I hate the idea of them.) @commenters: please first look, then think, then look again, think again, and then post. As for why I think random indexes are a nasty hack: they don't give you random results; they give you x results from a random index in a predefined order. It's like a gapless id, only in the wrong order. If you fetch one row at a time to get true randomness, you fall back to my method but with an additional junk field. In the end the field exists only to serve as a helper for something that can be done without it with almost the same performance (but with better randomness), so it is a nasty hack. I solved it, see my answer; if you think it's incorrect please tell me.
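
    For reference, the usual way to avoid ORDER BY RAND() over the whole result is essentially the second idea: count the matching rows once, pick random offsets in application code, and fetch each row with LIMIT offset, 1 (LIMIT cannot take an expression, so the offset has to be substituted as a literal). A sketch:

        -- 1) Count the candidates once.
        SELECT COUNT(*) FROM tablename WHERE somefield = 'something';

        -- 2) In application code pick a random integer r in [0, count - 1], then repeat as needed:
        SELECT * FROM tablename WHERE somefield = 'something' LIMIT 41, 1;  -- 41 stands in for r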

    Read the article

  • edmx - The operation could not be completed - When adding Inheritance

    - by vdh_ant
    Hey guys, I have an edmx model onto which I have dragged two tables, one called 'File' and the other 'ApplicaitonFile'. These two tables have a 1 to 1 relationship in the database. If I stop here, everything works fine. But in my model I want 'ApplicaitonFile' to inherit from 'File', so I delete the 1 to 1 relationship, configure 'ApplicaitonFile' to inherit from 'File', and then remove the FileId from 'ApplicaitonFile', which was its primary key. (Note: I am following the instructions from here.) If I leave the model open at this point everything is fine, but as soon as I close it and try to reopen it I get the error "The operation could not be completed". I have been searching for a solution and found this - http://stackoverflow.com/questions/944050/entity-model-does-not-load - but as far as I can tell I don't have duplicate InheritanceConnectors (although I don't know exactly what I'm looking for, I can't see anything out of the ordinary, like two connectors with the same name), and the relationship I originally have is 1 to 1, not 1 to 0..1. Any ideas? This is driving me crazy...

    Read the article

  • QSqlQuery UPDATE/INSERT DateTime with server's time (eg CURRENT_TIMESTAMP)

    - by Skinniest Man
    I am using QSqlQuery to insert data into a MySQL database. Currently all I care about is getting this to work with MySQL, but ideally I'd like to keep this as platform-independent as possible. What I'm after, in the context of MySQL, is to end up with code that effectively executes something like the following query: UPDATE table SET time_field=CURRENT_TIMESTAMP() WHERE id='5' The following code is what I have attempted, but it fails: QSqlQuery query; query.prepare("INSERT INTO table SET time_field=? WHERE id=?"); query.addBindValue("CURRENT_TIMESTAMP()"); query.addBindValue(5); query.exec(); The error I get is: Incorrect datetime value: 'CURRENT_TIMESTAMP()' for column 'time_field' at row 1 QMYSQL3: Unable to execute statement. I am not surprised as I assume Qt is doing some type checking when it binds values. I have dug through the Qt documentation as well as I know how, but I can't find anything in the API designed specifically for supporting MySQL's CURRENT_TIMESTAMP() function, or that of any other DBMS. Any suggestions?
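
    One workaround (a sketch, with the caveat that portability then depends on each backend understanding CURRENT_TIMESTAMP) is to keep the timestamp expression in the SQL text itself and bind only the id, since CURRENT_TIMESTAMP is a SQL expression rather than a value the driver can type-check and bind:

        QSqlQuery query;
        // CURRENT_TIMESTAMP is part of the statement, so the driver never tries to
        // treat it as a datetime literal; only the id is bound.
        query.prepare("UPDATE table SET time_field = CURRENT_TIMESTAMP WHERE id = ?");
        query.addBindValue(5);
        if (!query.exec()) {
            qDebug() << query.lastError().text();
        }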

    Read the article

  • SSRS code variable resetting on new page

    - by edmicman
    In SSRS 2008 I am trying to maintain a SUM of SUMs on a group using custom Code. The reason is that I have a table of data, grouped and returning SUMs of the data. I have a filter on the group to remove lines where group sums are zero. Everything works except I'm running into problems with the group totals - it should be summing the visible group totals but is instead summing the entire dataset. There's tons of articles about how to work around this, usually using custom code. I've made custom functions and variables to maintain a counter: Public Dim GroupMedTotal as Integer Public Dim GrandMedTotal as Integer Public Function CalcMedTotal(ThisValue as Integer) as Integer GroupMedTotal = GroupMedTotal + ThisValue GrandMedTotal = GrandMedTotal + ThisValue Return ThisValue End Function Public Function ReturnMedSubtotal() as Integer Dim ThisValue as Integer = GroupMedTotal GroupMedTotal = 0 Return ThisValue End Function Basically CalcMedTotal is fed a SUM of a group, and maintains a running total of that sum. Then in the group total line I output ReturnMedSubtotal which is supposed to give me the accumulated total and reset it for the next group. This actually works great, EXCEPT - it is resetting the GroupMedTotal value on each page break. I don't have page breaks explicitly set, it's just the natural break in the SSRS viewer. And if I export the results to Excel everything works and looks correctly. If I output Code.GroupMedTotal on each group row, I see it count correctly, and then if a group spans multiple pages on the next page GroupMedTotal is reset and begins counting from zero again. Any help in what's going on or how to work around this? Thanks!
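
    One workaround that is often suggested for this (a sketch only - it is not guaranteed across rendering extensions, and Shared state is visible to concurrent executions of the report, so it needs care) is to declare the accumulators as Shared so they survive the per-page re-instantiation of the report's Code class:

        Public Shared GroupMedTotal As Integer
        Public Shared GrandMedTotal As Integer

        Public Shared Function CalcMedTotal(ThisValue As Integer) As Integer
            GroupMedTotal = GroupMedTotal + ThisValue
            GrandMedTotal = GrandMedTotal + ThisValue
            Return ThisValue
        End Function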

    Read the article

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 DB: Tasks_Table -id -task_complete -task_active -column_1 -.. -column_N The table stores instructions for uncompleted tasks that have to be executed by a service. I want to be able to scale my system in the future. Until now only one service on one computer read from the table. I have a stored procedure that selects all uncompleted and inactive tasks, and as the service begins to process tasks it updates the task_active flag in all the returned rows. To enable scaling of the system I want to be able to deploy the service on more machines. Because I want to prevent a task from being returned to more than one service, I have to update the stored procedure that returns uncompleted and inactive tasks. I figured that I have to lock the table (only one reader at a time - I know I have to use an appropriate ISOLATION LEVEL) and update the task_active flag in each row of the result set before returning it. So my question is: how do I modify the SELECT result set in the stored procedure before returning it?
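
    For what it's worth, SQL Server 2008 can claim and return the rows in one atomic statement with UPDATE ... OUTPUT, which sidesteps the separate SELECT; adding READPAST lets a second service instance skip rows another instance has already locked. A sketch (the batch size of 10 is arbitrary):

        UPDATE TOP (10) Tasks_Table WITH (ROWLOCK, READPAST)
        SET task_active = 1
        OUTPUT inserted.*           -- returns the claimed rows to this caller only
        WHERE task_complete = 0
          AND task_active = 0;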

    Read the article

  • Is www.example.com/post/21/edit a RESTful URI? I think I know the answer, but have another question.

    - by tmadsen
    I'm almost afraid to post this question - there has to be an obvious answer I've overlooked - but here I go. Context: I am creating a blog for educational purposes (I want to learn Python and web.py). I've decided that my blog will have posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD), and in my Post class I've created methods that respond to the POST, GET, PUT, and DELETE HTTP methods. So far so good. The current problem I'm having is a conceptual one. I know that sending a PUT HTTP message (with an edited post) to, e.g., /post/52 should update post 52 with the body contents of the HTTP message. What I do not know is how to conceptually correctly serve the (HTML) edit page. Would doing it like this, /post/52/edit, violate the idea of a URI, since 'edit' is not a resource but an action? On the other hand, could it be considered a resource, since all that URI will respond to is a GET that only returns an HTML page? So my ultimate question is this: how do I serve an HTML page intended for user editing in a RESTful manner?

    Read the article

  • Why is the main method not covered?

    - by Mike.Huang
    main method: public static void main(String[] args) throws Exception { if (args.length != EXPECTED_NUMBER_OF_ARGUMENTS) { System.err.println("Usage - java XFRCompiler ConfigXML PackageXML XFR"); } String configXML = args[0]; String packageXML = args[1]; String xfr = args[2]; AutoConfigCompiler compiler = new AutoConfigCompiler(); compiler.setConfigDocument(loadDocument(configXML)); compiler.setPackageInfoDoc(loadDocument(packageXML)); // compiler.setVisiblityDoc(loadDocument("VisibilityFilter.xml")); compiler.compileModel(xfr); } private static Document loadDocument(String fileName) throws Exception { TXDOMParser parser = (TXDOMParser) ParserFactory.makeParser(TXDOMParser.class.getName()); InputSource source = new InputSource(new FileInputStream(fileName)); parser.parse(source); return parser.getDocument(); } testcase: @Test public void testCompileModel() throws Exception { // construct parameters URL configFile = Thread.currentThread().getContextClassLoader().getResource("Ford_2008_Mustang_Config.xml"); URL packageFile = Thread.currentThread().getContextClassLoader().getResource("Ford_2008_Mustang_Package.xml"); File tmpFile = new File("Ford_2008_Mustang_tmp.xfr"); if(!tmpFile.exists()) { tmpFile.createNewFile(); } String[] args = new String[]{configFile.getPath(),packageFile.getPath(),tmpFile.getPath()}; try { // test main method XFRCompiler.main(args); } catch (Exception e) { assertTrue(true); } try { // test args length is less than 3 XFRCompiler.main(new String[]{"",""}); } catch (Exception e) { assertTrue(true); } tmpFile.delete(); } Coverage outputs displayed as the lines from String configXML = args[0]; in main method are not covered.

    Read the article

  • display multiple errors via bool flag c++

    - by igor
    It's been a long night, but I'm stuck on this and am now getting a "segmentation fault". Basically I'm trying to display all the error messages (the cout lines) that apply: if there is more than one error, I need to display all of them. bool validMove(const Square board[BOARD_SIZE][BOARD_SIZE], int x, int y, int value) { int index; bool moveError = true; const int row_conflict(0), column_conflict(1), grid_conflict(2); int v_subgrid=x/3; int h_subgrid=y/3; getCoords(x,y); for(index=0;index<9;index++) if(board[x][index].number==value){ cout<<"That value is in conflict in this row\n"; moveError=false; } for(index=0;index<9;index++) if(board[index][y].number==value){ cout<<"That value is in conflict in this column\n"; moveError=false; } for(int i=v_subgrid*3;i<(v_subgrid*3 +3);i++){ for(int j=h_subgrid*3;j<(h_subgrid*3+3);j++){ if(board[i][j].number==value){ cout<<"That value is in conflict in this subgrid\n"; moveError=false; } } } return true; }
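
    Separately from the crash, note that the function always returns true even after a conflict sets moveError to false; if the caller relies on the return value, the last line presumably needs to report the accumulated flag instead (nothing else changed in this sketch):

        // ... same three conflict checks as above, each printing its message
        // and setting moveError = false ...
        return moveError;   // true only if the move survived every check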

    Read the article

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer. As a unit test what value am I gaining by testing the ICustomerRepository in this manner? Under what conditions would the below test fail? For tests of this nature is it advisable to do tests that I know should fail? i.e. look for id 4 when I know I've only placed 5 in the repository I am probably missing something obvious but it seems the integration tests of the class that implements ICustomerRepository will be of more value. [TestClass] public class CustomerTests : TestClassBase { private Customer SetUpCustomerForRepository() { return new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" }, new Login { LoginCustId = 5, LoginName = "tdude2" } } }; } [TestMethod] public void CanGetCustomerById() { // arrange var customer = SetUpCustomerForRepository(); var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetById(5)).Return(customer); // assert Assert.AreEqual(customer, repository.GetById(5)); } } Test Base Class public class TestClassBase { protected T Stub<T>() where T : class { return MockRepository.GenerateStub<T>(); } } ICustomerRepository and IRepository public interface ICustomerRepository : IRepository<Customer> { IList<Customer> FindCustomers(string q); Customer GetCustomerByDifID(string difId); Customer GetCustomerByLogin(string loginName); } public interface IRepository<T> { void Save(T entity); void Save(List<T> entity); bool Save(T entity, out string message); void Delete(T entity); T GetById(int id); ICollection<T> FindAll(); }

    Read the article

  • .save puts NULL in id field in Rails

    - by mathee
    Here's the model file: class ProfileTag < ActiveRecord::Base def self.create_or_update(options = {}) id = options.delete(:id) record = find_by_id(id) || new record.id = id record.attributes = options puts "record.profile_id is" puts record.profile_id record.save! record end end This gives me the correct print out in my log. But it also says that there's a call to UPDATE that sets profile_id to NULL. Here's some of the output in the log file: Processing ProfilesController#update (for 127.0.0.1 at 2010-05-28 18:20:54) [PUT] Parameter: {"commit"=>"Save", ...} ?[4;36;1mProfileTag Create (0.0ms)?[0m ?[0;1mINSERT INTO `profile_tags` (`reputation_value`, `updated_at`, `tag_id`, `id`, `profile_id`, `created_at`) VALUES(0, '2010-05-29 01:20:54', 1, NULL, 4, '2010-05-29 01:20:54')?[0m ?[4;35;1mSQL (2.0ms)?[0m ?[0mCOMMIT?[0m ?[4;36;1mSQL (0.0ms)?[0m ?[0;1mBEGIN?[0m ?[4;35;1mSQL (0.0ms)?[0m ?[0mCOMMIT?[0m ?[4;36;1mProfileTag Load (0.0ms)?[0m ?[0;1mSELECT * FROM `profile_tags` WHERE (`profile_tags`.profile_id = 4) ?[0m ?[4;35;1mSQL (1.0ms)?[0m ?[0mBEGIN?[0m ?[4;36;1mProfileTag Update (0.0ms)?[0m ?[0;1mUPDATE `profile_tags` SET profile_id = NULL WHERE (profile_id = 4 AND id IN (35)) ?[0m I'm not sure I understand why the INSERT puts the value into profile_id properly, but then it sets it to NULL on an UPDATE. If you need more specifics, please let me know. I'm thinking that the save functionality does many things other than INSERTs into the database, but I don't know what I need to specify so that it will properly set profile_id.

    Read the article

  • Drawing and filling different polygons at the same time in MATLAB

    - by Hossein
    Hi, I have the code below. It loads a CSV file into memory. The file contains the coordinates for different polygons: each row has X,Y coordinates and a string which says which polygon the data point belongs to. For example, a polygon named "Poly1" with 100 data points has 100 rows in this file, like: Poly1,X1,Y1 Poly1,X2,Y2 ... Poly1,X100,Y100 Poly2,X1,Y1 ..... The Index.csv file has the number of data points (number of rows) for each polygon in Polygons.csv, but these details are not important. The thing is: I can successfully extract the data points for each polygon using the code below. However, when I plot, the lines of different polygons are connected to each other and the plot looks crappy. I need the polygons to be separated (they are connected and overlap in some areas, though). I thought that by using "fill" I could actually see them better, but "fill" just fills every polygon it can find, which is not desirable; I only want to fill inside the polygons. Can someone help me? I can also send you my data points if necessary; they are less than 200 KB. Thanks. [coordinates,routeNames,polygonData] = xlsread('Polygons.csv'); index = dlmread('Index.csv'); firstPointer = 0 lastPointer = index(1) for Counter=2:size(index) firstPointer = firstPointer + index(Counter) + 1 hold on plot(coordinates(firstPointer:lastPointer,2),coordinates(firstPointer:lastPointer,1),'r-') lastPointer = lastPointer + index(Counter) end
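
    For reference, fill can be called once per polygon inside the same loop that currently calls plot, so only that polygon's own points are filled rather than every patch on the axes. A sketch using the loop's existing index variables:

        hold on
        % Fill just this polygon's outline (x = column 2, y = column 1, as in the plot call).
        fill(coordinates(firstPointer:lastPointer,2), ...
             coordinates(firstPointer:lastPointer,1), ...
             'r', 'FaceAlpha', 0.3, 'EdgeColor', 'k');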

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where I need to populate Dim tables from EDW tables. The EDW tables are Type II, i.e. they maintain historical data. When it comes to loading a Dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). That is: there would be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the Dim. Out of these 10 records, some attributes have the same domain_code but different sub_domain_code, which needs further pivoting on the subdomain code. For example: if I have domain codes 01, 02, 03, these are a straight pivot on domain code; I would also have domain code 10 with subdomain code / version values of 2006, 2007, 2008, 2009. That means I need to split my source table with the above attributes into two: one for domain code and another for domain_code + version. So far so good. When it comes to loading the Dim table, per the design specs for dimensions (originally written by a third party), what they want is: for every single attribute change in the EDW, it should assemble all the related records for that NK (the new value plus the current values of the other attributes), process them to create a new dim record, and insert it. That means if a single extract contains 100 updated records (one per NK), it should assemble 100 + (100*9) records to insert/update the dim table. How good is this approach? The other way I tried is to just do a lookup into the dim table for that NK, get the values from the most recent record (the attributes that did not change), insert the new record, and update the current one. Which would be the better approach: assembling records on the source side for one attribute change, or looking at the dim table's most recent record and processing it? If this doesn't make sense, I'd be happy to elaborate further. Thanks.

    Read the article
