Search Results

Search found 6017 results on 241 pages for 'universal records managem'.

  • WPF: passing an int argument to an ObjectDataProvider method

    - by SAD
    Hi, I'm trying to produce a master/detail datagrid view using object data providers. I have seen many examples where the argument of the method that returns the records for the detail view is a string, as in this example:

        <!-- the orders datasource -->
        <ObjectDataProvider x:Key="OrdersDataProvider" ObjectType="{x:Type local:OrdersDataProvider}"/>
        <ObjectDataProvider x:Key="Orders" MethodName="GetOrdersByCustomer"
                            ObjectInstance="{StaticResource OrdersDataProvider}">
          <ObjectDataProvider.MethodParameters>
            <x:Static Member="system:String.Empty"/>
          </ObjectDataProvider.MethodParameters>
        </ObjectDataProvider>

    But in my case, the argument that I want to pass is an int ID, not a string. Can I still somehow use the object data provider to return all the records from the database for the detail view if the argument passed to the method has to be an int? How could it be implemented in MVVM? Thanks a lot for any advice!
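    A minimal code-behind sketch of one common way to pass the int, assuming the Orders provider above and a GetOrdersByCustomer(int) overload (names come from the question; the refresh pattern is a standard ObjectDataProvider idiom, and in MVVM the same call can live behind a command rather than in code-behind):

        using System.Windows;
        using System.Windows.Data;

        public partial class MainWindow : Window
        {
            // Hypothetical helper: push the selected customer's int ID into the
            // ObjectDataProvider declared in XAML and re-run the query.
            private void LoadOrdersForCustomer(int customerId)
            {
                var provider = (ObjectDataProvider)FindResource("Orders");
                provider.MethodParameters.Clear();
                provider.MethodParameters.Add(customerId); // a boxed int matches GetOrdersByCustomer(int)
                provider.Refresh();                        // re-invokes the method; the detail grid rebinds
            }
        }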

  • Stored procedure to create INSERT statements in MySQL?

    - by karthik
    I need a stored procedure that gets the records of a table and returns them as INSERT statements. It should have three input parameters: (1) table name, (2) column name, (3) column value. If table name = "EMP", column name = "EMPID" and column value = "15", then it should select all the values of EMP where EMPID is 15. Once the values are selected for the above condition, the stored procedure must return the script for inserting the selected values. The purpose of this is to take a backup of selected values: when the SP returns the INSERT statements, C# will just write them to a .sql file. I have no idea how to write this SP; any code samples are appreciated. Thanks.
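    Since the asker already plans to write the output from C#, here is a minimal client-side sketch that builds the INSERT script without a stored procedure at all, assuming the MySql.Data connector. The table and column names are interpolated naively for illustration only, so this is not injection-safe production code:

        using System.IO;
        using System.Text;
        using MySql.Data.MySqlClient;

        class InsertScriptDumper
        {
            // Write "INSERT INTO <table> (...) VALUES (...);" lines for matching rows.
            static void DumpInserts(string connStr, string table, string column, string value, string path)
            {
                using var conn = new MySqlConnection(connStr);
                conn.Open();
                using var cmd = new MySqlCommand($"SELECT * FROM {table} WHERE {column} = @v", conn);
                cmd.Parameters.AddWithValue("@v", value);
                using var reader = cmd.ExecuteReader();
                using var writer = new StreamWriter(path);
                while (reader.Read())
                {
                    var cols = new StringBuilder();
                    var vals = new StringBuilder();
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        if (i > 0) { cols.Append(", "); vals.Append(", "); }
                        cols.Append(reader.GetName(i));
                        vals.Append(reader.IsDBNull(i)
                            ? "NULL" // naive quoting: every non-null value is emitted as a string literal
                            : "'" + reader.GetValue(i).ToString().Replace("'", "''") + "'");
                    }
                    writer.WriteLine($"INSERT INTO {table} ({cols}) VALUES ({vals});");
                }
            }
        }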

  • Paging in ActiveRecord

    - by Alex
    Does Castle Project's ActiveRecord support paging? I need to load only the data that is currently visible on the screen. If I use [HasMany], the whole collection is loaded either immediately or at the first call (if the lazy attribute is true), but I only need something like the first 100 records (then maybe the next 100). A related question is how to load only 100 items at a time: if the collection is too big, memory can reach its limit as we constantly load more and more items.
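    ActiveRecord does expose paging at the query level (rather than through [HasMany]) via SlicedFindAll; a minimal sketch, with a hypothetical Invoice entity and filter standing in for the real ones:

        using Castle.ActiveRecord;
        using NHibernate.Criterion;

        [ActiveRecord("Invoices")]
        public class Invoice : ActiveRecordBase<Invoice>
        {
            [PrimaryKey] public int Id { get; set; }
            [Property]   public int CustomerId { get; set; }
        }

        public static class InvoicePaging
        {
            // Load one page of records instead of materializing a whole [HasMany] collection.
            public static Invoice[] GetPage(int pageIndex, int pageSize = 100)
            {
                return Invoice.SlicedFindAll(
                    pageIndex * pageSize,              // first result (zero-based offset)
                    pageSize,                          // maximum number of rows to fetch
                    Expression.Eq("CustomerId", 42));  // hypothetical filter criterion
            }
        }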

  • Java NullPointerException when traversing a non-null recordset

    - by Tim
    Hello again - I am running a query on Sybase ASE that produces a ResultSet, which I then traverse, writing the contents out to a file. Sometimes this throws a NullPointerException stating that the ResultSet is null - but only after printing out one or two records. Other times, with the exact same input, I receive no errors. I have been unable to reproduce this error consistently. The error message points to the line:

        output.print(rs.getString(1));

    It appears to happen when the query takes a little longer to run, for some reason. The result sets returned thus far have been very small (4 to 7 records). Sometimes I'll have to run the app 3 or 4 times, and then the errors just stop, as though the query was getting "warmed up". I've run the query manually and there don't appear to be any performance problems. Thanks again!

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.NET website (coded in VB.NET). I do an initial scan of the file to check for basic issues (line length, etc.), then import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc.). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out).

    I have a 5,000-line test file. The upload, initial check, and import take about 5-6 seconds. The detailed error check and insert into the Errors table take about 5-8 seconds (this file has about 1,200 errors in it). However, the "resets" part takes about 40-45 seconds for the 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the page takes 50 seconds to return. My UPDATE stored proc uses some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

        UPDATE dbo.Records
        SET dbo.Records.file_ID = CASE @field_name
                WHEN 'file_ID' THEN @field_value
                ELSE file_ID
            END,
            . . . (all 50 varchar field CASE statements here)
        WHERE dbo.Records.record_ID = @record_ID

    Is there any way I can improve performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I rework the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs making things slow (it's a quad-proc server with 8 GB RAM)? Any suggestions appreciated.
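    The asker's own first idea - one transaction around the whole batch - is cheap to try; a minimal sketch (in C#, though the question's app is VB.NET), assuming a hypothetical UpdateRecordField proc taking the parameters shown above. Each reset still costs a round trip, but not a separate commit:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static class ResetRunner
        {
            // Run all field resets inside one transaction instead of 750 autocommitted calls.
            static void ApplyResets(string connStr, IEnumerable<(int recordId, string field, string value)> resets)
            {
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = new SqlCommand("dbo.UpdateRecordField", conn, tx))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.Add("@record_ID", SqlDbType.Int);
                        cmd.Parameters.Add("@field_name", SqlDbType.VarChar, 128);
                        cmd.Parameters.Add("@field_value", SqlDbType.VarChar, 8000);

                        foreach (var (recordId, field, value) in resets)
                        {
                            cmd.Parameters["@record_ID"].Value = recordId;
                            cmd.Parameters["@field_name"].Value = field;
                            cmd.Parameters["@field_value"].Value = (object)value ?? DBNull.Value;
                            cmd.ExecuteNonQuery();
                        }
                        tx.Commit(); // one commit for the whole batch
                    }
                }
            }
        }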

  • Dynamically filling a WrapPanel with buttons from the DB and setting their event handlers

    - by Anry
    I have a table, and I'm using NHibernate. The entity class:

        public class PurchasedItem
        {
            public virtual int Id { get; set; }
            public virtual Product Product { get; set; }
            public virtual int SortSale { get; set; }
        }

    I want to get all the records of the PurchasedItems table (my method returns an IList), sorted in descending order by the SortSale column. How do I fill a WrapPanel with buttons from that IList, assign an event handler to each button, and display a message with the name of the product when a button is pressed?
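    A minimal WPF sketch, assuming the IList<PurchasedItem> has already been fetched and that Product exposes a Name property (the question doesn't show Product's members, so Name is an assumption):

        using System.Collections.Generic;
        using System.Windows;
        using System.Windows.Controls;

        static class WrapPanelFiller
        {
            // Add one button per purchased item and wire a Click handler to each.
            public static void Fill(WrapPanel panel, IList<PurchasedItem> items)
            {
                panel.Children.Clear();
                foreach (var item in items)
                {
                    var button = new Button
                    {
                        Content = item.Product.Name, // assumed property
                        Tag = item,                  // keep the entity for the handler
                        Margin = new Thickness(4)
                    };
                    button.Click += (sender, e) =>
                    {
                        var clicked = (PurchasedItem)((Button)sender).Tag;
                        MessageBox.Show(clicked.Product.Name);
                    };
                    panel.Children.Add(button);
                }
            }
        }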

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data: 500M entries now, with 200K new records every day. The total size is around 15 GB for now. My clients query the table mostly via a PHP script, and the size of the result set is around 10K records (not very large):

        select * from T where timestamp > X and timestamp < Y and additionFilters

    I want this operation to be cheap. Currently my table is hosted in Postgres 7 on a single box with 16 GB of memory, and I would love to see some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The table serves: queries 90%, inserts 9.9%, updates 0.1% (very rare).

  • iPhone: Add background button to view when UITableView has no cells

    - by Nic Hubbard
    I have a UITableViewController. When there is no data to populate the UITableView, I want to add a button that uses an image, so rather than seeing a table view with no records, the user sees an image that says "No records have been added. Tap to add one"; they tap it and we create a new record. I assumed I would just hide the UITableView and then create the button, but I never see the button. Here is what I am using:

        if ([[fetchedResultsController sections] count] == 0) {
            self.tableView.hidden = YES;
            // Create button w/ image
            UIButton *btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            btn.frame = CGRectMake(0, 0, 100, 50);
            [btn setImage:[UIImage imageNamed:@"no-rides.png"] forState:UIControlStateNormal];
            [self.view addSubview:btn];
        }

    Any ideas on why I never see the button? When I show this view, it seems to have a transparent background for a second, then changes to white...

  • Implement a WebDAV server in C#?

    - by HaukurHaf
    We've got a CMS system written in .NET C#. This system has editing facilities for templates (essentially HTML files) and various other support files such as CSS and JavaScript files. These "files" are not really files, but database records, and they are edited using plain old textareas within the CMS system. To make editing these "files" easier, one idea was to implement WebDAV support in the CMS system for them, so users could use WebDAV client software to connect to the CMS and then open them in VS 2008, for example. Firstly, is this a feasible idea? Secondly, if so, where do I start? Are there any good articles out there about implementing a WebDAV server in C# to provide access to either physical documents or "pseudo" documents which are in reality just records in a database? Any input appreciated.

  • NHibernate many-to-many deletes all my data in the table

    - by Daoming Yang
    I would love to thank @Stefan Steinegger and @David, who helped me out yesterday with many-to-many mapping. I have 3 tables - "News", "Tags" and "News_Tags" - with a many-to-many relationship, where "News_Tags" is the link table. If I delete one of the news records, the following mappings will delete all my news records which share the same tags. One thing I should note: I only allow unique tags to be stored in the "Tags" table. This mapping makes sense to me - it will delete the tag and the related News records - but how can I implement a tagging system with NHibernate? Can anyone give me some suggestions? Many thanks. Daoming.

    News mapping:

        <class name="New" table="News" lazy="false">
          <id name="NewID">
            <generator class="identity" />
          </id>
          <property name="Title" type="String"></property>
          <property name="Description" type="String"></property>
          <set name="TagsList" table="New_Tags" lazy="false" inverse="true" cascade="all">
            <key column="NewID" />
            <many-to-many class="Tag" column="TagID" />
          </set>
        </class>

    Tag mapping:

        <class name="Tag" table="Tags" lazy="false">
          <id name="TagID">
            <generator class="identity" />
          </id>
          <property name="TagName" type="String"></property>
          <property name="DateCreated" type="DateTime"></property>
          <!-- inverse="true" has been defined in the News mapping -->
          <set name="NewsList" table="New_Tags" lazy="false" cascade="all">
            <key column="TagID" />
            <many-to-many class="New" column="NewID" />
          </set>
        </class>

  • App Engine (Python) returns empty results for valid queries

    - by Grant
    I've got an app with around half a million 'records', each of which stores only three fields. I'd like to look up records by a string field with a query, but I'm running into problems. If I visit the console page, manually view a record and save it (without making changes), it then shows up in a query:

        SELECT * FROM wordEntry WHERE wordStr = 'SomeString'

    If I don't do this, I get 'no results'. Does App Engine need time to update its indexes? If so, how much? (I was also having trouble batch deleting and modifying data, but I was able to break that problem up into smaller chunks.)

  • data type mismatch when comparing dates in MS-Access

    - by jonos
    I have dates stored in an MS Access table in the 'General Date' format. I'm trying to create a query that returns records in a specific date range (all records from March 2010), but I encounter a 'data type mismatch in criteria expression' message. Here is my statement:

        SELECT Loan.loan_datetimeLeant, product_name, [product_artist/director], product_category, loanItem_cost
        FROM Loan INNER JOIN ((Product
            INNER JOIN Item ON Product.[product_id] = Item.[product_id])
            INNER JOIN Loan_Items ON Item.[item_id] = Loan_Items.[item_id])
          ON (Loan.[cust_id] = Loan_Items.[cust_id])
          AND (Loan.[loan_datetimeLeant] = Loan_Items.[loan_datetimeLeant])
        WHERE Loan.loan_datetimeLeant >= '01/03/2010'
          AND Loan.loan_datetimeLeant <= '31/03/2010'
        ORDER BY Loan.loan_datetimeLeant;

    I have tried variations on the date format (mm/dd/yyyy, dd/mm/yyyy 00:00:00).

  • Is there a way to rewrite this SQL query more efficiently?

    - by user320587
    Hi, I have two tables with the following definitions (the values are just examples; TableA has a clustered primary key on ID1, ID2 & ID3, and TableB has primary key ID1):

        TableA                      TableB
        ID1   ID2   ID3   Value1    ID1   Value1
        C1    P1    S1    ...       S1    ...
        C1    P1    S2    ...       S2    ...
        C1    P1    S3    ...       S3    ...
        C1    P1    S5    ...       S4    ...
                                    S5    ...

    I need to create a table that holds the records missing from TableA based on TableB. The select query I am trying to write should give the following output:

        C1    P1    S4

    To do this, I have the following SQL query:

        SELECT DISTINCT a.ID1, a.ID2, b.ID1
        FROM TableA a, TableB b
        WHERE b.ID1 NOT IN (
            SELECT DISTINCT [ID3]
            FROM TableA aa
            WHERE a.ID1 = aa.ID1
              AND a.ID2 = aa.ID2
        )

    Though this query works, it performs poorly, and my final TableA may have up to 1M records. Is there a way to rewrite this more efficiently? Thanks for any help, Javid

  • Access VBA remove CR & LF only from the beginning of a text string by searching for them

    - by uZI
    Hi there. I need to remove line breaks from the beginning of memo-type records. I don't want to use the Replace function, as it would remove all line breaks from the record, which is not desired; it's only the line breaks at the beginning of the field that I am interested in removing. Furthermore, my records do not always begin with a line break, so I can't really rely on fixed text positions - the solution would be to look for a line break at the beginning instead of always expecting it there. Any help will be greatly appreciated.

  • Data access from a single table in SQL Server 2005 is too slow

    - by Muhammad Kashif Nadeem
    The following is the script of the table. Accessing data from this table is too slow.

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Emails](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()),
            [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL,
            [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [AttachmentCount] [int] NULL,
            [Dated] [datetime] NULL
        ) ON [PRIMARY]

    The following query takes 50 seconds to fetch data:

        select id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated
        from emails

    If I include Body and HTML in the select, the time is even worse. The indexes are: id (unique, clustered), From (non-unique, non-clustered), To (non-unique, non-clustered). The table currently has 180,000+ records, and there might be 100,000 new records each month, so this will only get slower as time passes. Would splitting the data into two tables solve the problem? What other indexes should there be?

  • Recreation of DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of

    - by Jian Lin
    It seems that for a MySQL database with a table of tens of millions of records, backing up with

        mysqldump some_db > some_db.sql

    produces one big INSERT INTO statement (is it a single insert statement that handles all the records?). So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either. Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse each line so hard when restoring the DB?

  • About enumerations in Delphi and C++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function call returns an array of structs (or records, in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest datatype possible (a byte, in my case), while C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes:

        {$Z4}
        TTypeofLight = (
          V3d_AMBIENT,
          V3d_DIRECTIONAL,
          V3d_POSITIONAL,
          V3d_SPOT
        );
        {$Z1}

    My questions are: What will become of my structs when they are compiled on/for a 64-bit environment? Does the default C++ int grow to 8 bytes? Are there other memory alignment / data type size changes (other than pointers)?

  • How to create an array from a database field?

    - by Sofyan
    Hi, please help me create an array from a field in my DB. The field holds values separated by commas. Below is an illustration:

        ID | article_title_fld   | article_tags_fld
        ---+---------------------+----------------------------------
        1  | Learn PHP           | PHP, coding, scripting
        3  | Javascript Tutorial | Javascript, scripting, tutorial
        4  | Styling with CSS    | CSS, tutorial, web design

    I want to collect all records in article_tags_fld and put them into one array, perhaps named $array1, with a printout as below:

        Array
        (
            [0] => PHP
            [1] => coding
            [2] => scripting
            [3] => Javascript
            [4] => scripting
            [5] => tutorial
            [6] => CSS
            [7] => tutorial
            [8] => web design
        )

  • How can I make a Recycle Bin for a database application?

    - by Wael Dalloul
    Hi, I have a database application, and I want to allow the user to restore deleted records from the database - like the Recycle Bin in Windows, but for database records. Assume that I have a lot of related tables with a lot of fields. Edit: let's say that I have the following structures:

        Reports table:      RepName (primary key), ReportData
        Users table:        ID (primary key), Name
        UserReports table:  RepName (primary key), UserID (primary key), IsDeleted

    Now if I put an IsDeleted field in the UserReports table, the user can't add the same record again once it is marked as deleted, because the record already exists and this would cause a duplicate key.

  • iPhone UI: No edit button for UITableView, bad idea?

    - by Nic Hubbard
    I have a UITableViewController which lets the user drill down into different records. On the second-level view, the user can add and edit new records. But I am not sure what to do: the back button is on the top left, and I need to put the "Add" button on the top right, so there is no room (keeping to the HIG) for the edit button, which would normally go where the back button is. (I am using a tab bar, so I can't put it at the bottom.) Do you think it is reasonable to expect users to know to swipe to delete a record? Or do I need an edit button? If I do need one, where should I put it if I am following the HIG?

  • ResultSet and aggregation

    - by kachanov
    Ok, I admit my situation is special. There is a data system that supports SQL-92 and a JDBC interface. However, the SQL requests are pretty expensive, and in my application I need to retrieve the same data multiple times and aggregate it ("group by") on different fields to show different dimensions of the same data. For example, on one screen I have three tables that show the same set of records but aggregated by city (1st grid), by population (2nd grid), and by number of babies (3rd grid). This amounts to 3 SQL queries (which is very slow), unless any of you can suggest an idea - or a library, from Apache Commons or Google Code - that would let me select all records into a ResultSet once and then get 3 arrays of data grouped by different fields from that single ResultSet. Am I missing some obvious and unexpected solution to this problem?

  • Can't append to array using string field name [$] when performing update on array fields

    - by Haraldo
    I am attempting to perform a MongoDB update on each field in an array of records. An example schema is below:

        {
            "_id" : ObjectId("508710f16dc636ec07000022"),
            "summary" : "",
            "uid" : "ABCDEF",
            "username" : "bigcheese",
            "name" : "Name of this document",
            "status_id" : 0,
            "rows" : [
                { "score" : 12, "status_id" : 0, "uid" : 1 },
                { "score" : 51, "status_id" : 0, "uid" : 2 }
            ]
        }

    So far I have been able to perform single updates like this:

        db.mycollection.update({"uid":"ABCDEF","rows.uid":1}, {$set:{"rows.$.status_id":1}}, false, false)

    However, I am struggling with how to perform an update that will set all array records to a status_id of 1 (for instance). Below is how I imagined it should work:

        db.mycollection.update({"uid":"ABCDEF"}, {$set:{"rows.$.status_id":1}}, false, true)

    However, I get the error:

        can't append to array using string field name [$]

    I have tried for quite a while with no luck. Any pointers?
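    The positional $ operator only ever resolves to the first matched array element, which is why the second form fails. Modern MongoDB (3.6+, so after this question was asked) adds the all-positional operator $[]; a minimal sketch with the current C# driver, reusing the collection and field names from the question (the database name "test" is an assumption):

        using MongoDB.Bson;
        using MongoDB.Driver;

        // Set status_id = 1 on every element of the "rows" array of the matching document.
        // "$[]" targets all array elements, unlike "$", which targets only the first match.
        var client = new MongoClient("mongodb://localhost:27017");
        var collection = client.GetDatabase("test").GetCollection<BsonDocument>("mycollection");

        var filter = Builders<BsonDocument>.Filter.Eq("uid", "ABCDEF");
        var update = Builders<BsonDocument>.Update.Set("rows.$[].status_id", 1);
        collection.UpdateOne(filter, update);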

  • Hierarchical Hibernate, how many queries are executed?

    - by ghost1
    So I've been dealing with a home-brew DB framework that has some serious flaws, the justification for its use being that not using an ORM saves on the number of queries executed. If I'm selecting all possible records from the top level of a joinable object hierarchy, how many separate calls to the DB will be made when using an ORM (such as Hibernate)? I feel like calling bullshit on this, as joinable entities should be brought down in one query, right? Am I missing something here? Note: lazy initialization doesn't matter in this scenario, as all records will be used.
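    For what it's worth, an eager join fetch is the standard way to pull the whole hierarchy in one round trip; a minimal sketch in NHibernate (the .NET port of Hibernate, used here to keep this page's examples in one language), with hypothetical Order/Lines entities and mappings assumed to exist:

        using System.Collections.Generic;
        using NHibernate;

        static class OrderLoader
        {
            // One round trip: the join fetch pulls parents and their children in a single SELECT.
            public static IList<Order> LoadOrdersWithLines(ISession session)
            {
                return session
                    .CreateQuery("select distinct o from Order o left join fetch o.Lines")
                    .List<Order>();
            }
        }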

  • .Net Remote Log Querying

    - by jlafay
    I have a Win Service that I'm working on that consists of the service, a WF service (using WorkflowServiceHost), a workflow (WorkflowApplication) that queries/processes data from a SQL Server DB, and a Comm Marshall class that handles data flow between the service and the WF. The WF does a lot of heavy data processing, and the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine. Critical events will be committed to the event log, because I strongly believe that should be common practice: admins naturally will look there, and it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic, such as records queried, records returned, records processed, etc. The data is very critical and we need to log actions as they occur.

    The logs are currently kept as text files on disk, and I think that is OK. Ideally I would like to record log events as XML, because it is easier to query and less costly than a DB, especially since our DB servers already do a lot of heavy processing.

    Since we are essentially replacing a VB6 application with a robust Windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It receives callbacks from the service after subscribing to it and being added to a collection of subscribers. Basic statistics and summaries are updated client-side after receiving basic monitoring data of what is going on with the service. We would also like to provide a way to see details when we need to examine what is going on further, because this is a long-running data processing service and issues need to be addressed immediately.

    What is the best way to implement some type of query that is sent from the client to the service and returned to the client? Would it be efficient to expose another method on the service and have it pass the request off to some querying class/object that examines the XML files by whichever specification and then returns the results? That's the main concern: I don't want the service's processing to bottleneck much while this occurs. It seems that WF already threads well for the most part, but I want to make sure this is the right way to go about it. Any suggestions/recommendations on how to architect and implement a small log-querying framework for a remote service would be awesome.
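    A minimal sketch of the querying piece, assuming one XML file per day of events shaped like <event time="..." level="...">message</event> - the schema is invented here, since the question doesn't specify one. The service method exposed to clients would simply delegate to something like this:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Xml.Linq;

        // Hypothetical query class: scans XML log files and filters events by time and level.
        public class LogQueryService
        {
            private readonly string _logDirectory;

            public LogQueryService(string logDirectory) => _logDirectory = logDirectory;

            public IReadOnlyList<string> FindEvents(DateTime from, DateTime to, string level)
            {
                return Directory.EnumerateFiles(_logDirectory, "*.xml")
                    .SelectMany(file => XDocument.Load(file).Descendants("event"))
                    .Where(e =>
                    {
                        var time = (DateTime)e.Attribute("time"); // assumes the attribute exists
                        return time >= from && time <= to
                            && string.Equals((string)e.Attribute("level"), level,
                                             StringComparison.OrdinalIgnoreCase);
                    })
                    .Select(e => e.Value) // the event's message text
                    .ToList();
            }
        }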

  • Simulating an identity column within an insert trigger

    - by William Jens
    I have a table for logging that needs a log ID, but I can't use an identity column because the log ID is part of a combo key:

        create table StuffLogs (
            StuffID int,
            LogID int,
            Note varchar(255)
        )

    There is a combo key on StuffID & LogID. I want to build an insert trigger that computes the next LogID when inserting log records. I can do it for one record at a time (see below for how LogID is computed), but that's not really effective, and I'm hoping there's a way to do this without cursors:

        select @NextLogID = isnull(max(LogID), 0) + 1
        from StuffLogs
        where StuffID = (select StuffID from inserted)

    The net result should allow me to insert any number of records into StuffLogs with the LogID column auto-computed:

        StuffID  LogID  Note
        123      1      foo
        123      2      bar
        456      1      boo
        789      1      hoo

    Inserting another record with StuffID: 123, Note: bop will result in the following record:

        StuffID  LogID  Note
        123      3      bop
