Search Results

Search found 5233 results on 210 pages for 'a records'.


  • Format for storing contacts in a database

    - by Gart
    I'm thinking of the best way to store personal contacts in a database for a business application. The traditional and straightforward approach would be to create a table with columns for each element, i.e. Name, Telephone Number, Job Title, Address, etc. However, there are known industry standards for this kind of data, for example vCard, hCard, vCard-RDF/XML, or even the Windows Contacts XML Schema. Using a standard format would offer some benefits, like interoperability with other systems. But how can I decide which method to use? The requirements are mainly to store the data. Search and ordering queries are highly unlikely but possible. The volume of the data is 100,000 records at maximum. My database engine supports native XML columns, so I have been thinking of using some XML-based format to store the personal contacts. Then it would be possible to put XML indexes on this data if searching and ordering turn out to be needed. Is this a good approach? Which contact format and schema would you recommend for this?
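
    A minimal sketch of the XML-column approach, assuming SQL Server syntax and a simple vCard-like element layout (both are assumptions; adapt to your engine and whichever schema you choose):

        CREATE TABLE Contacts (
            ContactId  INT IDENTITY(1,1) PRIMARY KEY,
            ContactXml XML NOT NULL
        );

        CREATE PRIMARY XML INDEX PXML_Contacts ON Contacts (ContactXml);

        -- if searching ever becomes real:
        SELECT ContactId
        FROM Contacts
        WHERE ContactXml.exist('/contact/name[. = "John Smith"]') = 1;

    A common middle road, if a few fields do end up searched heavily, is to keep the standard XML as the document of record and promote just those fields into ordinary indexed columns.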

  • Dumping an ADODB recordset to XML, then back to a recordset, then saving to the db

    - by Mark Biek
    I've created an XML file using the .Save() method of an ADODB recordset in the following manner:

        dim res
        dim objXML : Set objXML = Server.CreateObject("MSXML2.DOMDocument")
        'This returns an ADODB recordset
        set res = ExecuteReader("SELECT * from some_table")
        With res
            Call .Save(objXML, 1)
            Call .Close()
        End With
        Set res = nothing

    Let's assume that the XML generated above then gets saved to a file. I'm able to read the XML back into a recordset like this:

        dim res : set res = Server.CreateObject("ADODB.recordset")
        res.open server.mappath("/admin/tbl_some_table.xml")

    And I can loop over the records without any problem. However, what I really want to do is save all of the data in res to a table in a completely different database. We can assume that some_table already exists in this other database and has exactly the same structure as the table I originally queried to make the XML. I started by creating a new recordset and using AddNew to add all of the rows from res to the new recordset:

        dim outRes : set outRes = Server.CreateObject("ADODB.recordset")
        dim outConn : set outConn = Server.CreateObject("ADODB.Connection")
        dim testConnStr : testConnStr = "DRIVER={SQL Server};SERVER=dev-windows\sql2000;UID=myuser;PWD=mypass;DATABASE=Testing"

        outConn.open testConnStr
        outRes.activeconnection = outConn
        outRes.cursortype = adOpenDynamic
        outRes.locktype = adLockOptimistic
        outRes.source = "product_accessories"
        outRes.open

        while not res.eof
            outRes.addnew
            for i = 0 to res.fields.count - 1
                outRes(res.fields(i).name) = res(res.fields(i).name)
            next
            outRes.movefirst
            res.movenext
        wend

        outRes.updatebatch

    But this bombs the first time I try to assign a value from res to outRes:

        Microsoft OLE DB Provider for ODBC Drivers error '80040e21'
        Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.

    Can someone tell me what I'm doing wrong, or suggest a better way to copy the data loaded from XML into a different database?
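
    For what it's worth, 80040e21 on a field assignment usually means the target column rejected the value; a read-only identity column and type/size mismatches are the usual suspects. The outRes.movefirst inside the loop also repositions the cursor away from the row just added. A sketch of the copy loop with both addressed (the identity column name "ID" is an assumption):

        while not res.eof
            outRes.addnew
            for i = 0 to res.fields.count - 1
                ' skip the autonumber column and let the server assign it
                if ucase(res.fields(i).name) <> "ID" then
                    outRes(res.fields(i).name).value = res.fields(i).value
                end if
            next
            res.movenext    ' no movefirst on outRes; leave the pending row alone
        wend
        outRes.updatebatch

    It is also worth confirming that adOpenDynamic and adLockOptimistic are actually defined (via adovbs.inc or a METADATA typelib reference); in classic ASP an undeclared name is simply Empty, and UpdateBatch pairs with adLockBatchOptimistic (4) rather than adLockOptimistic (3).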

  • Codeigniter Pagination: Run the Query Twice?

    - by Frank
    I'm using CodeIgniter and the pagination class. This is such a basic question, but I need to make sure I'm not missing something. In order to get the config items necessary to paginate results coming from a MySQL database, it's basically necessary to run the query twice, is that right? In other words, you have to run the query to determine the total number of records before you can paginate. So I'm doing this query to get the number of results:

        $this->db->where('something', $something);
        $query = $this->db->get('the_table_name');
        $num_rows = $query->num_rows();

    Then I'll have to do it again to get the results with the limit and offset. Something like:

        $this->db->where('something', $something);
        $this->db->limit($limit, $offset);
        $query = $this->db->get('the_table_name');
        if ($query->num_rows()) {
            foreach ($query->result_array() as $row) {
                ## get the results here
            }
        }

    I just wonder if I'm actually doing this right, in that the query always needs to be run twice? The queries I'm using are much more complex than what is shown above.
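
    Two statements are indeed the usual pattern, but the first one doesn't have to fetch every row. A sketch using CodeIgniter's count_all_results(), which issues a SELECT COUNT(*) on the server instead of pulling the whole result set back:

        // count without fetching rows
        $this->db->where('something', $something);
        $total_rows = $this->db->count_all_results('the_table_name');

        // fetch only the current page
        $this->db->where('something', $something);
        $this->db->limit($limit, $offset);
        $query = $this->db->get('the_table_name');

    For complex queries, MySQL's SQL_CALC_FOUND_ROWS with FOUND_ROWS() is another option, though two plain queries against well-indexed tables are often just as fast.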

  • Saving and retrieving image in SQL database from C# problem

    - by Mobin
    I used this code for inserting records into a person table in my DB:

        System.IO.MemoryStream ms = new System.IO.MemoryStream();
        Image img = Image_Box.Image;
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
        this.personTableAdapter1.Insert(NIC_Box.Text.Trim(), Application_Box.Text.Trim(),
            Name_Box.Text.Trim(), Father_Name_Box.Text.Trim(), DOB_TimePicker.Value.Date,
            Address_Box.Text.Trim(), City_Box.Text.Trim(), Country_Box.Text.Trim(),
            ms.GetBuffer());

    When I retrieve it with this code, it works perfectly fine:

        byte[] image = (byte[])Person_On_Application.Rows[0][8];
        MemoryStream Stream = new MemoryStream();
        Stream.Write(image, 0, image.Length);
        Bitmap Display_Picture = new Bitmap(Stream);
        Image_Box.Image = Display_Picture;

    But if I update the record with my own generated query, like this:

        System.IO.MemoryStream ms = new System.IO.MemoryStream();
        Image img = Image_Box.Image;
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
        Query = "UPDATE Person SET Person_Image ='" + ms.GetBuffer()
              + "' WHERE (Person_NIC = '" + NIC_Box.Text.Trim() + "')";

    then the next time I use the same retrieval code to display the image, the program throws an exception.
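
    The likely culprit: concatenating ms.GetBuffer() into the SQL string calls ToString() on the byte array, so the column ends up holding the literal text "System.Byte[]" rather than the image bytes, and the retrieval code then fails to parse it as a bitmap. A sketch of the update done with a parameter instead (connection string and using directives assumed):

        byte[] imageBytes = ms.ToArray(); // ToArray trims the unused tail GetBuffer can include
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE Person SET Person_Image = @img WHERE Person_NIC = @nic", conn))
        {
            cmd.Parameters.Add("@img", SqlDbType.Image).Value = imageBytes;
            cmd.Parameters.Add("@nic", SqlDbType.VarChar).Value = NIC_Box.Text.Trim();
            conn.Open();
            cmd.ExecuteNonQuery();
        }

    Parameterizing also closes the SQL injection hole that concatenating Person_NIC into the query opens.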

  • Perl Regex - Condensing groups of find/replace

    - by brydgesk
    I'm using Perl to perform some file cleansing, and am running into some performance issues. One of the major parts of my code involves standardizing name fields. I have several sections that look like this:

        sub substitute_titles {
            my ($inStr) = @_;
            ${$inStr} =~ s/ PHD./ PHD /;
            ${$inStr} =~ s/ P H D / PHD /;
            ${$inStr} =~ s/ PROF./ PROF /;
            ${$inStr} =~ s/ P R O F / PROF /;
            ${$inStr} =~ s/ DR./ DR /;
            ${$inStr} =~ s/ D.R./ DR /;
            ${$inStr} =~ s/ HON./ HON /;
            ${$inStr} =~ s/ H O N / HON /;
            ${$inStr} =~ s/ MR./ MR /;
            ${$inStr} =~ s/ MRS./ MRS /;
            ${$inStr} =~ s/ M R S / MRS /;
            ${$inStr} =~ s/ MS./ MS /;
            ${$inStr} =~ s/ MISS./ MISS /;
        }

    I'm passing by reference to try to get at least a little speed, but I fear that running so many (literally hundreds of) specific string replaces on tens of thousands (likely hundreds of thousands, eventually) of records is going to hurt performance. Is there a better way to implement this kind of logic than what I'm doing currently? Thanks.

    Edit: Quick note, not all the replace functions just remove periods and spaces. There are string deletions, soundex groups, etc.
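
    One common consolidation for the literal find/replace pairs is table-driven: put them in a hash, build a single alternation (longest keys first so overlapping patterns resolve predictably), and let one s///g pass do the work. A sketch covering just the substitution subset; the deletions and soundex groups would keep their own code:

        my %sub = (
            ' PHD.'     => ' PHD ',
            ' P H D '   => ' PHD ',
            ' PROF.'    => ' PROF ',
            ' P R O F ' => ' PROF ',
            ' DR.'      => ' DR ',
            ' D.R.'     => ' DR ',
            # ... remaining titles ...
        );

        my $alt = join '|',
                  map { quotemeta }
                  sort { length($b) <=> length($a) } keys %sub;
        my $re = qr/($alt)/;

        sub substitute_titles {
            my ($inStr) = @_;
            ${$inStr} =~ s/$re/$sub{$1}/g;
        }

    Note that quotemeta makes the dots literal; the originals like s/ PHD./ PHD / actually match any character after "PHD", which may not be intended.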

  • Canonical resource for forms-based design in ASP.NET MVC?

    - by Robert Harvey
    Is there a resource on the web that describes various form scenarios in ASP.NET MVC, and gives example solutions within a sensible, consistent design philosophy? Examples of such scenarios might be:

      - One-to-many forms, like invoice data-entry forms.
      - Foreign-table forms, such as Add New User in a form that requires specifying a user.
      - Forms that require dynamic interaction, using Ajax or JSON.
      - Popup forms.
      - Forms requiring multiple data records to be input, without postbacks.

    Note that there is considerable conceptual and technological overlap among these example scenarios. I am aware that there is a vast patchwork quilt of available technologies and examples out there that provide partial solutions and pieces of solutions, such as jQuery Ajax, CSS, and so forth. But I would like guidance in using these technologies in more effective and consistent ways. I am not considering Web Forms integration with an ASP.NET MVC application; I would still like my applications to be pure MVC. Nor am I, at the moment, considering a paid solution like Telerik. But I would like to know if someone has already done some of the work of combining these technologies into a consistent, cohesive whole that follows a sensible design philosophy (an open source framework, perhaps?).

  • Take Current Snapshot of DB and send it to FTP in same PHP Scripts: Advice Needed

    - by Rachel
    Not sure if I can do it this way. I want to get a current snapshot of the database and send it to an FTP server, and both pieces of functionality should be implemented in PHP scripts. Here are the steps I am thinking of right now. In my PHP scripts (basically I am extending PDO in my Dao class and then preparing the query):

        $qry  = "SELECT * FROM MyTablename";
        $stmt = $this->prepare($qry);
        $stmt = $this->execute();

    Now I will store $stmt in a CSV file using fputcsv, or I will execute the SQL command from the script itself and then try to store the result in the $file (CSV file). Note here that I do not have any CSV file with me at this point, so basically I will have to create one; let's say it's $file, so then:

        $file = fputcsv($stmt);

    or:

        $file = exec("SELECT * FROM MyTablename");

    Will this put all records in the file? If yes, then I will use FTP functionality to transfer the file to the FTP folder. I am not sure if this approach would work, and I also have concerns regarding the need for preparing $qry. Any suggestions or a different approach would be highly appreciated. Thanks!
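
    Neither line quite works as written: fputcsv() writes one row to an already-open file handle per call (it won't take a statement handle), and exec() runs a shell command, not SQL. A sketch of the usual shape, with host, credentials, and paths as placeholders:

        // assuming $pdo is the PDO instance your Dao wraps
        $stmt = $pdo->query('SELECT * FROM MyTablename');

        $fp = fopen('/tmp/MyTablename.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            fputcsv($fp, $row);          // one table row per CSV line
        }
        fclose($fp);

        $conn = ftp_connect('ftp.example.com');
        ftp_login($conn, 'ftpuser', 'ftppass');
        ftp_put($conn, 'MyTablename.csv', '/tmp/MyTablename.csv', FTP_ASCII);
        ftp_close($conn);

    Preparing the query buys nothing here, since no user input is bound into it; prepare/execute matters when parameters come from outside.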

  • ASP.NET membership db using integrated security problem

    - by rem
    I published an ASP.NET MVC web site to a server on a virtual machine (Hyper-V), with SQL Server Express installed on the same server. The problem is that the ASP.NET membership system doesn't work in integrated mode. When the Web.config file contains records as follows:

        <connectionStrings>
            <remove name="LocalSqlServer" />
            <add name="MyDBConnectionString"
                 connectionString="data source=vm-1\SQLEXPRESS;Initial Catalog=testdb;Integrated Security=SSPI;"
                 providerName="System.Data.SqlClient"/>
        </connectionStrings>

    I get an error when trying to register and log in to the site. If I change the connection string this way:

        <connectionStrings>
            <remove name="LocalSqlServer" />
            <add name="MyDBConnectionString"
                 connectionString="data source=vm-1\SQLEXPRESS;Initial Catalog=testdb;User ID=XX;Password=XXXXXXX;"
                 providerName="System.Data.SqlClient"/>
        </connectionStrings>

    I can register and log in without any problem. What could cause the problem with using the ASP.NET membership database in integrated security mode?
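
    One common cause, offered as a guess: with Integrated Security=SSPI the site connects as the IIS worker-process identity (NT AUTHORITY\NETWORK SERVICE on a default IIS 6 application pool), and that account has no rights in testdb. A sketch of the grants, assuming that identity and that aspnet_regsql created the standard membership roles:

        CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;
        GO
        USE testdb;
        CREATE USER [NT AUTHORITY\NETWORK SERVICE]
            FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
        EXEC sp_addrolemember 'aspnet_Membership_FullAccess',
                              'NT AUTHORITY\NETWORK SERVICE';
        EXEC sp_addrolemember 'aspnet_Roles_FullAccess',
                              'NT AUTHORITY\NETWORK SERVICE';

    The exact error text (which login failed) would confirm whether this is the right account to grant.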

  • PDO Database Connections Problem

    - by Metropolis
    Hey everyone, over a year ago I created my own database classes which use PDO and handle all preparing, executing, and closing of connections. These classes have been working great up until now. There are two different database servers I am grabbing from: MySQL and MS SQL Express. I am retrieving an employee ID from the MySQL server and using it to get that employee's information from the MS SQL server. There are about 11k records coming from the MySQL server, and my program only makes it through about 1200 of them before crashing with an error like the following:

        Connection failed (odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC)
        Class (PDOException) SQLSTATE[08001] SQLDriverConnect: 0
        [unixODBC][FreeTDS][SQL Server]Unable to connect to data source

    It seems like the program is not able to connect to the data source, but it runs the exact same query about 30 times before this without a problem. Also, I have thoroughly checked all of the data coming into the query and it all looks fine. I believe the issue may be that too many connections are being created; I have tried to close all connections in many different places, but nothing seems to fix the problem. Any debugging help or suggestions would be appreciated! Craig Metrolis
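
    Without seeing the classes it's only a guess, but a pattern worth checking is whether each query constructs a fresh PDO object; at ~1200 rapid connections FreeTDS/ODBC can start refusing new ones. A sketch of caching one connection per DSN so the whole batch shares two handles:

        // one PDO instance per DSN, created lazily and reused
        class Db
        {
            private static $instances = array();

            public static function get($dsn, $user, $pass)
            {
                if (!isset(self::$instances[$dsn])) {
                    self::$instances[$dsn] = new PDO($dsn, $user, $pass);
                }
                return self::$instances[$dsn];
            }
        }

        $mssql = Db::get('odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC', $user, $pass);

    PDO::ATTR_PERSISTENT in the constructor's options array is the other common lever, pooling connections across requests rather than within one script.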

  • Adaptive user interface/environment algorithm

    - by WowtaH
    Hi all, I'm working on an information system (in C#) that, while my users use it, gathers statistical data on which pieces of information (tables and records) each user requests the most, and which parts of the interface he or she uses most. I'm using this statistical data to make the application adapt to the user's needs, both in the way the interface presents itself (e.g. tab/pane ordering) and in using the frequently viewed information to, for example, rank items higher in search results and suggestion lists. What I'm looking for is an algorithm/formula to determine the current 'hotness'/relevance of these objects for a specific user. A simple hit counter for each object won't be sufficient, because the user might view some information quite frequently for a period of time and then move on to the next, making the old information less relevant. So I think my algorithm also needs some sort of sliding/historical principle to account for the changing popularity of the objects in the application over time. So, the question is: does anybody have some sort of algorithm that accounts for that 'popularity over time'? Preferably with some explanation of the parameters :) Thanks! PS: I've looked at other posts like http://stackoverflow.com/questions/32397/popularity-algorithm but I couldn't quite port them to my specific case. Any help is appreciated.
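
    A common answer is exponential time decay: keep one score per object, and on every hit first decay the stored score by the time elapsed since the last update, then add the hit. The half-life is the main parameter - how many days it takes an untouched object to lose half its heat. A sketch in C# (the 14-day half-life is an arbitrary starting point):

        public class PopularityScore
        {
            private const double HalfLifeDays = 14.0; // tune per application
            private double score;
            private DateTime lastUpdate = DateTime.UtcNow;

            public void RecordHit()
            {
                Decay();
                score += 1.0; // could weight by action type
            }

            public double Current
            {
                get { Decay(); return score; }
            }

            private void Decay()
            {
                double days = (DateTime.UtcNow - lastUpdate).TotalDays;
                score *= Math.Pow(0.5, days / HalfLifeDays);
                lastUpdate = DateTime.UtcNow;
            }
        }

    The attraction is that no per-hit history is needed: the decay factor composes, so one score and one timestamp per object is enough, and objects stay comparable because everything decays at the same rate.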

  • NSArrayController not working with NSMutableDictionary for NSTableView

    - by Miraaj
    Hi all, I am trying to display content in an NSTableView using an NSArrayController of NSMutableDictionary records. I followed the steps written below:

      1. In the application delegate class, I created an NSMutableArray object named 'geniuses' and stored some NSMutableDictionary objects in it with the keys 'geniusName' and 'domain'.
      2. I took an NSArrayController object in IB, bound its content array to the application delegate class, and set its model key path to 'geniuses'. In the attributes inspector pane I set the mode to class, set the class name to NSMutableDictionary, and added the keys 'geniusName' and 'domain' to it.
      3. In IB I took a table view object and bound its content property to the array controller, with the controller key path set to arrangedObjects.
      4. I bound the value property of its first column to the array controller, with controller key path arrangedObjects and model key path 'geniusName'.
      5. I bound the value property of its second column to the array controller, with controller key path arrangedObjects and model key path 'geniusName'.

    After following these steps, when I tried to build and run the project I found an un-populated table view. You can find my code here. Can anyone suggest where I may be wrong? Thanks, Miraaj

  • SQL Query Returning Duplicate Results

    - by Jesse Bunch
    Hi, I've been working out this query for a while now, and I thought I had it where I wanted it, but apparently not. There are two records in the database (orders). The query should return two different rows, but instead returns two rows that have exactly the same values. I think it may be something to do with the GROUP BY or the derived tables I'm using, but my eyes are tired and not seeing the problem. Can any of you help? Thanks in advance.

        SELECT orders.billerID, orders.invoiceDate, orders.txnID, orders.bName,
               orders.bStreet1, orders.bStreet2, orders.bCity, orders.bState,
               orders.bZip, orders.bCountry, orders.sName, orders.sStreet1,
               orders.sStreet2, orders.sCity, orders.sState, orders.sZip,
               orders.sCountry, orders.paymentType, orders.invoiceNotes,
               orders.pFee, orders.shipping, orders.tax, orders.reasonCode,
               orders.txnType, orders.customerID,
               customers.firstName AS firstName,
               customers.lastName AS lastName,
               customers.businessName AS businessName,
               orderStatus.statusName AS orderStatus,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax AS orderTotal,
               IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax
                   - IFNULL(payments.totalPayments, 0.00) AS orderBalance
        FROM orders
        LEFT JOIN customers ON orders.customerID = customers.id
        LEFT JOIN orderStatus ON orders.orderStatus = orderStatus.id
        LEFT JOIN (
            SELECT orderItems.orderID,
                   SUM(orderItems.itemPrice * orderItems.itemQuantity) AS itemTotal
            FROM orderItems
            GROUP BY orderItems.orderID
        ) orderItems ON orderItems.orderID = orders.id
        LEFT JOIN (
            SELECT payments.orderID, SUM(payments.amount) AS totalPayments
            FROM payments
            GROUP BY payments.orderID
        ) payments ON payments.orderID = orders.id
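
    A hedged debugging step rather than a fix: both derived tables are grouped, so they can't fan out, but the plain customers and orderStatus joins can. Exposing the primary key shows whether the two rows are really one order matched twice:

        -- step 1: add the key to the select list
        SELECT orders.id, orders.billerID, ...

        -- step 2: check the plain joins for fan-out
        SELECT orders.id, COUNT(*)
        FROM orders
        LEFT JOIN customers   ON orders.customerID = customers.id
        LEFT JOIN orderStatus ON orders.orderStatus = orderStatus.id
        GROUP BY orders.id
        HAVING COUNT(*) > 1;

    If a count above 1 shows up, one of those tables holds duplicate ids for the same order; adding GROUP BY orders.id to the full query would mask that, but deduplicating the source table is the real fix.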

  • Record Disappeared from Mysql Table, How Can I Find Out What Happened?

    - by Jascha
    I got the fire alarm phone call, AIM messages, and email today from a client stating "The site is down! WTF happened?!" Well, after a little digging, it turns out one of the records in a table had been wiped clean, but without removing the row itself. So I had the representation of data, but a bunch of empty fields (needless to say, I need to write a catch for this into my code). What my real question is: where can I figure out what happened? I've got access to phpMyAdmin and that's about it. I found some access logs in the root directory of my server, but those just tell me the client was in the admin area I built, editing that record. I'd like to know specifically what they did that made all of the data go away (what query was run, etc.). Is it possible without real server admin access? Is there a neat little PHP-to-MySQL class that returns data like this? Thanks in advance. -Jascha
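
    With phpMyAdmin-level access you generally can't see statements that have already run unless logging was enabled, but you can turn logging on now to catch a recurrence. A sketch, assuming MySQL 5.1+ and a user with the SUPER privilege (many shared hosts won't grant this):

        -- log every statement from now on, into a queryable table
        SET GLOBAL log_output  = 'TABLE';
        SET GLOBAL general_log = 'ON';

        -- later, review what touched the record
        SELECT event_time, user_host, argument
        FROM mysql.general_log
        WHERE argument LIKE '%your_table%';

    If the server happens to have the binary log enabled, running mysqlbinlog over the relevant window would also show the UPDATE, but that requires shell access.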

  • Drill-through table does not show correct count when used with a dimension having a parent-child hierarchy

    - by Arun Singhal
    Hi All, I have a dimension with a parent-child hierarchy, as shown in the code block below. The issue I am facing is that if I have a filter on the parent-child dimension, then the drill-through table does not show the filtered data; instead it shows all the data for that dimension. Here is an example:

        <Dimension type="StandardDimension" name="page_type_d" caption="Page Type">
            <Hierarchy name="page_type_h" hasAll="true" allMemberName="all_page_types"
                       allMemberCaption="All Page Types" primaryKey="id">
                <Table name="npg_page_type_view" alias="pt"></Table>
                <Level name="Page Type" column="id" nameColumn="display_name"
                       parentColumn="parent_id" nullParentValue="0" type="Integer"
                       uniqueMembers="true" levelType="Regular" hideMemberIf="Never"
                       caption="Page Type">
                    <Closure parentColumn="parent_id" childColumn="page_type_id">
                        <Table name="dim_page_types_closure"></Table>
                    </Closure>
                </Level>
            </Hierarchy>
        </Dimension>

    Now suppose I have 4 rows in the npg_page_type_view table:

        id    display_name    parent_id
        19    HTML            100
        20    PDF             100
        21    XML             0
        100   Total           0

    And suppose my fact table has the following records:

        id    count
        19    2
        20    3
        21    1

    Following is my analysis view:

        Total (HTML and PDF) - 5
        HTML - 2
        PDF - 3
        XML - 1

    Now if I add a filter (say Total) on this analysis view using the OLAP cube, my analysis view shows the following:

        Total (HTML and PDF) - 5

    Up to this point everything works fine. Now if I click on 5 (to view the drill-through table), it shows me data against all page types, i.e. HTML, PDF, and XML, but as per the filter it should show only HTML and PDF. Is this an existing issue, or am I doing something wrong here? Please help me.

  • Re-use of database object in sub-sonic

    - by cantabilesoftware
    Yet another newbie SubSonic/ActiveRecord question. Suppose I want to insert a couple of records; currently I'm doing this:

        using (var scope = new System.Transactions.TransactionScope())
        {
            // Insert company
            company c = new company();
            c.name = "ACME";
            c.Save();

            // Insert some options
            company_option o = new company_option();
            o.name = "ColorScheme";
            o.value = "Red";
            o.company_id = c.company_id;
            o.Save();

            o = new company_option();
            o.name = "PreferredMode";
            o.value = "Fast";
            o.company_id = c.company_id;
            o.Save();

            scope.Complete();
        }

    Stepping through this code, however, each of the company/company_option constructors goes off and creates a new myappDB object, which just seems wasteful. Is this the recommended approach, or should I be trying to re-use a single DB object - and if so, what's the easiest way to do this?

  • Undefined method `add' on a cucumber step that usually works.

    - by Josiah Kiehl
    I have a path defined:

        when /the admin home\s?page/
          "/admin/"

    I have a scenario that is passing:

        Scenario: Let admins see the admin homepage
          Given "pojo" is logged in
          And "pojo" is an "admin"
          And I am on the admin home page
          Then I should see "Hi there."

    And I have a scenario that is failing:

        Scenario: Review flagged photo
          Given "pojo" is logged in
          And "pojo" is an "admin"
          ...bunch of steps that create stuff in the database...
          And I am on the admin home page
          Then ... the rest of the steps

    The step that fails in the second one is "And I am on the admin home page", which passes just fine in the first scenario. Here's the error I get:

        And I am on the admin home page # features/step_definitions/web_steps.rb:18
          undefined method `add' for {}:Hash (NoMethodError)
          ./app/controllers/admin_controller.rb:13:in `index'
          ./app/controllers/admin_controller.rb:11:in `each'
          ./app/controllers/admin_controller.rb:11:in `index'
          /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
          ./features/step_definitions/web_steps.rb:19:in `/^(?:|I )am on (.+)$/'
          features/admin.feature:52:in `And I am on the admin home page'

    This is very odd... why would it be fine in the first case, and not in the second, where the only difference is a bunch of steps that create records in the db?

    [edit] Here's the add-stuff-to-the-database step:

        Given /^there is a "([^\"]*)" with the following:$/ do |model, table|
          model.constantize.create!(table.rows_hash)
        end

  • Javascript: Error 'Object Required.'

    - by javascripthelp
    The following is the error popup message I get when I click the "Finalize" button on my website:

        Line: 298
        Char: 5
        Error: Object required: 'lobi_c_selected(...)'
        Code: 0
        URL: http://10.128.23.50/i-prostage/AP/w_ap_check_reconciliation.asp?

    Normally, when I click the "Finalize" button, it generates and shows a report in a popup window. However, I get this error message instead. Can any of you help me locate the error in the following source code for the page, which I'm running in IE 6?

        sub cb_finalize
            dim ll_loop, ll_found, lobj_c_selected
            of_SetHourGlass(True)
            rpt_link.innerHTML = ""
            rpt_link.href = ""

            'Process only if at least one record was selected
            if rds1.Recordset.Recordcount > 0 then
                lb_found = false
                if rds1.Recordset.Recordcount = 1 then
                    if c_selected.checked then lb_found = true
                else
                    Set lobj_c_selected = document.all.item("c_selected")
                    for ll_loop = 1 to rds1.Recordset.Recordcount
                        if lobj_c_selected(ll_loop - 1).checked then
                            lb_found = true
                            exit for
                        end if
                    next
                end if
                if not lb_found then
                    msgbox "Please select a record to be posted.", vbInformation, "ProStage Accounting"
                    of_SetHourGlass(False)
                else
                    window.setTimeout "ue_process()", 100, vbscript 'Post Event
                end if
            else
                msgbox "There's no record to be posted." + vbcrlf + "Please select a record to be posted.", vbInformation, "ProStage Accounting"
                of_SetHourGlass(False)
            end if
        end sub

        Sub ue_process
            dim ll_loop, ll_count, ls_ret, ls_trxid, ls_r1

            'Get only selected records
            redim ls_trxid(rds1.Recordset.Recordcount)
            for ll_loop = 1 to rds1.Recordset.Recordcount
                rds1.Recordset.AbsolutePosition = ll_loop
                if not isnull(rds1.Recordset("clrdt")) then
                    'Add to TRXID array if selected
                    ll_count = ll_count + 1
                    ls_trxid(ll_count) = rds1.Recordset("trxid")
                end if
            next

            'Process reconciliation
            rds1.Recordset.MarshalOptions = 1
            ls_ret = iBO_Update.of_update_1(is_dbsrc, rds1.Recordset, "GLTRX", is_sql)
            if ls_ret <> "1" then
                msgbox "Update Failed ! " + ls_ret, vbExclamation + vbOKonly, document.title
                of_SetHourGlass(False)
            else
                'Display Posting Journal & clear screen
                ue_posting_journal ls_trxid, ll_count
                Set rds1.SourceRecordset = iBO_Company.of_validate(is_dbsrc, "SELECT 1 FROM DUAL WHERE 1 = 2")
                ib_query = false 'Not to process RetrieveEnd
            end if
        End Sub

        Sub ue_posting_journal(as_trxid, al_count)
            dim ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq
            of_setreport() 'Start service

            'Prepare arguments for report in RPTMSTR table
            for ll_loop = 1 to al_count + 1
                select case ll_loop
                    case 1 'Range displayed as report title
                        ll_argseq = 800
                        ls_argtyp = null
                        ls_argmnt = "st_title.text = 'Bank: " + bnkid_name.value + ", As of Date: " + _
                                    of_date_stringtodate(id_trxdt) + "'"
                        ll_sargseq = 0
                    case else 'TRXID array
                        ll_argseq = 1
                        ll_sargseq = ll_loop - 1
                        ls_argtyp = "S"
                        ls_argmnt = as_trxid(ll_loop - 1)
                end select
                of_report_register_array "d_rpt_ap_check_reconciliation_register", ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq
            next

            of_report_process "d_rpt_ap_check_reconciliation_register", true, true 'Display report
            of_sethourglass(False)
        End Sub

  • date comparisons in Rails

    - by aressidi
    Hi there, I'm having trouble with a date comparison in a named scope. I'm trying to determine if an event is current based on its start and end dates. Here's the named scope I'm using, which kind of works, though not for events that have the same start and end date:

        named_scope :date_current, :conditions => ["Date(start_date) <= ? AND Date(end_date) >= ?", Time.now, Time.now]

    This returns the following record, though it should return two records, not one...

        >> Event.date_current
        => [#<Event id: 2161, start_date: "2010-02-15 00:00:00", end_date: "2010-02-21 00:00:00", ...]

    What it's not returning is this as well:

        >> Event.find(:last)
        => #<Event id: 2671, start_date: "2010-02-16 00:00:00", end_date: "2010-02-16 00:00:00", ...>

    The server time seems to be in UTC, and I presume the entries are being stored in the DB in UTC. Any ideas as to what I'm doing wrong or what to try? Thanks!
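
    A guess at the cause: the comparison mixes types. Date(end_date) yields a date, but the bound value is a full Time.now timestamp, so an event that ends today fails "Date(end_date) >= '2010-02-16 14:05:00'" - its date is less than that timestamp. Comparing dates to dates sidesteps it, and wrapping the scope in a lambda keeps "today" from being frozen at class-load time:

        named_scope :date_current, lambda {
          today = Date.today  # or Time.zone.today if app and DB zones differ
          { :conditions => ["Date(start_date) <= ? AND Date(end_date) >= ?", today, today] }
        }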

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and the server was set up with one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information on the Internet about that; however, there seems to be very little available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use the latter instead of ALTER TABLE, to gain some more opportunities to speed things up). The schema change we are planning is adding an integer column to all tables and making it the primary key in place of the current one. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
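
    For a one-off window, the settings people usually raise are InnoDB's buffer pool and redo log sizing - none of which are dynamic in 5.0, so they mean a my.cnf edit and a restart. Illustrative values for a dedicated 16G box during the rebuild, not a tuned recommendation:

        [mysqld]
        # give InnoDB most of the RAM for the duration of the rebuild
        innodb_buffer_pool_size = 10240M
        # bigger redo logs mean fewer checkpoints during the write burst;
        # changing this requires a clean shutdown and moving the old ib_logfile* aside
        innodb_log_file_size = 256M
        innodb_log_buffer_size = 32M
        # relax durability while rebuilding; a crash means redoing the job, not losing live data
        innodb_flush_log_at_trx_commit = 2

    Since everything lives in one shared ibdata file, also budget disk space: both ALTER TABLE and INSERT INTO .. SELECT build a complete copy of the table before the old one can be dropped, and the shared tablespace never shrinks back afterwards.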

  • ado.net Concurrency violation

    - by Bicubic
    My first time using ADO.NET. I'm trying to make a database of users. First I populate my DataSet:

        adapter.AcceptChangesDuringFill = true;
        adapter.AcceptChangesDuringUpdate = true;
        adapter.Fill(dataset);

    To create a user:

        User user = new User();
        user.datarow = dataset.Users.NewUsersRow();
        user.Name = username;
        user.PasswordHash = GetHash(password);
        user.Rights = UserRights.None;
        users.Add(user);
        dataset.Users.AddUsersRow(user.datarow);
        adapter.Update(dataset);

    When a user property is modified:

        adapter.Update(dataset);

    Creation by itself is fine. If I take an existing user and make multiple changes, fine. Multiple creations in a row, fine. Creation followed by a property change gets me this:

        Concurrency violation: the UpdateCommand affected 0 of the expected 1 records.

    Any ideas?
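
    A frequent cause of exactly this sequence (insert OK, first subsequent update violates): after the INSERT, the DataRow still holds its locally generated placeholder key, so the UPDATE's WHERE clause matches zero database rows. Wizard-generated adapters usually handle this; with hand-built commands the fix is to have the insert echo the real row back. A sketch assuming SQL Server and illustrative table/column names:

        // append a SELECT returning the inserted row, and fold the first
        // returned record back into the DataRow after the insert runs
        adapter.InsertCommand.CommandText =
            "INSERT INTO Users (Name, PasswordHash, Rights) " +
            "VALUES (@Name, @PasswordHash, @Rights); " +
            "SELECT Id, Name, PasswordHash, Rights FROM Users " +
            "WHERE Id = SCOPE_IDENTITY();";
        adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;

    The same idea applies if a timestamp/rowversion column is involved: whatever the UpdateCommand's WHERE clause checks must be refreshed after every write.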

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF), audio (e.g. MP3), or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

        DataSource
            Id (PK)
            Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages"; an example for audio would be "bit rate"; an example for video would be "duration". Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way:

        DataSourceAttribute
            Id (PK)
            DataSourceId (FK)
            Name
            Value

    Thus, I would have records like these:

        DataSource->Id = 1
        DataSource->Filename = 'mydoc.pdf'

        DataSource->Id = 2
        DataSource->Filename = 'mysong.mp3'

        DataSource->Id = 3
        DataSource->Filename = 'myvideo.avi'

        DataSourceAttribute->Id = 1
        DataSourceAttribute->DataSourceId = 1
        DataSourceAttribute->Name = 'TotalPages'
        DataSourceAttribute->Value = '10'

        DataSourceAttribute->Id = 2
        DataSourceAttribute->DataSourceId = 2
        DataSourceAttribute->Name = 'BitRate'
        DataSourceAttribute->Value = '16'

        DataSourceAttribute->Id = 3
        DataSourceAttribute->DataSourceId = 3
        DataSourceAttribute->Name = 'Duration'
        DataSourceAttribute->Value = '1:32'

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

        Filename, TotalPages
        'mydoc.pdf', '10'
        'myotherdoc.pdf', '23'
        ...

    The JOINs needed to produce the above result are just too costly. How should I address this problem?
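
    Two usual escapes from this attribute-table JOIN pileup: pivot the hot attributes in a single pass instead of one self-join per attribute, or promote them into typed subtype tables (DocumentInfo, AudioInfo, VideoInfo - names illustrative). The pivot version of the PDF/page-count query looks like this:

        SELECT ds.Filename,
               MAX(CASE WHEN a.Name = 'TotalPages' THEN a.Value END) AS TotalPages
        FROM DataSource ds
        LEFT JOIN DataSourceAttribute a ON a.DataSourceId = ds.Id
        GROUP BY ds.Id, ds.Filename;

    One scan of the attribute table serves any number of pivoted columns. Subtype tables go further: each attribute gets a real type and its own index, at the cost of a schema change whenever a new attribute appears.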

  • error C2512 in precompiled header file?

    - by SoloMael
    I'm having a ridiculously strange problem. When I try to run the program below, there's an error message that says: "error C2512: 'Record' : no appropriate default constructor available". And when I double-click it, it directs me to a precompiled, read-only header file named "xmemory0". Do they expect me to change a read-only file? Here's the segment of code in the file it directs me to:

        void construct(_Ty *_Ptr)
            {   // default construct object at _Ptr
            ::new ((void *)_Ptr) _Ty(); // directs me to this line
            }

    Here's the program:

        #include <iostream>
        #include <vector>
        #include <string>
        using namespace std;

        const int NG = 4; // number of scores

        struct Record
        {
            string name; // student name
            int scores[NG];
            double average;

            // Calculate the average when the scores are known
            Record(int s[], double a)
            {
                double sum = 0;
                for (int count = 0; count != NG; count++)
                {
                    scores[count] = s[count];
                    sum += scores[count];
                }
                average = a;
                average = sum / NG;
            }
        };

        int main()
        {
            // Names of the class
            string names[] = {"Amy Adams", "Bob Barr", "Carla Carr",
                              "Dan Dobbs", "Elena Evans"};

            // exam scores according to each student
            int exams[][NG] = { {98, 87, 93, 88},
                                {78, 86, 82, 91},
                                {66, 71, 85, 94},
                                {72, 63, 77, 69},
                                {91, 83, 76, 60} };

            vector<Record> records(5);

            return 0;
        }
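
    The read-only file is a red herring: the compiler reports C2512 at the point where the default construction is instantiated (inside vector's plumbing in xmemory0), but the cause is in main. vector<Record> records(5) must default-construct five Records, and declaring Record(int s[], double a) suppresses the compiler-generated default constructor. Two sketches of the usual fixes:

        // fix 1: build the vector through the existing constructor
        vector<Record> records;
        records.reserve(5);
        for (int i = 0; i < 5; ++i)
            records.push_back(Record(exams[i], 0.0));

        // fix 2 (alternative): give Record a default constructor
        // Record() : scores(), average(0.0) {}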

  • Please help optimizing a long running query (left outer join, with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

        SELECT d.bn, d.4700, d.4500, ..., p.`Activity Description`
        FROM (
            SELECT temp.bn, temp.4700, temp.4500, ....
            FROM `tdata` temp
            GROUP BY temp.bn
            HAVING (COUNT(temp.bn) = 1)
        ) d
        LEFT OUTER JOIN (
            SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description`
            FROM `pdata` temp2
            GROUP BY temp2.bn
        ) p ON p.bn = d.bn;

    The ... represents other fields that aren't really important to solving this problem. The issue is in the second subquery: it is not using the index I have created, and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily, but an EXPLAIN on the second shows 'Using temporary; Using filesort'. Please see the indexes I have created in the table CREATE statements below. Can anyone help me optimize this?

    By way of quick explanation, the first subquery is meant to select only records that have unique bn's; the second, while it looks a bit wacky (with the MAX function there, not used in the result set), is making sure that only one record from the right part of the join is included in the result set. My table CREATE statements are:

        CREATE TABLE `tdata` (
            `BN` varchar(15) DEFAULT NULL,
            `4000` varchar(3) DEFAULT NULL,
            `5800` varchar(3) DEFAULT NULL,
            ....
            KEY `BN` (`BN`),
            KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

        CREATE TABLE `pdata` (
            `BN` varchar(15) DEFAULT NULL,
            `FPE` datetime DEFAULT NULL,
            `Activity Description` text,
            ....
            KEY `BN` (`BN`),
            KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    Thanks!
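
    Part of this may not be fixable: a GROUP BY over a derived table has no index to lean on, so 'Using temporary; Using filesort' is expected there, and a prefix index over a TEXT column can't fully cover it anyway. The second subquery is also shaky on its own terms - it selects Activity Description ungrouped, so MySQL returns an arbitrary row's value, not necessarily the one belonging to max(FPE). A sketch that fetches the max-FPE row explicitly:

        SELECT p1.bn, p1.FPE AS max_fpe, p1.`Activity Description`
        FROM `pdata` p1
        JOIN (
            SELECT bn, MAX(FPE) AS max_fpe
            FROM `pdata`
            GROUP BY bn          -- can be satisfied from the (BN, FPE, ...) index
        ) m ON m.bn = p1.bn AND m.max_fpe = p1.FPE;

    Joining that to the deduplicated tdata rows keeps the outer query's shape, and the inner GROUP BY now touches only indexed columns.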

  • Addresses stored in SQL server have many small variations(errors)

    - by MAW74656
    I have a table in my database which stores packing slips and their information. I'm trying to query that table and get each unique address. I've come close, but I still have many near misses, and I'm looking for a way to exclude these near duplicates from my SELECT. Sample data:

        CompanyCode  CompanyName                     Addr1                 City       State  Zip
        10033        UNITED DIE CUTTING & FINISHIN   3610 HAMILTON AVE     CLEVELAND  Ohio   44114
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVE     CLEVELAND  Ohio   44114
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVE.    CLEVELAND  Ohio   44114
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVENUE  CLEVELAND  Ohio   44114
        10033        UNITED DIECUTTING & FINISHING   3610 HAMILTON AVE     CLEVELAND  Ohio   44144
        10033        UNITED FINISHING                3610 HAMILTON AVE     CLEVLAND   Ohio   44114
        10033        UNITED FINISHING & DIE CUTTING  3610 HAMILTON AVE     CLEVELAND  Ohio   44114

    And all I want is one record. Is there some way I can get the "average" record? Meaning, if most of the records say CLEVELAND instead of CLEVLAND, I want my one record to say CLEVELAND. Is there any way to pare this data down to what I'm looking for? Desired output:

        CompanyCode  CompanyName                     Addr1              City       State  Zip
        10033        UNITED DIE CUTTING & FINISHING  3610 HAMILTON AVE  CLEVELAND  Ohio   44114
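
    The "average" of a text column is usually taken per column as the most frequent value (the mode). A sketch for one column, assuming SQL Server 2005+ and an illustrative table name; repeat per column, or run it once to materialize a cleaned row per company:

        SELECT CompanyCode, City
        FROM (
            SELECT CompanyCode, City,
                   ROW_NUMBER() OVER (PARTITION BY CompanyCode
                                      ORDER BY COUNT(*) DESC) AS rn
            FROM PackingSlips
            GROUP BY CompanyCode, City
        ) t
        WHERE rn = 1;

    A per-column mode still won't merge AVE with AVENUE or rescue a majority typo, so real cleansing usually adds normalization of abbreviations and fuzzy matching (SOUNDEX, or an address-standardization service) before picking winners.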

  • Is there a standard for storing normalized phone numbers in a database?

    - by Eric Z Beard
    What is a good data structure for storing phone numbers in database fields? I'm looking for something that is flexible enough to handle international numbers, and also something that allows the various parts of the number to be queried efficiently.

    [Edit] Just to clarify the use case here: I currently store numbers in a single varchar field, and I leave them just as the customer entered them. Then, when the number is needed by code, I normalize it. The problem is that if I want to query a few million rows to find matching phone numbers, it involves a function, like:

        WHERE dbo.f_normalizenum(num1) = dbo.f_normalizenum(num2)

    which is terribly inefficient. Also, queries that look for things like the area code become extremely tricky when it's just a single varchar field.

    [Edit] People have made lots of good suggestions here, thanks! As an update, here is what I'm doing now: I still store numbers exactly as they were entered, in a varchar field, but instead of normalizing things at query time, I have a trigger that does all that work as records are inserted or updated. So I have ints or bigints for any parts that I need to query, and those fields are indexed to make queries run faster.
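
    For anyone wanting the shape of that trigger, here's a sketch with illustrative table and column names, reusing the dbo.f_normalizenum function mentioned above (SQL Server syntax):

        -- keep the raw entry; maintain an indexed, digits-only shadow column
        ALTER TABLE PhoneNumbers ADD NormalizedNum BIGINT NULL;
        CREATE INDEX IX_PhoneNumbers_NormalizedNum ON PhoneNumbers (NormalizedNum);
        GO

        CREATE TRIGGER trg_PhoneNumbers_Normalize
        ON PhoneNumbers AFTER INSERT, UPDATE
        AS
        BEGIN
            UPDATE p
            SET NormalizedNum = dbo.f_normalizenum(i.RawNumber)
            FROM PhoneNumbers p
            JOIN inserted i ON i.Id = p.Id;
        END;

    Lookups then hit the indexed column directly (WHERE NormalizedNum = @num) instead of calling the function across millions of rows.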
