Search Results

Search found 6399 results on 256 pages for 'record'.


  • How to delete a ListView record from the database

    - by Bud33
    I have a problem deleting data from a ListView: I can remove the selected record from the ListView itself, but the selected record is not deleted from the database. Here is my source code:

        Private _updateinputalltrans As Boolean

        Private Sub btndelete_Click(sender As System.Object, e As System.EventArgs) Handles btndelete.Click
            With Me.listviewpos.SelectedItem
                .Remove()
            End With
            MessageBox.Show("Are you sure delete this record?", "Confirmation", _
                            MessageBoxButtons.YesNo, MessageBoxIcon.Exclamation, _
                            New EventHandler(AddressOf DeleteData))
        End Sub

        Private Sub DeleteData(ByVal sender As Object, ByVal e As EventArgs)
            Dim conn As New Connection(Connectiondb)
            If Me.updateinputalltrans = False Then
                If Me.listviewpos.Items.Count > 0 Then
                    For Each del As ListViewItem In listviewpos.Items
                        conn.delete_dtpospart(del.Text)
                    Next
                End If
            End If
        End Sub

    delete_dtpospart is a method on the Connection class that deletes from the database through a stored procedure.
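
    Not the poster's code, just a minimal sketch of the usual ordering in any UI-plus-database flow: confirm first, delete the selected record from the database by its key, and only remove it from the on-screen list once the database delete succeeds. (Python with sqlite3; the table, column, and list names are hypothetical stand-ins for the form's ListView and stored procedure.)

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE pospart (part_id TEXT PRIMARY KEY)")
        conn.execute("INSERT INTO pospart VALUES ('A100')")
        listview_items = ["A100"]            # stands in for listviewpos

        def delete_selected(selected, confirmed):
            if not confirmed:                # ask *before* touching anything
                return
            conn.execute("DELETE FROM pospart WHERE part_id = ?", (selected,))
            conn.commit()
            listview_items.remove(selected)  # UI last, after the DB delete succeeded

        delete_selected("A100", confirmed=True)
        print(listview_items, conn.execute("SELECT * FROM pospart").fetchall())  # [] []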


  • How to handle unassigned records

    - by Mico
    I have this PHP page where the user can select and unselect items. The interface looks like this: [screenshot] Now I'm using this code when the user hits the Save Changes button:

        foreach ($value as $al_id) { // al_id is actually a location id
            // check if a record exists;
            // if the location is already assigned, leave it as is
            $assigned_count = $this->AssignedLoc->checkIfAssigned($tab_user_id, $al_id);
            if ($assigned_count == 0) {
                // if not, insert the new record
                $this->insertAssigned($tab_user_id, $company_id, $al_id);
            }
        }

    Now my question is: how do I delete the unassigned locations? For example, in the screenshot above there are four assigned locations; if I unassign "Mercury Morong" and "GP Hagonoy", only two must remain. What are the possible solutions using PHP? Thanks for any help!
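
    A minimal sketch of the usual approach, with Python and sqlite3 standing in for PHP and MySQL (assigned_locations(user_id, location_id) is a hypothetical version of the poster's table): load the stored set, diff it against the submitted checkbox set, insert the additions, and delete the removals.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE assigned_locations
                        (user_id INTEGER, location_id INTEGER)""")

        def save_assignments(user_id, submitted_ids):
            stored = {row[0] for row in conn.execute(
                "SELECT location_id FROM assigned_locations WHERE user_id = ?",
                (user_id,))}
            submitted = set(submitted_ids)
            for loc in submitted - stored:   # newly checked: insert
                conn.execute("INSERT INTO assigned_locations VALUES (?, ?)",
                             (user_id, loc))
            for loc in stored - submitted:   # unchecked: delete
                conn.execute("DELETE FROM assigned_locations "
                             "WHERE user_id = ? AND location_id = ?",
                             (user_id, loc))
            conn.commit()

        save_assignments(7, [1, 2, 3, 4])    # four assigned
        save_assignments(7, [1, 2])          # two unchecked -> two rows removed
        print(conn.execute("SELECT COUNT(*) FROM assigned_locations").fetchone())  # (2,)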


  • Reversing BIT-type data in MS SQL

    - by Milacay
    I have a column using a BIT type (1/0). Some records are set to 1 and some are set to 0, and those record flags need to be reversed: I want all records with 1 set to 0, and all records with 0 set to 1. If I run

        UPDATE Table1 SET Flag = 1 WHERE Flag = 0

    first, then I'm afraid all record flags will now be 1, and I won't be able to tell which ones had flag = 0. Any suggestions? Thanks!
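
    The two-pass problem disappears if both directions are flipped in a single statement, since an UPDATE evaluates against the pre-update values; in T-SQL that is UPDATE Table1 SET Flag = 1 - Flag. A minimal demonstration with sqlite3 standing in for SQL Server:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Table1 (Flag INTEGER)")
        conn.executemany("INSERT INTO Table1 VALUES (?)", [(0,), (1,), (1,)])
        conn.execute("UPDATE Table1 SET Flag = 1 - Flag")  # 0 -> 1 and 1 -> 0 in one pass
        print([r[0] for r in conn.execute("SELECT Flag FROM Table1")])  # [1, 0, 0]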


  • Removing groups of similar records in MySQL query

    - by user1182155
    I'm trying to wrap my head around this (it may be simple; it's been a long day!). I have a database that sometimes holds multiple similar records, e.g.:

        Apples   2008-09-03
        Apples   2012-01-01
        Apples   2013-10-24
        Oranges  2012-01-04

    What I need is a query that shows only records that haven't been updated today. So in this case, since Apples has an entry that was updated today, none of the Apples records should appear in the results; Oranges should be the only record returned. I have a query similar to this:

        SELECT fruit FROM fruitnames WHERE date < CURDATE()

    which removes the record that was updated today, but (obviously) keeps the other Apples records. How would I remove those results as well?
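
    A sketch of one way to do it: group by fruit and keep only the groups whose newest date is before today, which drops every row for any fruit touched today (in MySQL the HAVING clause would compare against CURDATE()). Demonstrated with sqlite3:

        import sqlite3
        from datetime import date

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE fruitnames (fruit TEXT, date TEXT)")
        conn.executemany("INSERT INTO fruitnames VALUES (?, ?)", [
            ("Apples", "2008-09-03"),
            ("Apples", date.today().isoformat()),  # the entry updated today
            ("Oranges", "2012-01-04"),
        ])
        rows = conn.execute(
            "SELECT fruit FROM fruitnames "
            "GROUP BY fruit HAVING MAX(date) < DATE('now')").fetchall()
        print(rows)  # [('Oranges',)]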


  • Need help with jQuery sorting

    - by Klerk
    I have a column containing multiple 'records' (each a div). Each record has a bunch of fields (each a span whose id is the field name). I want to allow the user to sort all the records based on a field. I also want the field that was sorted on to be moved to the beginning of each record. So I came up with this, but it's really slow for large sets. Not sure what the best way to do this is. Any ideas?

        $(".col1 div").sort(function (a, b) {
            if ($(a).children("." + field).text() > $(b).children("." + field).text())
                return -1;
            else
                return 1;
        }).appendTo(".col1");
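
    The comparator re-runs the $(a).children("." + field).text() lookup on every comparison, which costs O(n log n) DOM queries. The usual fix is decorate-sort-undecorate: extract each element's key once, sort the keyed pairs, then reorder. A language-agnostic sketch in Python, where get_key stands in for the jQuery text lookup:

        def sort_once(elements, get_key, descending=True):
            # one expensive key lookup per element, not one per comparison
            keyed = [(get_key(el), el) for el in elements]
            keyed.sort(key=lambda kv: kv[0], reverse=descending)
            return [el for _, el in keyed]

        records = [{"name": "pear"}, {"name": "apple"}, {"name": "plum"}]
        print(sort_once(records, lambda r: r["name"]))  # plum, pear, apple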


  • Formatting a month dropdown list and converting the selection to a date

    - by dinra
    I need a dropdown list that shows months from the current month and year (January 2010) through January 2011, plus an additional "January 2011 +" entry. But I want to save the selection in the database in 01/01/2010 format. Also, if the user selects the current month, the value stored should be GETDATE(); for any other month it should be the first day of that month, e.g. 02/01/2010. How do I do this in aspx.vb (.NET)? I wrote a function to populate the dropdown list:

        Public Sub Load_dates(ByRef DDL As System.Web.UI.WebControls.DropDownList)
            Try
                Dim i As Integer
                Dim j As Integer
                For i = Now.Year To Now.Year
                    For j = Now.Month To Now.Month + 11
                        DDL.Items.Add((j.ToString) + " " + (i.ToString))
                    Next
                Next
            Catch ex As Exception
                ReportError(ex)
            End Try
        End Sub

    This function only shows numbers like "01 2010" and "02 2010". How can I format it to show "January 2010", "February 2010", and so on? Please advise.
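
    For the formatting half, building a date from (year, month, 1) and formatting it with a month-name pattern gives "January 2010" (in .NET that pattern is "MMMM yyyy"). A minimal sketch of the label/value pairing in Python; the trailing "January 2011 +" entry and the GETDATE() special case for the current month are left to the form logic:

        from datetime import date

        def month_choices(count=12):
            today = date.today()
            year, month = today.year, today.month
            choices = []
            for _ in range(count):
                first = date(year, month, 1)
                choices.append((first.strftime("%B %Y"),     # display: "January 2010"
                                first.strftime("%m/%d/%Y"))) # stored:  "01/01/2010"
                month += 1
                if month > 12:
                    month, year = 1, year + 1
            return choices

        for label, value in month_choices(3):
            print(label, "->", value)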


  • Model association changes in production environment, specifically converting a model to polymorphic?

    - by dustmoo
    Hi everyone, I was hoping I could get feedback on major changes to how a model works in an app that is already in production. In my case I have a model Record that has_many PhoneNumbers. Currently it is a typical has_many / belongs_to association, with a Record having many PhoneNumbers. I now have a feature for adding temporary, user-generated records, and these records will have PhoneNumbers too. I 'could' just add a user_record_id to the PhoneNumber model, but wouldn't it be better for this to be a polymorphic association? And if so, given a change in how a model associates, how in the heck would I update the production database without breaking everything? Anyway, just looking for best practices in a situation like this. Thanks!


  • What in-memory database technology can maintain realtime materialized views?

    - by KA100
    What I'm looking for is something like materialized views on the front end: views that show my data in different ways without full recalculation. Say I have a stock watcher with many front-end views and dashboards, some based on aggregation, ordering, or just filtering, with the criteria defined in realtime by the user. I receive online record updates from a web service, and it's not like a data warehouse: any single record can be updated at any time, and that actually happens every second. Is there any technology that can help me create something like a materialized view that updates without a full recalculation every time the data changes? Thank you.
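
    Whatever product ends up doing it, the underlying technique is incremental view maintenance: keep running aggregates and apply a delta when a record changes instead of recomputing the view. A minimal sketch of the idea, with hypothetical ticker data:

        from collections import defaultdict

        class RunningSumView:
            """Sum of `value` grouped by `key`, maintained incrementally."""
            def __init__(self):
                self.current = {}               # record_id -> (key, value)
                self.sums = defaultdict(float)  # key -> aggregated sum

            def upsert(self, record_id, key, value):
                if record_id in self.current:   # back out the old contribution
                    old_key, old_value = self.current[record_id]
                    self.sums[old_key] -= old_value
                self.current[record_id] = (key, value)
                self.sums[key] += value         # apply the new contribution

        view = RunningSumView()
        view.upsert("r1", "AAPL", 10)
        view.upsert("r2", "AAPL", 5)
        view.upsert("r1", "AAPL", 7)   # a record updated in place
        print(dict(view.sums))         # {'AAPL': 12.0}, no full recalculation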


  • Trying to link a domain to an IP

    - by user248959
    Hi, I have registered mydomain.com and now I want to point it at my hosting account's IP. The DNS editor shows two fields: Name and Address. In Address I wrote the IP the domain should resolve to, and in Name I wrote mydomain.com. After submitting the form, the page shows this line:

        Name                    Type  Record
        mydomain.mydomain.com.  A     173.203.58.251

    I expected it to show this:

        Name          Type  Record
        mydomain.com. A     173.203.58.251

    Is that OK, or am I doing something wrong? Regards, Javi


  • Pattern matching against Scala Map type

    - by Tom Morris
    Imagine I have a Map[String, String] in Scala. I want to match against the full set of key-value pairings in the map. Something like this ought to be possible:

        val record = Map("amenity" -> "restaurant", "cuisine" -> "chinese", "name" -> "Golden Palace")

        record match {
          case Map("amenity" -> "restaurant", "cuisine" -> "chinese") => "a Chinese restaurant"
          case Map("amenity" -> "restaurant", "cuisine" -> "italian") => "an Italian restaurant"
          case Map("amenity" -> "restaurant") => "some other restaurant"
          case _ => "something else entirely"
        }

    The compiler complains thusly:

        error: value Map is not a case class constructor, nor does it have an unapply/unapplySeq method

    What currently is the best way to pattern match for key-value combinations in a Map?
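
    For comparison, Python 3.10+ structural pattern matching supports this shape directly: mapping patterns match partially, ignoring extra keys such as "name", which is exactly the behaviour the Scala snippet is after.

        record = {"amenity": "restaurant", "cuisine": "chinese", "name": "Golden Palace"}

        match record:
            case {"amenity": "restaurant", "cuisine": "chinese"}:
                print("a Chinese restaurant")
            case {"amenity": "restaurant", "cuisine": "italian"}:
                print("an Italian restaurant")
            case {"amenity": "restaurant"}:
                print("some other restaurant")
            case _:
                print("something else entirely")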


  • Where to store users' visited pages?

    - by kofto4ka
    Hi there. I have a project with posts, for example, and the task is this: I must show each user their last visit to a post. My solution: every time a user visits a new (for them) topic, I create a new record in a visits table with the structure id, user_id, post_id, last_visit. The visits table now has ~14,000,000 records and is still growing every day. Maybe my solution isn't optimal and there is another way to store user visits? It's important to save every visit as a standalone record, because I also have a feature that selects and uses a user's visits, and I can't purge this table because the data could be needed a month or a year later. How could I optimize this situation?
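
    If "last visit per post" were enough, the standard alternative is one row per (user, post) pair kept fresh with an upsert, so the table grows with distinct pairs rather than with raw visits; the poster's requirement to keep every visit rules this out unless the full history moves to a separate archive table. A sketch with sqlite3 (upsert syntax needs SQLite 3.24+; MySQL's equivalent is INSERT ... ON DUPLICATE KEY UPDATE):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE visits (
            user_id INTEGER, post_id INTEGER, last_visit TEXT,
            UNIQUE (user_id, post_id))""")

        def record_visit(user_id, post_id):
            conn.execute(
                "INSERT INTO visits (user_id, post_id, last_visit) "
                "VALUES (?, ?, DATETIME('now')) "
                "ON CONFLICT (user_id, post_id) "
                "DO UPDATE SET last_visit = excluded.last_visit",
                (user_id, post_id))

        record_visit(1, 42)
        record_visit(1, 42)   # same pair: timestamp updated, no new row
        print(conn.execute("SELECT COUNT(*) FROM visits").fetchone())  # (1,)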


  • Problem with Nulls and an UPDATE statement

    - by Dave
        UPDATE TableA
        SET Value = a.Value * b.AnotherValue
        FROM TableA AS a
        INNER JOIN TableB AS b
        WHERE (Condition is true);

    Here is the problem: the Value field in TableA does not allow nulls, so if the calculation a.Value * b.AnotherValue yields a null, an error is thrown. Now the question: is there any way to tell the UPDATE to skip the SET when the result of the calculation is null, and to delete the record rather than update it? This UPDATE is intended to update hundreds of records at a time but will fail if a single null is encountered. Also, please note that using the ISNULL() function to set the Value to zero is not acceptable; I would like the record to be dropped if a null is encountered. Many thanks in advance for any help rendered.
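
    A single UPDATE cannot delete rows, so the usual answer is two statements in one transaction: DELETE the rows whose computed value would be null, then UPDATE the rest. A minimal sketch with sqlite3 and a hypothetical id join standing in for the elided condition:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE TableA (id INTEGER, Value REAL NOT NULL);
            CREATE TABLE TableB (id INTEGER, AnotherValue REAL);
            INSERT INTO TableA VALUES (1, 2.0), (2, 3.0);
            INSERT INTO TableB VALUES (1, 10.0), (2, NULL);
        """)
        with conn:  # one transaction: either both statements apply or neither
            conn.execute("""DELETE FROM TableA WHERE id IN
                            (SELECT a.id FROM TableA a JOIN TableB b ON a.id = b.id
                             WHERE b.AnotherValue IS NULL)""")
            conn.execute("""UPDATE TableA SET Value = Value *
                            (SELECT b.AnotherValue FROM TableB b WHERE b.id = TableA.id)
                            WHERE id IN (SELECT id FROM TableB
                                         WHERE AnotherValue IS NOT NULL)""")
        print(conn.execute("SELECT * FROM TableA").fetchall())  # [(1, 20.0)]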


  • Physical storage of data in Access 2007

    - by ste
    I've been trying to estimate the size of an Access table with a certain number of records. It has 4 Longs (4 bytes each) and a Currency (8 bytes). In theory: 1 record = 24 bytes, so 500,000 records = ~11.5 MB. However, the accdb file (even after compacting) increases by almost 30 MB (~61 bytes per record). A few extra bytes for padding wouldn't be so bad, but 2.5x seems a bit excessive, even for Microsoft bloat. What's with the discrepancy? The four Longs form a compound key; would that matter?


  • Locking a detail view if a user is editing the item...

    - by BenTheDesigner
    Hi All, I am developing a user manager which must control access to the detail view of editable items. At present, when a user clicks 'edit', the application queries the link table to check whether someone is already editing that page; if not, it allows access to the page and inserts a record into the link table, preventing another user from editing the same page at the same time. My question is: what would be the best way to handle removing those records if, say, a user exits the browser without saving, so that no action fires to remove the record? I have a couple of ideas but would like other input before I decide. BenTheDesigner
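
    A sketch of the common answer: treat the lock as a lease. Store who locked the record and when, have the edit page refresh the timestamp periodically (a heartbeat), and let anyone take over a lock older than some timeout, so abandoned browser sessions expire on their own. A hypothetical in-memory version of the link table:

        import time

        LOCK_TIMEOUT = 15 * 60  # seconds

        locks = {}  # page_id -> (user_id, locked_at); stands in for the link table

        def try_acquire(page_id, user_id, now=None):
            now = now or time.time()
            holder = locks.get(page_id)
            if holder and holder[0] != user_id and now - holder[1] < LOCK_TIMEOUT:
                return False                   # someone else holds a live lease
            locks[page_id] = (user_id, now)    # free, already ours, or expired
            return True

        print(try_acquire("page-9", "alice"))  # True: lock taken
        print(try_acquire("page-9", "bob"))    # False: lease still live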


  • Outputting audio stream into microphone

    - by Brap
    Hey everyone. Is there a way to output audio from my program and redirect that stream to the system's microphone input 'layer'? I understand this might require some low-level calls being P/Invoked, but are there any articles that might help me? For example, if I were to run my application's output audio stream into Windows Sound Recorder, it would think that the audio is coming from a microphone and thus record it. I don't want to record a stream, just output it to the device's microphone input. Thanks for any ideas.


  • The big last_insert_id() problem, again.

    - by wretrOvian
    Note - this follows my question here: http://stackoverflow.com/questions/2983685/jdbc-does-the-connection-break-if-i-lose-reference-to-the-connection-object Now I have created a class so I can deal with JDBC easily in the rest of my code:

        public class Functions {
            private String DB_SERVER = "";
            private String DB_NAME = "test";
            private String DB_USERNAME = "root";
            private String DB_PASSWORD = "password";

            public Connection con;
            public PreparedStatement ps;
            public ResultSet rs;
            public ResultSetMetaData rsmd;

            public void connect() throws java.io.FileNotFoundException,
                    java.io.IOException, SQLException, Exception {
                String[] dbParms = Parameters.load();
                DB_SERVER = dbParms[0];
                DB_NAME = dbParms[1];
                DB_USERNAME = dbParms[2];
                DB_PASSWORD = dbParms[3];

                // Connect.
                Class.forName("com.mysql.jdbc.Driver").newInstance();
                con = DriverManager.getConnection(
                        "jdbc:mysql://" + DB_SERVER + "/" + DB_NAME,
                        DB_USERNAME, DB_PASSWORD);
            }

            public void disconnect() throws SQLException {
                // Close.
                con.close();
            }
        }

    As seen, Parameters.load() refreshes the connection parameters from a file every time, so that any changes to them are applied on the next connection. An example of this class in action:

        public static void add(String NAME) throws java.io.FileNotFoundException,
                java.io.IOException, SQLException, Exception {
            Functions dbf = new Functions();
            dbf.connect();

            String query = "INSERT INTO " + TABLE_NAME + "(" + "NAME" + ") VALUES(?)";
            PreparedStatement ps = dbf.con.prepareStatement(query);
            ps.setString(1, NAME);
            ps.executeUpdate();

            dbf.disconnect();
        }

    Now here is the problem: to add a record to the table above, the add() method opens a connection, adds the record, and then calls disconnect(). What if I want the ID of the inserted record after calling add(), like this:

        Department.add("new dept");
        int ID = getlastID();

    Isn't it possible that another add() was called between those two statements?
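
    The per-connection nature of generated keys is what makes this safe: in JDBC you would prepare the statement with Statement.RETURN_GENERATED_KEYS and read ps.getGeneratedKeys() before disconnect(), and MySQL's LAST_INSERT_ID() is tracked per connection, so an insert on another client's connection cannot interleave with it. The same property, illustrated with Python's sqlite3:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")

        def add(name):
            cur = conn.execute("INSERT INTO dept (name) VALUES (?)", (name,))
            conn.commit()
            return cur.lastrowid  # read from this connection, before releasing it

        print(add("new dept"))  # 1, regardless of what other connections insert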


  • How to find the real problem line in my code with Application Verifier?

    - by Newbie
    I am now trying to use the Application Verifier debugging tool, but I am stuck. First of all, it breaks the program at a line that is a simple variable assignment (s = 1; for example). Secondly, when I run the program under the debugger, my program seems to have changed its behaviour: I am drawing an image, and now one of the colors has changed. All the parts of the image that I don't draw on have changed color to #CDCDCD when they should be #000000, and I already set the default color to zero, yet it still becomes #CDCDCD. How do I make any sense of this? Here is the output AV gave me:

        VERIFIER STOP 00000002: pid 0x8C0: Access violation exception.

            14873000 : Invalid address causing the exception
            004E422C : Code address executing the invalid access
            0012EB08 : Exception record
            0012EB24 : Context record

        AVRF: Noncontinuable verifier stop 00000002 encountered. Terminating process ...

        The program '[2240] test.exe: Native' has exited with code -1073741823 (0xc0000001).


  • Saving responses from certain web resources while recording a scenario

    - by jdevelop
    I need to create a scenario for user interaction with a single-page web application. The application makes lots of AJAX calls in order to authenticate the user and fetch user data. So I created a simple scenario with the HTTP Test Script Recorder and tried to record my script. Everything went well; however, I noticed that while request data is recorded properly, response data is not recorded at all. I tried enabling 'Add assertions' and 'Regex matching', but that didn't work either. Can you please advise how I can record the response texts as well?


  • Some tables mixed together

    - by DJPython
    Hello. I have 2 different tables in my database. They have some columns in common and some different. For example:

        Table1: ID, Date, Name, Address, Fax
        Table2: ID, Date, Name, e-mail, Telephone number

    I want to display data from both tables together, sorted by Date and ID. For example, the first row displayed would be the newest record from the first table, but the second row would be the record from the other table posted right after the first one. Hope everybody understands; sorry for my English. Cheers.
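
    A sketch of the usual approach: SELECT the shared columns from each table, glue the rows together with UNION ALL, and sort the combined set (a source tag keeps track of which table each row came from). Demonstrated with sqlite3 and hypothetical data:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE table1 (id INTEGER, date TEXT, name TEXT, address TEXT, fax TEXT);
            CREATE TABLE table2 (id INTEGER, date TEXT, name TEXT, email TEXT, phone TEXT);
            INSERT INTO table1 VALUES (2, '2010-05-02', 'Ann', 'x', 'y');
            INSERT INTO table2 VALUES (1, '2010-05-03', 'Bob', 'b@c.d', '123');
        """)
        rows = conn.execute("""
            SELECT id, date, name, 'table1' AS source FROM table1
            UNION ALL
            SELECT id, date, name, 'table2' AS source FROM table2
            ORDER BY date DESC, id DESC""").fetchall()
        print(rows)  # newest first, regardless of which table a row came from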


  • MySQL: DISTINCT on some columns only

    - by Adam
    I have the following query that works well:

        SELECT DISTINCT city, region1, region2
        FROM static_geo_world
        WHERE country='AU'
          AND (city LIKE '%geel%' OR region1 LIKE '%geel%' OR region2 LIKE '%geel%'
               OR region3 LIKE '%geel%' OR zip LIKE 'geel%')
        ORDER BY city;

    I also need to extract a column named id, but adding it messes up the DISTINCT because each id is different. How can I get the same unique set of records as above but also get the id for each record? Note: sometimes this returns a few thousand records, so a separate query per record isn't possible. Any ideas would be very welcome.
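
    A common workaround is to turn the DISTINCT into a GROUP BY over the same three columns and pick one representative id per group; whether MIN(id) is the right representative depends on what the duplicate rows mean, and MySQL's GROUP_CONCAT(id) would return all of them instead. The reworked query, as a sketch:

        query = """
            SELECT MIN(id) AS id, city, region1, region2
            FROM static_geo_world
            WHERE country = 'AU'
              AND (city LIKE '%geel%' OR region1 LIKE '%geel%'
                   OR region2 LIKE '%geel%' OR region3 LIKE '%geel%'
                   OR zip LIKE 'geel%')
            GROUP BY city, region1, region2
            ORDER BY city
        """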


  • More efficient SQL than using "A UNION (B in A)"?

    - by machinatus
    Edit 1 (clarification): Thank you for the answers so far! The response is gratifying. I want to clarify the question a little, because based on the answers I think I did not describe one aspect of the problem correctly (and I'm sure that's my fault, as I was having a difficult time defining it even for myself). Here's the rub: the result set should contain ONLY the records with tstamp BETWEEN '2010-01-03' AND '2010-01-09', AND the one record where the tstamp IS NULL for each order_num in the first set (there will always be one with a null tstamp for each order_num). The answers given so far appear to include all records for a certain order_num if there are any with tstamp BETWEEN '2010-01-03' AND '2010-01-09'. For example, if there were another record with order_num = 2 and tstamp = 2010-01-12 00:00:00, it should not be included in the result.

    Original question: Consider an orders table containing id (unique), order_num, tstamp (a timestamp), and item_id (the single item included in an order). tstamp is null unless the order has been modified, in which case there is another record with identical order_num, and tstamp then contains the timestamp of when the change occurred. Example:

        id  order_num  tstamp               item_id
        __  _________  ___________________  _______
        0   1                               100
        1   2                               101
        2   2          2010-01-05 12:34:56  102
        3   3                               113
        4   4                               124
        5   5                               135
        6   5          2010-01-07 01:23:45  136
        7   5          2010-01-07 02:46:00  137
        8   6                               100
        9   6          2010-01-13 08:33:55  105

    What is the most efficient SQL statement to retrieve all of the orders (based on order_num) which have been modified one or more times during a certain date range? In other words, for each order we need all of the records with the same order_num (including the one with NULL tstamp), for each order_num WHERE at least one of the order_num's has tstamp NOT NULL AND tstamp BETWEEN '2010-01-03' AND '2010-01-09'. It's the "WHERE at least one of the order_num's has tstamp NOT NULL" that I'm having difficulty with. The result set should look like this:

        id  order_num  tstamp               item_id
        __  _________  ___________________  _______
        1   2                               101
        2   2          2010-01-05 12:34:56  102
        5   5                               135
        6   5          2010-01-07 01:23:45  136
        7   5          2010-01-07 02:46:00  137

    The SQL that I came up with is this, which is essentially "A UNION (B in A)", but it executes slowly and I hope there is a more efficient solution:

        SELECT history_orders.order_id, history_orders.tstamp, history_orders.item_id
        FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
              FROM orders
              WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09') AS history_orders
        UNION
        SELECT current_orders.order_id, current_orders.tstamp, current_orders.item_id
        FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
              FROM orders
              WHERE orders.tstamp IS NULL) AS current_orders
        WHERE current_orders.order_id IN
              (SELECT orders.order_id
               FROM orders
               WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09');
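
    A sketch of a single-scan alternative: derive the qualifying order numbers once in a derived table, join back to orders, and keep only the in-range rows plus each qualifying order's NULL-tstamp row, which matches the clarified requirement above. Whether it beats the UNION depends on indexing (an index on (order_num, tstamp) helps both). Demonstrated on the sample data with sqlite3:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE orders
            (id INTEGER, order_num INTEGER, tstamp TEXT, item_id INTEGER)""")
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", [
            (0, 1, None, 100), (1, 2, None, 101),
            (2, 2, '2010-01-05 12:34:56', 102), (3, 3, None, 113),
            (4, 4, None, 124), (5, 5, None, 135),
            (6, 5, '2010-01-07 01:23:45', 136),
            (7, 5, '2010-01-07 02:46:00', 137),
            (8, 6, None, 100), (9, 6, '2010-01-13 08:33:55', 105)])
        rows = conn.execute("""
            SELECT o.id, o.order_num, o.tstamp, o.item_id
            FROM orders o
            JOIN (SELECT DISTINCT order_num FROM orders
                  WHERE tstamp BETWEEN '2010-01-03' AND '2010-01-09') q
              ON o.order_num = q.order_num
            WHERE o.tstamp IS NULL
               OR o.tstamp BETWEEN '2010-01-03' AND '2010-01-09'
            ORDER BY o.id""").fetchall()
        for r in rows:
            print(r)   # ids 1, 2, 5, 6, 7 -- the expected result set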


  • String manipulation appears to be inefficient

    - by user2964780
    I think my code is too inefficient. I'm guessing it has something to do with using strings, though I'm unsure. Here is the code:

        genome = FASTAdata[1]
        genomeLength = len(genome)

        # Hash table holding all the k-mers we will come across
        kmers = dict()

        # We go through all the possible k-mers by index
        for outer in range(0, genomeLength-1):
            for inner in range(outer+2, outer+22):
                substring = genome[outer:inner]
                if substring in kmers:
                    # if we already have this substring on record,
                    # increase its value (count of appearances) by 1
                    kmers[substring] += 1
                else:
                    kmers[substring] = 1  # otherwise record that it's here once

    This is to search through all substrings of length at most 20. Now this code seems to take pretty much forever and never terminate, so something has to be wrong here. Is using [:] on strings causing the huge overhead? And if so, what can I replace it with? And for clarity, the file in question is nearly 200 MB, so pretty big.
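
    Slicing is O(k) and not the main culprit: the loop does two dict probes per substring, and at ~200 MB the table of every distinct substring (the code as written collects lengths 2 through 21) cannot fit in memory, which is why the program appears to hang rather than finish. A sketch of a tighter version for inputs that do fit (Counter.update hashes each key once per occurrence; for a full-size genome the usual approach is a single fixed k, or a disk-backed counter):

        from collections import Counter

        def count_kmers(genome, min_k=2, max_k=20):
            kmers = Counter()
            n = len(genome)
            for k in range(min_k, max_k + 1):          # one pass per length
                kmers.update(genome[i:i + k] for i in range(n - k + 1))
            return kmers

        print(count_kmers("banana", min_k=2, max_k=3).most_common(3))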


  • MySQL query for change in values in a logging table

    - by kiasectomondo
    I have a table like this:

        Index  PersonID  ItemCount  UnixTimeStamp
        1      1         1          1296000000
        2      1         2          1296000100
        3      2         4          1296003230
        4      2         6          1296093949
        5      1         0          1296093295

    Time and Index always go up. It's basically a logging table that logs the ItemCount each time it changes. I get the most recent ItemCount for each person like this:

        SELECT *
        FROM table a
        INNER JOIN (SELECT MAX(index) AS i FROM table GROUP BY PersonID) b
            ON a.index = b.i;

    What I want to do is get the most recent record for each PersonID that is at least 24 hours older than that person's most recent record, and then take the difference in ItemCount between the two, giving each person's change in ItemCount over the last 24 hours:

        personID  ChangeInItemCountOverAtLeast24Hours
        1         3
        2         -11
        3         6

    I'm sort of stuck on what to do next. How can I join another ItemCount based on the latest adjusted timestamp of the individual rows?
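
    A sketch of one way to finish it: self-join the latest row per person to the latest row at least 24 hours older, then subtract the item counts; people with no row that old simply drop out of the inner join. Shown with sqlite3 on the sample data (Index renamed to idx, since index is a reserved word):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE log
            (idx INTEGER, person_id INTEGER, item_count INTEGER, ts INTEGER)""")
        conn.executemany("INSERT INTO log VALUES (?, ?, ?, ?)", [
            (1, 1, 1, 1296000000), (2, 1, 2, 1296000100),
            (3, 2, 4, 1296003230), (4, 2, 6, 1296093949),
            (5, 1, 0, 1296093295)])
        rows = conn.execute("""
            SELECT cur.person_id, cur.item_count - old.item_count AS change
            FROM log cur
            JOIN log old ON old.person_id = cur.person_id
            WHERE cur.idx = (SELECT MAX(idx) FROM log
                             WHERE person_id = cur.person_id)
              AND old.idx = (SELECT MAX(idx) FROM log
                             WHERE person_id = cur.person_id
                               AND ts <= cur.ts - 86400)""").fetchall()
        print(rows)  # [(1, -2), (2, 2)] for the sample rows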


  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

      - mine star schemas, joining tables and views together to obtain a complete 360-degree view of a customer
      - combine transactional data, e.g. call detail record (CDR) data, etc.
      - define complex data transformation, model build, and model deploy analytical methodologies inside the Database

    His blog is posted in a multi-part series. Below are some opening excerpts from the first three blog entries. This is an excellent resource for any novice-to-skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks, Ari!

    Mining a Star Schema: Telco Churn Case Study (1 of 3)

    One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior.

    Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts. ...

    Mining a Star Schema: Telco Churn Case Study (2 of 3)

    This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

    1) Handling missing values for call data records

    The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach. The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value.

    In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

        select cust_id, cdre.tariff, cdre.month, mins
        from cdr_t cdr partition by (cust_id)
             right outer join
             (select distinct tariff, month from cdr_t) cdre
             on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

    ...

    Mining a Star Schema: Telco Churn Case Study (3 of 3)

    Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn.

    The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

        create or replace view churn_data_high as
        select * from churn_prep where value_band = 'HIGH';

    It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

        begin
          dbms_predictive_analytics.explain(
            'churn_data_high', 'churn_m6', 'expl_churn_tab');
        end;
        /
        select * from expl_churn_tab where rank <= 5 order by rank;

        ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
        -------------------- ----------------- ----------------- ----
        LOS_BAND                                      .069167052    1
        MINS_PER_TARIFF_MON  PEAK-5                   .034881648    2
        REV_PER_MON          REV-5                    .034527798    3
        DROPPED_CALLS                                 .028110322    4
        MINS_PER_TARIFF_MON  PEAK-4                   .024698149    5

    From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest univariate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object relational approach, many related predictors are included within a single top-level column. ...

    NOTE: These are just excerpts. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.
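
    For readers following along without Oracle at hand, a small plain-Python sketch of what the partition outer join plus imputation accomplishes (toy in-memory data; the article itself does this in SQL): generate every (customer, tariff, month) combination and fill the gaps with that customer-and-tariff average.

        from itertools import product
        from collections import defaultdict

        cdr = {  # (cust_id, tariff, month) -> minutes, with some gaps
            (1, "peak", 1): 100, (1, "peak", 2): 140,
            (1, "intl", 1): 30,
        }
        custs   = sorted({k[0] for k in cdr})
        tariffs = sorted({k[1] for k in cdr})
        months  = sorted({k[2] for k in cdr})

        # per customer-and-tariff averages, used as replacement values
        means = defaultdict(list)
        for (c, t, _m), mins in cdr.items():
            means[(c, t)].append(mins)

        dense = {}
        for c, t, m in product(custs, tariffs, months):  # densify the grid
            vals = means.get((c, t))
            dense[(c, t, m)] = cdr.get((c, t, m),
                                       sum(vals) / len(vals) if vals else 0)
        print(dense[(1, "peak", 2)], dense[(1, "intl", 2)])  # 140 30.0 (imputed)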

