Search Results

Search found 5233 results on 210 pages for 'a records'.


  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment:

        BEGIN
            UPDATE DSMS
            SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
            WHERE DSM = :DSM;
            IF (SQL%ROWCOUNT = 0) THEN
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;

    This runs fine, except that the update statement performs a dummy update when the data is the same as the parameter values provided. I would not mind the dummy update in a normal situation, but there's a replication/synchronization system built over this table that uses triggers to capture updated records, and executing this statement frequently for many records means I'd cause huge traffic in the triggers and the sync system. Is there any simple way to reformulate this code so that the update statement doesn't touch the record unless necessary, without using the following IF-EXISTS check, which I don't find sleek and which is maybe also not the most efficient for this task?

        DECLARE
            CNT NUMBER;
        BEGIN
            SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM;
            IF CNT > 0 THEN
                UPDATE DSMS
                SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
                WHERE DSM = :DSM
                  AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID);
            ELSE
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;
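    One route worth trying (a sketch, not from the original post): Oracle's MERGE accepts a WHERE clause on its WHEN MATCHED branch, so the row is only rewritten, and the triggers only fire, when something actually changed. The inequality tests assume the three columns are NOT NULL; nullable columns would need NULL-aware comparisons.

        MERGE INTO DSMS d
        USING (SELECT :DSM AS DSM, :SURNAME AS SURNAME,
                      :FIRSTNAME AS FIRSTNAME, :VALID AS VALID
               FROM DUAL) s
        ON (d.DSM = s.DSM)
        WHEN MATCHED THEN UPDATE
            SET d.SURNAME = s.SURNAME, d.FIRSTNAME = s.FIRSTNAME, d.VALID = s.VALID
            -- skip the write (and the replication triggers) for unchanged rows
            WHERE d.SURNAME != s.SURNAME
               OR d.FIRSTNAME != s.FIRSTNAME
               OR d.VALID != s.VALID
        WHEN NOT MATCHED THEN
            INSERT (DSM, SURNAME, FIRSTNAME, VALID)
            VALUES (s.DSM, s.SURNAME, s.FIRSTNAME, s.VALID);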


  • query structure - ignoring entries for the same event from multiple users?

    - by Andrew Heath
    One table in my MySQL database tracks game plays. It has the following structure:

        SCENARIO_VICTORIES [ID] [scenario_id] [game] [timestamp] [user_id] [winning_side] [play_date]

    ID is the autoincremented primary key. timestamp records the moment of submission for the record. winning_side has one of three possible values: 1, 2, or 0 (meaning a draw).

    One of the queries done on this table calculates the victory percentage for each scenario, when that scenario's page is viewed. The output is expressed as:

        Side 1 win %
        Side 2 win %
        Draw %

    and queried with:

        SELECT winning_side, COUNT(scenario_id)
        FROM scenario_victories
        WHERE scenario_id='$scenID'
        GROUP BY winning_side
        ORDER BY winning_side ASC

    and then processed into the percentages and such. Sorry for the long setup. My problem is this: several of my users play each other, and record their mutual results. So these battles are being doubly represented in the victory percentages and result counts. Though this happens infrequently, the userbase isn't large and the double entries do have a noticeable effect on the data.

    Given the table and query above, does anyone have any suggestions for how I can "collapse" records that have the same play_date & game & scenario_id & winning_side so that they're only counted once?
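    A sketch of one way to do it: collapse the duplicates in a subquery with DISTINCT before counting. The caveat is that this also merges genuinely distinct battles that happen to share all four values, which may or may not be acceptable here.

        SELECT winning_side, COUNT(*)
        FROM (
            SELECT DISTINCT play_date, game, scenario_id, winning_side
            FROM scenario_victories
            WHERE scenario_id = '$scenID'
        ) AS collapsed
        GROUP BY winning_side
        ORDER BY winning_side ASC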


  • Is it possible to replace values in a queryset before sending it to your template?

    - by Issy
    Hi guys, I'm wondering if it's possible to change a value returned from a queryset before sending it off to the template. Say, for example, you have a bunch of records:

        Date       | Time  | Description
        10/05/2010 | 13:30 | Testing...

    etc. However, based on the day of the week, the time may change, and this is static. For example, on a Monday the time is ALWAYS 15:00. Now you could add another table to configure special cases, but to me it seems overkill, as this is a rule. How would you replace that value before sending it to the template? I thought about using the new if tags (if day=1), but this is business logic rather than presentation.

    I tested this in a custom template tag:

        def render(self, context):
            result = self.model._default_manager.filter(from_date__lte=self.now).filter(to_date__gte=self.now)
            if self.day == 4:
                result = result.exclude(type__exact=2).order_by('time')
            else:
                result = result.order_by('type')
            result[0].time = '23:23:23'
            context[self.varname] = result
            return ''

    However it still displays the results from the DB. Is this somehow related to 'lazy' evaluation of templates? Thanks!

    Update (responding to comments below): it's not stored wrong in the DB, it's stored correctly, but there is a small side case where the value needs to change. For example, I have a From Date & To Date, and my query checks whether today's date is between those. With this, they could set up a from date - to date for an entire year, and the special cases (like Mondays, as an example) are taken care of. If I stored the special cases in the DB, I would have to capture several more records to cater for the side case, i.e. capturing the same information just to cater for that one day when the time changes. (And the time always changes on the same day, and is always the same.)
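    The lazy-evaluation hunch is close: slicing an unevaluated queryset (result[0]) fires its own fresh query and returns a throwaway object, so the assignment never touches what the template later iterates. A sketch of the usual fix, which is to materialize the queryset once and edit the in-memory objects (the Monday rule here is the hypothetical one from the question):

        import datetime

        def render(self, context):
            qs = self.model._default_manager.filter(
                from_date__lte=self.now, to_date__gte=self.now)
            if self.day == 4:
                qs = qs.exclude(type__exact=2).order_by('time')
            else:
                qs = qs.order_by('type')
            result = list(qs)             # evaluate once; later edits stick
            if result and self.day == 1:  # assumed business rule: Mondays at 15:00
                result[0].time = datetime.time(15, 0)
            context[self.varname] = result
            return ''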


  • Can I force the auto-generated Linq-to-SQL classes to use an OUTER JOIN?

    - by Gary McGill
    Let's say I have an Order table which has a FirstSalesPersonId field and a SecondSalesPersonId field. Both of these are foreign keys that reference the SalesPerson table. For any given order, either one or two salespersons may be credited with the order. In other words, FirstSalesPersonId can never be NULL, but SecondSalesPersonId can be NULL.

    When I drop my Order and SalesPerson tables onto the "Linq to SQL Classes" design surface, the class builder spots the two FK relationships from the Order table to the SalesPerson table, and so the generated Order class has a SalesPerson field and a SalesPerson1 field (which I can rename to SalesPerson1 and SalesPerson2 to avoid confusion). Because I always want to have the salesperson data available whenever I process an order, I am using DataLoadOptions.LoadWith to specify that the two salesperson fields are populated when the order instance is populated, as follows:

        dataLoadOptions.LoadWith<Order>(o => o.SalesPerson1);
        dataLoadOptions.LoadWith<Order>(o => o.SalesPerson2);

    The problem I'm having is that Linq to SQL is using something like the following SQL to load an order:

        SELECT ...
        FROM Order O
        INNER JOIN SalesPerson SP1 ON SP1.salesPersonId = O.firstSalesPersonId
        INNER JOIN SalesPerson SP2 ON SP2.salesPersonId = O.secondSalesPersonId

    This would make sense if there were always two salesperson records, but because there is sometimes no second salesperson (secondSalesPersonId is NULL), the INNER JOIN causes the query to return no records in that case. What I effectively want here is to change the second INNER JOIN into a LEFT OUTER JOIN. Is there a way to do that through the UI for the class generator? If not, how else can I achieve this? (Note that because I'm using the generated classes almost exclusively, I'd rather not have something tacked on the side for this one case if I can avoid it.)
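    Two things worth checking (hedged, since the DBML isn't shown). LINQ to SQL generally picks the join type from the nullability of the FK column in the designer, so if SecondSalesPersonId isn't marked nullable in the DBML, the association loads as if the second salesperson were mandatory; fixing the column's nullability in the designer usually fixes the LoadWith SQL too. Failing that, an explicit query can force the outer join:

        // a sketch, assuming the usual generated property names
        var orders =
            from o in db.Orders
            join sp in db.SalesPersons
                on o.SecondSalesPersonId equals (int?)sp.SalesPersonId into g
            from sp in g.DefaultIfEmpty()   // translates to LEFT OUTER JOIN
            select new { Order = o, SalesPerson2 = sp };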


  • Java Inheritance - Getting a Parameter from Parent Class

    - by Aaron
    I'm trying to take one parameter from the parent class Car and add it to my array (carsParked). How can I do this?

    Parent class:

        public class Car {
            protected String regNo;    // Car registration number
            protected String owner;    // Name of the owner
            protected String carColor;

            /** Creates a Car object
             *  @param rNo - registration number
             *  @param own - name of the owner
             **/
            public Car(String rNo, String own, String carColour) {
                regNo = rNo;
                owner = own;
                carColor = carColour;
            }

            /** @return The car registration number **/
            public String getRegNo() {
                return regNo;
            }

            /** @return A String representation of the car details **/
            public String getAsString() {
                return "Car: " + regNo + "\nColor: " + carColor;
            }

            public String getColor() {
                return carColor;
            }
        }

    Child class:

        public class Carpark extends Car {
            private String location;      // Location of the Car Park
            private int capacity;         // Capacity of the Car Park - how many cars it can hold
            private int carsIn;           // Number of cars currently in the Car Park
            private String[] carsParked;

            /** Constructor for Carparks
             *  @param loc - the Location of the Carpark
             *  @param cap - the Capacity of the Carpark
             */
            public Carpark(String locations, int room) {
                location = locations;
                capacity = room;
            }

            /** Records entry of a car into the car park */
            public void driveIn() {
                carsIn = carsIn + 1;
            }

            /** Records the departure of a car from the car park */
            public void driveOut() {
                carsIn = carsIn - 1;
            }

            /** Returns a String representation of information about the carpark */
            public String getAsString() {
                return location + "\nCapacity: " + capacity
                     + " Currently parked: " + carsIn
                     + "\n*************************\n";
            }
        }
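    A sketch rather than a definitive fix. As posted this won't compile: Car has no no-argument constructor, so Carpark's constructor would need super(...). More fundamentally, a car park is not a kind of car, so composition fits better than inheritance, and the "parameter from the parent class" is then simply read off a Car object passed in:

        public class Carpark {                 // no "extends Car"
            private final String location;
            private final int capacity;
            private int carsIn;
            private final String[] carsParked;

            public Carpark(String location, int capacity) {
                this.location = location;
                this.capacity = capacity;
                this.carsParked = new String[capacity];
            }

            /** Records entry of a car, remembering its registration number */
            public void driveIn(Car car) {
                if (carsIn < capacity) {
                    carsParked[carsIn] = car.getRegNo();  // the field from Car
                    carsIn++;
                }
            }
        }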


  • How to remove subsets from a given text file

    - by user324887
    I have a problem like this. Given this input:

        10 20 30 40 70
        20 30 70
        30 40
        10 20
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80
        45 65
        20

    I want to remove all subset transactions from this file. The output file should look like this:

        10 20 30 40 70
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80

    where records like "20 30 70", "30 40", "10 20", "45 65" and "20" are removed because they are subsets of other records. I am using a set for this, but I am not able to create one set per line. Does anybody know how to do this? Please help me. Here is my code:

        #include <cstdio>
        #include <iostream>
        #include <sstream>
        #include <set>
        #include <string>
        using namespace std;

        set<string> s1;

        int main()
        {
            FILE *fp = fopen("abc.txt", "r");
            if (fp != NULL)
            {
                char line[128]; /* or other suitable maximum line size */
                while (fgets(line, sizeof line, fp) != NULL) /* read a line */
                {
                    istringstream iss(line);
                    do
                    {
                        string sub;
                        iss >> sub;
                        s1.insert(sub);
                    } while (iss);
                    for (set<string>::const_iterator p = s1.begin(); p != s1.end(); ++p)
                        cout << *p << endl;
                }
            }
        }
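    A sketch of one way to structure it: declare a fresh set<int> inside the loop (so there really is one set per line), collect all of them, then drop any set contained in another with std::includes. One caveat: by strict subset logic this also drops "20 30 40", which the expected output above keeps, so the intended rule may need refining.

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <iterator>
        #include <set>
        #include <sstream>
        #include <string>
        #include <vector>
        using namespace std;

        int main()
        {
            ifstream in("abc.txt");
            vector<set<int> > rows;
            string line;
            while (getline(in, line))        // one set per input line
            {
                istringstream iss(line);
                set<int> s((istream_iterator<int>(iss)), istream_iterator<int>());
                if (!s.empty())
                    rows.push_back(s);
            }
            for (size_t i = 0; i < rows.size(); ++i)
            {
                bool dominated = false;
                for (size_t j = 0; j < rows.size() && !dominated; ++j)
                {
                    if (i == j)
                        continue;
                    // rows[i] is contained in rows[j]; drop it, except keep the
                    // first copy when the two sets are equal duplicates
                    if (includes(rows[j].begin(), rows[j].end(),
                                 rows[i].begin(), rows[i].end())
                        && (rows[i].size() < rows[j].size() || i > j))
                        dominated = true;
                }
                if (!dominated)
                {
                    for (set<int>::const_iterator p = rows[i].begin();
                         p != rows[i].end(); ++p)
                        cout << *p << ' ';
                    cout << '\n';
                }
            }
        }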


  • SQL different joins not making any difference to result

    - by Chrissi
    I'm trying to write a quick (ha!) program to organise some of my financial information. What I ideally want is a query that will return all records with financial information in them from TableA. There should be one row for each month, but in instances where there were no transactions for a month there will be no record. I get results like this:

        SELECT Period, Year, TotalValue FROM TableA WHERE Year='1997'

        Period  Year  TotalValue
        1       1997  298.16
        2       1997  435.25
        4       1997  338.37
        8       1997  336.07
        9       1997  578.97
        11      1997  361.23

    By joining on a table (well, a view in this instance) which just contains a field Period with values from 1 to 12, I expect to get something like this:

        SELECT p.Period, a.Year, a.TotalValue
        FROM Periods AS p
        LEFT JOIN TableA AS a ON p.Period = a.Period
        WHERE Year='1997'

        Period  Year  TotalValue
        1       1997  298.16
        2       1997  435.25
        3       NULL  NULL
        4       1997  338.37
        5       NULL  NULL
        6       NULL  NULL
        7       NULL  NULL
        8       1997  336.07
        9       1997  578.97
        10      NULL  NULL
        11      1997  361.23
        12      NULL  NULL

    What I'm actually getting, though, is the same result no matter how I join it (except CROSS JOIN, which goes nuts, but it's really not what I wanted anyway; it was just to see if different joins are even doing anything). LEFT JOIN, RIGHT JOIN and INNER JOIN all fail to provide the NULL records I am expecting. Is there something obvious that I'm doing wrong in the JOIN? Does it matter that I'm joining onto a view?
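    The likely culprit is the WHERE clause rather than the join or the view: WHERE Year='1997' is applied after the outer join, and the unmatched rows have Year = NULL, which fails the filter, so the LEFT JOIN silently degrades into an inner join. A sketch of the usual fix, moving the filter into the join condition so unmatched periods survive:

        SELECT p.Period, a.Year, a.TotalValue
        FROM Periods AS p
        LEFT JOIN TableA AS a
               ON p.Period = a.Period
              AND a.Year = '1997'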


  • Queuing using table or MSMQ?

    - by Lieven Cardoen
    Part of the application I'm working on is an SWF that shows a test with some 80 questions. Each question is saved to a SQL Server database through WebORB and ASP.NET. When a candidate finishes the test, the session needs to be validated. The problem is that sometimes 350 candidates finish their tests at the same moment, and CPU on the web server and SQL Server explodes (350 validations running concurrently).

    Now, how would I implement queuing here? In the database there's a table that has a record for each session. One column holds the status: 1 is finished, 2 is validated. I could implement queuing in two ways (as I see it; maybe you have other suggestions):

    1. A process checks the table for records with status 1. If it finds one, it validates the session. So sessions are validated one after another.

    2. When a candidate finishes a session, a message is sent to an MSMQ queue. Another process listens to the queue and validates sessions one after another.

    Now, what would be the best approach? And where do you start the process that will validate sessions? In your global.asax (Application_Start)? As a Windows service? As an exe in the root of the website that is started in Application_Start? To me, using the table and looking for records with status 1 seems the easiest way.
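    If the table route wins, a sketch of the usual table-as-queue claim pattern for SQL Server (the names and the status value 3, a "being validated" state, are assumptions, not from the original post). READPAST skips rows another worker has already locked, so several validators can poll the same table without colliding:

        UPDATE TOP (1) Sessions WITH (ROWLOCK, READPAST)
        SET    status = 3
        OUTPUT inserted.session_id
        WHERE  status = 1;

    Whichever store is used, the real win is the same: a small, fixed number of workers drains the queue, so 350 simultaneous finishes no longer mean 350 concurrent validations.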


  • MySQL nested CASE error I need help with?

    - by AK
    What I am trying to do here is: IF the records in table todo identified by $done have a value in the column recurinterval, THEN reset the date_scheduled column, ELSE just set the status_id column to 6 for those records. This is the error I get from mysql_error():

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN SET date_sche' at line 2

    How can I make this statement work?

        UPDATE todo
        CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN
            SET date_scheduled = CASE recurunit
                WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
                WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
                WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
                WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
            END
            WHERE todo_id IN ($done)
        ELSE
            SET status_id = 6 WHERE todo_id IN ($done)
        END

    The following MySQL statement worked just fine before I revised it as above:

        UPDATE todo
        SET date_scheduled = CASE recurunit
            WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
            WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
            WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
            WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
        END
        WHERE todo_id IN ($done)
          AND recurinterval != 0
          AND recurinterval IS NOT NULL
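    The syntax error comes from using CASE as a statement: in SQL, CASE is an expression that can only appear inside a clause such as SET, and an UPDATE gets exactly one SET and one WHERE. A sketch of a single-statement form, where each column's CASE assigns the column back to itself when the condition doesn't apply:

        UPDATE todo
        SET date_scheduled = CASE
                WHEN recurinterval IS NOT NULL AND recurinterval != 0 THEN
                    CASE recurunit
                        WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
                        WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
                        WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
                        WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
                    END
                ELSE date_scheduled   -- leave unchanged
            END,
            status_id = CASE
                WHEN recurinterval IS NULL OR recurinterval = 0 THEN 6
                ELSE status_id        -- leave unchanged
            END
        WHERE todo_id IN ($done)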


  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I'm using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds; then all goes back to normal until the next lag. I'm using InnoDB. It's like hiccups in MySQL. What could be creating this sort of periodic problem? I don't have any cron jobs or processes that run whenever the X period happens. The X period varies between 30 minutes and 2 hours; for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours. Relevant my.cnf settings:

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size = 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections = 512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The most heavily written one is InnoDB; the others are read more than written. Several of the tables in the InnoDB database have more than 2 million records; the other databases top out at about 400 thousand records and do not change often. The PC is a Core 2 Duo 8400 with 4GB RAM, running 32-bit Ubuntu.
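    Two hedged guesses based on this config rather than a diagnosis. With innodb_log_file_size left commented out, 5.5 runs with its tiny default redo logs, and when they fill, InnoDB stalls everything for a burst of checkpoint flushing, which matches a periodic hiccup on a write-heavy database. The 128M query cache is another classic stall source, since invalidations on a written table serialize on a global mutex. A starting point to test, not a known fix:

        # Shut MySQL down cleanly and move the old ib_logfile* files aside
        # before changing the log size on 5.5, or it will refuse to start.
        innodb_log_file_size           = 128M  # less bursty checkpoint flushing
        innodb_flush_log_at_trx_commit = 2     # smoother commits; risks ~1s of
                                               # transactions on a crash
        query_cache_size               = 0     # rule the query cache in or out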


  • Should I pass a SqlDataReader by reference or not when passing it out to multiple threads.

    - by deroby
    Hi all, being new to C# I've run into this conundrum when passing around a SqlDataReader between different threads. Without going into too much detail, the idea is to have a main thread fetch data from the database (a large recordset) and then have a helper task run through it record by record, doing some work based on the contents. There is no feedback to the recordset; it simply wades through until no records are left. This works fine, but given the nature of the job at hand it should be possible to spread the work over different threads (CPUs) to maximize throughput (the order of execution is of no significance).

    The question then becomes: when I pass this recordset as a SqlDataReader, do I have to use ref or not? It boils down to this: if I pass the object around without specifying ref, won't it create new copies in memory and have records processed n times? Or don't I risk the record position being moved forward while not all fields have been fully read yet? The latter seems more like a data-racing issue and is probably covered by the lock()ing mechanism (or not?).

    My initial take on the problem was that it doesn't really hurt to pass the variable using ref, yet as a colleague put it: "you only need ref when you're doing something wrong" =) Additionally, using ref stops me from applying a using() construction, which isn't very nice either. I therefore created a "basic" project that takes the same approach but without the ref notation. Tests so far show that it works flawlessly on a Core2Duo (2 CPUs) using any number of threads, yet I'm still a bit wary... What do you experts think about this? Use ref or not?

    You can find the test project here, as it seems I can't upload it to this question directly?!

    PS: it's just a test project and I'm new to C#, so please be gentle when breaking down the code =P
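    For what it's worth, a sketch of the reasoning: SqlDataReader is a class, so passing it without ref copies only the reference, and every thread still shares the single reader (no duplicate processing); ref would only matter if a callee needed to reassign the caller's variable. The genuine hazard is the second concern above, a race between Read() and reading the fields, so the advance-and-copy step needs to be atomic:

        // assumes all workers share one reader and one gate object
        void Worker(SqlDataReader reader, object gate)
        {
            while (true)
            {
                object[] row;
                lock (gate)
                {
                    if (!reader.Read()) return;        // no more records
                    row = new object[reader.FieldCount];
                    reader.GetValues(row);             // snapshot before unlocking
                }
                Process(row);   // hypothetical per-record work, outside the lock
            }
        }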


  • Can not issue data manipulation statements with executeQuery in java

    - by user225269
    I'm trying to insert records into a MySQL database using Java. What do I place in this code so that I can insert records?

        String id;
        String name;
        String school;
        String gender;
        String lang;

        Scanner inputs = new Scanner(System.in);
        System.out.println("Input id:");
        id = inputs.next();
        System.out.println("Input name:");
        name = inputs.next();
        System.out.println("Input school:");
        school = inputs.next();
        System.out.println("Input gender:");
        gender = inputs.next();
        System.out.println("Input lang:");
        lang = inputs.next();

        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/employee_record", "root", "MyPassword");
        PreparedStatement statement = con.prepareStatement(
                "insert into employee values('id', 'name', 'school', 'gender', 'lang');");
        statement.executeUpdate();
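    A sketch of the missing piece: the posted statement inserts the literal strings 'id', 'name' and so on, never the variables. PreparedStatement exists precisely to bind those values through '?' placeholders, and the error in the title appears when such a DML statement is run with executeQuery instead of executeUpdate. The column list below assumes the employee table's columns are named like the variables; adjust to the real schema.

        PreparedStatement statement = con.prepareStatement(
                "INSERT INTO employee (id, name, school, gender, lang) "
              + "VALUES (?, ?, ?, ?, ?)");
        statement.setString(1, id);
        statement.setString(2, name);
        statement.setString(3, school);
        statement.setString(4, gender);
        statement.setString(5, lang);
        statement.executeUpdate();   // executeUpdate for INSERT/UPDATE/DELETE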


  • How to optimize an SQL query to make it faster

    - by user502083
    Hello everyone. I have a very simple small database; two of its tables are:

        Node (Node_ID, Node_name, Node_Date)   : Node_ID is the primary key
        Citation (Origin_Id, Target_Id)        : PRIMARY KEY (Origin_Id, Target_Id), each a FK into Node

    Now I write a query that first finds all citations whose Origin_Id has a specific date, and then I want to know the target dates of those records. I'm using sqlite in Python. The Node table has 3,000 records and Citation has 9,000 records, and my query looks like this in a function:

        def cited_years_list(self, date):
            c = self.cur
            try:
                c.execute("""select n.Node_Date, count(*) from Node n
                             INNER JOIN
                                 (select c.Origin_Id AS Origin_Id, c.Target_Id AS Target_Id,
                                         n.Node_Date AS Date
                                  from CITATION c
                                  INNER JOIN NODE n ON c.Origin_Id=n.Node_Id
                                  where CAST(n.Node_Date as INT)={0}) VW
                             ON VW.Target_Id=n.Node_Id
                             GROUP BY n.Node_Date;""".format(date))
                cited_years = c.fetchall()
                self.conn.commit()
                print('Cited Years are : \n ', str(cited_years))
            except Exception as e:
                print('Cited Years retrieval failed ', e)
            return cited_years

    Then I call this function for some specific years, but it's crazy slow (around 1 minute per year). The query works fine; it's just slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query :) I should also mention that I have indices on Origin_Id and Target_Id, so the inner join should be pretty fast, but it's not!
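    A sketch of a flatter version, assuming Node_Date compares cleanly against str(date): CAST(n.Node_Date as INT) wraps the filtered column in an expression, which stops sqlite from using any index on Node_Date, and formatting the value into the SQL string means the statement is re-parsed on every call. Binding the parameter and filtering the raw column lets an index on Node(Node_Date) do the work:

        c.execute("""
            SELECT n2.Node_Date, COUNT(*)
            FROM Citation c
            JOIN Node n1 ON c.Origin_Id = n1.Node_ID
            JOIN Node n2 ON c.Target_Id = n2.Node_ID
            WHERE n1.Node_Date = ?
            GROUP BY n2.Node_Date
        """, (str(date),))

    CREATE INDEX IF NOT EXISTS idx_node_date ON Node(Node_Date) would be the matching index, and the commit() is unnecessary for a SELECT.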


  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats

        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few related tables. In my view I get records from one table, and in the template I show them along with related info from the others. Generating the page takes a few seconds even for a very small table (<100 records). Is there an easy way to cache queries from templates? Or do I have to build some big structure in my view (with all the related tables), cache it, and send that to the template?
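    On the columns, these surface standard memcached counters: Gets is the number of lookups, Hits how many of them found a value, Hit_Rate is Hits/Gets, Keys the items currently stored, Traffic_In/Out bytes written to and read from the server, Usage memory consumed, and Uptime the time since the memcached process started. For the last question, two standard tools, sketched with placeholder names (MyModel, record_listing): select_related() pulls related rows in one JOINed query instead of one query per record, and Django's template-fragment caching stores the rendered HTML itself:

        # in the view: one JOINed query instead of N+1 per-record lookups
        records = MyModel.objects.select_related().filter(...)

        {# in the template: cache the rendered fragment for 10 minutes #}
        {% load cache %}
        {% cache 600 record_listing %}
            ... the expensive relationship-walking markup ...
        {% endcache %}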


  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest

    After a day of running (against nearly 1 GB of data), a set of statements is tumbling down to 40 inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code

    The code to insert the information comes in two parts: a master record and detail records. The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, 'T', 3);

    Proposed Solution

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := (SELECT LAST_INSERT_ID());

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3);

    Constraints

    The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table if it would help.

    Question

    Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
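    Building on the proposed solution (a MySQL-flavored sketch, since LAST_INSERT_ID suggests MySQL): batching the detail rows into one multi-row INSERT removes two round trips and two parse cycles per month, and grouping many months into each transaction avoids a log flush per statement.

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := LAST_INSERT_ID();

        -- one statement for the whole month's details
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES
            (@month_ref_id, 0,   ' ', 1),
            (@month_ref_id, 0.5, ' ', 2),
            (@month_ref_id, 0,   'T', 3);

    At billions of rows, writing the generated detail rows to a file and bulk-loading them with LOAD DATA INFILE is usually another order of magnitude faster than any INSERT loop.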


  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an XML namespace to define your XML... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process gets bloated too once the data grows in size and things get entangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of it across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome, and I'm really looking for something better.


  • What should I do with an over-bloated select-box/drop-down

    - by Tristan Havelick
    All web developers run into this problem when the amount of data in their project grows, and I have yet to see a definitive, intuitive best practice for solving it. When you start a project, you often create forms with select tags to help pick related objects for one-to-many relationships.

    For instance, I might have a system with Neighbors, where each Neighbor belongs to a Neighborhood. In version 1 of the application I create an edit form with a drop-down for selecting the neighborhood, which simply lists the 5 possible neighborhoods in my geographically limited application. In the beginning this works great. So long as I have maybe 100 records or less, my select box will load quickly and be fairly easy to use.

    However, let's say my application takes off and goes national. Instead of 5 neighborhoods I have 10,000. Suddenly my little drop-down takes forever to load, and once it loads, it's hard to find your neighborhood in the massive alphabetically sorted list.

    Now, in this particular situation, having hierarchical data and letting users drill down using several dynamically generated drop-downs would probably work okay. But what is the best solution when the objects/records being selected are not hierarchical in nature? In the past I've done this with a popup containing a search box and a list, but this seems clunky and dated. In today's web 2.0 world, what is a good way to find one object amongst many for one's forms? I've considered using an Ajaxified search box, but this seems to work best for free text, and falls apart a little when the data to be saved is just a reference to another object or record.

    Feel free to cite specific libraries with generic solutions to this problem, or simply share what you have done in your projects in a more general way.


  • Table index design

    - by Swoosh
    I would like to add index(es) to my table. I am looking for general ideas on how to add more indexes to a table, other than the clustered PK, and I would like to know what to look for when doing this.

    So, my example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records.

    IMPORTANT: massive bulk inserts add data to this table.

    The table has 27 columns (so far, and counting :D):

        int      x 9 columns  (IDs)
        varchar  x 10 columns
        bit      x 2 columns
        datetime x 5 columns

    INT COLUMNS: all of these are INT IDs, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like "parent-ID" (a self-referencing ID).

    JOINs: all the "small" tables have PKs, the usual way... clustered.

    STRING COLUMNS: there is a Company column (string!) that is something like "5 characters long all the time", and every user will be restricted by this one. If there are 15 different companies in Task, the logged-in user would only see one. So there's always a filter on this column. Might it be a good idea to add an index to it?

    DATE COLUMNS: I think they don't index these... right? Or can/should they be indexed?
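    A few hedged sketches rather than firm advice, since the workload decides. The always-filtered Company column is the strongest candidate for the leading key of a nonclustered index; datetime columns index perfectly well and pay off for range filters; and the low-cardinality status/priority IDs rarely deserve standalone indexes but work as trailing or included columns. Every index added slows the massive bulk inserts, so measure before keeping any of these:

        -- column names are placeholders; the post doesn't give the schema
        CREATE NONCLUSTERED INDEX IX_Task_Company_Date
            ON Task (Company, CreatedDate)     -- always-on filter, then range filter
            INCLUDE (StatusId, PriorityId);    -- covers common lookups

        CREATE NONCLUSTERED INDEX IX_Task_Parent
            ON Task (ParentTaskId);            -- self-join / hierarchy walks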


  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is.

    We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = Supported, i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudocode something like: "if a record doesn't exist that matches these parameters, then write it".

    Now, the issue is that three records are being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match, and another record is created. My understanding is that the first record passed to the proc in the data flow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should.

    In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway...

    Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.
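    One hedged observation: within a single session, an uncommitted insert is visible to later statements on the same connection, so uncommitted data alone may not be the whole story; data flows also process rows in buffers, so the three calls can overlap or arrive together. Either way, the durable fix is usually to make the existence check and the insert one atomic statement inside the proc, backed by a unique constraint on the matching columns:

        -- a sketch with placeholder names
        INSERT INTO dbo.Target (KeyA, KeyB, Payload)
        SELECT @KeyA, @KeyB, @Payload
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.Target WITH (UPDLOCK, HOLDLOCK)
                          WHERE KeyA = @KeyA
                            AND KeyB = @KeyB);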


  • MySQL query against pseudo-key-value pair data in WordPress custom query

    - by andrevr
    I'm writing a custom WordPress query to use some of the data which the WooThemes Diarise theme creates. Diarise is an event planner theme with calendar, blah, blah... and uses custom fields to store the event start and end dates in WP custom fields in the wp_postmeta table, which implements a key-value store. So for each post in the "event" category, there are 2 records in wp_postmeta that I'm interested in, named event_start_date and event_end_date.

    The task is to compare a tourist's arrival and departure dates with the start and end dates of events, yielding a what's-on list of the events available. We thought we'd killed it with a grand flash of logic that goes like this: disregard any event that ends before the tourist arrives, and any that begins after the departure date. I wrote this query:

        SELECT wposts.*
        FROM wp_posts wposts
        LEFT JOIN wp_postmeta wpostmeta ON wposts.ID = wpostmeta.post_id
        LEFT JOIN wp_term_relationships ON (wposts.ID = wp_term_relationships.object_id)
        LEFT JOIN wp_term_taxonomy ON (wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id)
        WHERE wp_term_taxonomy.taxonomy = 'category'
          AND wp_term_taxonomy.term_id IN (3,4)
          AND ( wpostmeta.meta_key = 'event_start_date'
                AND NOT ( concat(substr(wpostmeta.meta_value,7,4),'-',substr(wpostmeta.meta_value,4,2),'-',substr(wpostmeta.meta_value,1,2)) > '2010-07-31' ) )
          AND ( wpostmeta.meta_key = 'event_end_date'
                AND NOT ( concat(substr(wpostmeta.meta_value,7,4),'-',substr(wpostmeta.meta_value,4,2),'-',substr(wpostmeta.meta_value,1,2)) < '2010-05-01' ) )
        ORDER BY wpostmeta.meta_value ASC

    And, of course, it returns no records. The problem, I believe, is in the dual reference to wpostmeta.meta_key, but how do I get around that?
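    The diagnosis is right: a single wpostmeta alias holds one meta_key per row, so requiring it to equal two different keys at once matches nothing. The usual fix is to join wp_postmeta once per key; the sketch below also swaps the concat/substr rebuild for STR_TO_DATE, assuming dd/mm/yyyy meta values as implied by the substring offsets:

        SELECT DISTINCT p.*
        FROM wp_posts p
        JOIN wp_postmeta m_start ON m_start.post_id = p.ID AND m_start.meta_key = 'event_start_date'
        JOIN wp_postmeta m_end   ON m_end.post_id   = p.ID AND m_end.meta_key   = 'event_end_date'
        JOIN wp_term_relationships tr ON tr.object_id = p.ID
        JOIN wp_term_taxonomy tt ON tt.term_taxonomy_id = tr.term_taxonomy_id
        WHERE tt.taxonomy = 'category'
          AND tt.term_id IN (3, 4)
          AND STR_TO_DATE(m_start.meta_value, '%d/%m/%Y') <= '2010-07-31'  -- starts before departure
          AND STR_TO_DATE(m_end.meta_value,   '%d/%m/%Y') >= '2010-05-01'  -- ends after arrival
        ORDER BY STR_TO_DATE(m_start.meta_value, '%d/%m/%Y') ASC

    DISTINCT guards against duplicate posts when an event sits in both term 3 and term 4.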


  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information

    Let's say I have two database servers, both SQL Server 2008. One is on my LAN (ServerLocal), the other in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available.

    Current solution

    I thought replication would be a nice solution, so I've made ServerLocal a publisher and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote the existing data is purged, and the ServerRemote database is once again an exact replica of the database on ServerLocal.

    The problem

    Records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of them I'd like to keep the existing data (aspnet_users, for instance) and update the records only where necessary. What kind of replication fits my problem?


  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there. Here is the situation we have:

    a) I have an Access database / application that records a significant amount of data. Significant fields would be hours, # of sales, # of unreturned calls, etc.

    b) I have an Excel document that connects to the Access database and pulls data in to visualize it.

    As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results from the form, based on the related hours. This operation is slow (~10 seconds) and seems redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance!

    Update: I have confirmed (thanks to helpful comments/responses) that the problem is with the data loading itself; removing all the VLOOKUPs only took a second or two off the load time. So the question stands: how can I get the data rapidly and reliably, without so much time involved (it loads around 3,000 records into the PivotTables)?


  • AWK: compare apache dates without using regular expression

    - by smallmeans
    I'm writing a log-analysis application and want to grab Apache log records between two certain dates. Assume that a date is formatted as such:

        22/Dec/2009:00:19    (day/month/year:hour:minute)

    Currently, I'm using a regular expression to replace the month name with its numeric value and remove the separators, so the above date is converted to 200912220019 (year, month, day, hour, minute), making date comparison trivial. But... running a regex on each record of a large file, say one containing a quarter of a million records, is extremely costly. Is there any other method not involving regex substitution? Thanks in advance.

    Edit: here's the function doing the conversion/comparison:

        function dateInRange(t, from, to) {
            sub(/[[]/, "", t);
            split(t, a, "[/:]");
            match("JanFebMarAprMayJunJulAugSepOctNovDec", a[2]);
            a[2] = sprintf("%02d", (RSTART + 2) / 3);
            s = a[3] a[2] a[1] a[4] a[5];
            return s >= from && s <= to;
        }

    "from" and "to" are the interval bounds in the aforementioned format, and "t" is the raw Apache log date/time field (e.g. [22/Dec/2009:00:19:36).
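    A sketch of a regex-free variant: the Apache timestamp is fixed-width, so substr() with constant offsets replaces both the sub() and the split(), and index() is a plain substring search with no regex engine behind it.

        # t looks like "[22/Dec/2009:00:19:36"
        function dateInRange(t, from, to,    m, s) {
            m = sprintf("%02d",
                        (index("JanFebMarAprMayJunJulAugSepOctNovDec",
                               substr(t, 5, 3)) + 2) / 3);
            s = substr(t, 9, 4) m substr(t, 2, 2) substr(t, 14, 2) substr(t, 17, 2);
            return s >= from && s <= to;
        }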


  • Linq. Help me tune this!

    - by dtrick
    I have a LINQ query that is causing some timeout issues. Basically, I have a query that returns the top 100 results from a table that has approximately 500,000 records. Here is the query:

        using (var dc = CreateContext())
        {
            var accounts = string.IsNullOrEmpty(searchText)
                ? dc.Genealogy_Accounts
                      .Where(a => a.Genealogy_AccountClass.Searchable)
                      .OrderByDescending(a => a.ID)
                      .Take(100)
                : dc.Genealogy_Accounts
                      .Where(a => (a.Code.StartsWith(searchText)
                                   || a.Name.StartsWith(searchText))
                                  && a.Genealogy_AccountClass.Searchable)
                      .OrderBy(a => a.Code)
                      .Take(100);

            return accounts.Select(a =>
        }

    Oddly enough, it is the first LINQ query (the no-search-text branch) that is causing the timeout. I thought that by doing a Take we wouldn't need to scan all 500k records; however, that must be what is happening. I'm guessing that the join to find what is 'searchable' is causing the issue. I'm not able to denormalize the tables... so I'm wondering if there is a way to rewrite the LINQ query to get it to return quicker, or if I should just write this query as a stored procedure (and if so, what might it look like). Thanks.
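    Before rewriting anything, it may help to confirm what SQL the slow branch actually produces. LINQ to SQL does translate Take(100) into TOP (100), so if the server still scans the table, the captured SQL and its plan will show why, and that SQL is also the natural starting point for the stored procedure mentioned. DataContext.Log makes the capture a one-liner:

        // a sketch: dump the generated SQL for the slow branch
        using (var dc = CreateContext())
        {
            dc.Log = Console.Out;    // or a StreamWriter, to capture to a file
            var accounts = dc.Genealogy_Accounts
                .Where(a => a.Genealogy_AccountClass.Searchable)
                .OrderByDescending(a => a.ID)
                .Take(100)
                .ToList();           // force execution so the SQL is emitted
        }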


  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working on Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. The requirements:

    The project is a migration project; data from one application is moved to another, and we need to compare the data from the existing application against the new one.

    Solution: I have developed a comparison engine in Ruby for the above requirement.

    a) Get the data, deduplicated and sorted, from both DBs.
    b) Put the data in a text file with "||" as the delimiter.
    c) Use the key columns (numbers) that identify a unique record in the DB to compare the two files.

    For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1-5 are key columns. I use these key columns and compare 6 and 7, which results in a fail.

    Issue: the major problem we are facing is that if the mismatches exceed 70% for 100,000 records or more, the comparison time is large. If the mismatches are less than 40%, the comparison time is OK. diff and diff-LCS will not work in this case, because we need key columns to arrive at an accurate comparison between the two applications. Is there any other method to reduce the time when the mismatch rate is high (more than 70% for 100,000 records or more)? Thanks.
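    A sketch of a hash-join approach, assuming "||"-delimited rows whose first five fields form the key (file names are hypothetical): indexing each file once makes the total work linear in the file sizes, so the running time stops depending on the mismatch rate.

        # build key => non-key-fields lookup in one pass over the file
        def index_file(path, key_cols)
          rows = {}
          File.foreach(path) do |line|
            fields = line.chomp.split("||")
            rows[fields.take(key_cols)] = fields.drop(key_cols)
          end
          rows
        end

        old_rows = index_file("app_old.txt", 5)
        new_rows = index_file("app_new.txt", 5)

        mismatches = old_rows.reject { |key, rest| new_rows[key] == rest }
        missing    = old_rows.keys - new_rows.keys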

