Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

  • Does having to insert a record, then update the same record, warrant a 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items, and we're storing the total cost of an order (the sum of the prices on its order lines) in the orders table:

        orders
          id
          ref
          total_cost

        lines
          id
          order_id
          price

    In a simple application, the order and its lines are created during the same step of the checkout process. So this means:

        INSERT INTO orders ...
        -- Get ID of inserted order record
        INSERT INTO lines VALUES (null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is figuring out the best way to store the total cost of an order. I don't want to have to:

    1. create an order
    2. create the lines on the order
    3. calculate the cost of the order based on its lines
    4. then update the record created in step 1 in the orders table

    This would mean a nullable total_cost field on orders, for starters. My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table, but I think it's redundant. Ideally, since everything required to calculate total cost (the lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?
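
    One way to sidestep the insert-then-update cycle is to compute the total from the line items in application code and write everything in a single transaction, so total_cost never needs to be nullable. A minimal JDBC sketch, assuming the line prices are already known at checkout time; the CartLine type is a hypothetical stand-in for the cart, while table and column names are taken from the question:

        import java.math.BigDecimal;
        import java.sql.*;
        import java.util.List;

        class OrderSaver {
            record CartLine(BigDecimal price) {}   // hypothetical stand-in for the checkout cart

            static void saveOrder(Connection con, String ref, List<CartLine> cart) throws SQLException {
                con.setAutoCommit(false);
                try {
                    // The total is known before the order row is written
                    BigDecimal total = cart.stream().map(CartLine::price)
                                           .reduce(BigDecimal.ZERO, BigDecimal::add);
                    long orderId;
                    try (PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO orders (ref, total_cost) VALUES (?, ?)",
                            Statement.RETURN_GENERATED_KEYS)) {
                        ps.setString(1, ref);
                        ps.setBigDecimal(2, total);
                        ps.executeUpdate();
                        try (ResultSet keys = ps.getGeneratedKeys()) {
                            keys.next();
                            orderId = keys.getLong(1);   // order ID for the line rows
                        }
                    }
                    try (PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO lines (order_id, price) VALUES (?, ?)")) {
                        for (CartLine line : cart) {
                            ps.setLong(1, orderId);
                            ps.setBigDecimal(2, line.price());
                            ps.addBatch();
                        }
                        ps.executeBatch();
                    }
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }

    With this shape the total is always present on insert, and no 1:1 order_totals table is needed.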

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? There is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat:

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge: there are about 100 parent codes, and each has about 250 values. There will not be duplicates within a single code list. I'm doing this in Java, and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0, then compare the rest of the values against this map. If a value is in the map, increment the count; otherwise add it to the map. The downside of this is retrieving the duplicates: another iteration has to be performed over a very large map. An alternative is to maintain a second hashmap just for duplicates, duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. I hope one of you can help me with it.
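
    A single counting pass followed by a filter avoids both the second full iteration over the raw lists and the extra duplicates map. A minimal Java sketch, assuming the child lists are already in memory (names are illustrative):

        import java.util.*;
        import java.util.stream.Collectors;

        class DuplicateCounter {
            // Returns each value that occurs more than once across all lists, with its count
            static Map<String, Integer> findDuplicates(Collection<List<String>> codeLists) {
                Map<String, Integer> counts = new HashMap<>();
                for (List<String> codeList : codeLists) {
                    for (String value : codeList) {
                        counts.merge(value, 1, Integer::sum);   // insert with 1, or increment
                    }
                }
                // Since no list contains internal duplicates, count > 1 means cross-list repeats
                return counts.entrySet().stream()
                             .filter(e -> e.getValue() > 1)
                             .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
            }
        }

    At roughly 100 lists of 250 values each (about 25,000 strings), the counting pass is only a few tens of thousands of hash operations, so the final filter over the map is unlikely to be the bottleneck.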

  • Problem mapping data located in an NSMutableArray

    - by Graeme
    I have an NSMutableArray which contains some addresses that I need to map using Apple's MapKit SDK, but I can't seem to get it to load. NSLog keeps telling me that the data source is (null), and a 0x0 error displays when I attempt to print out the array. Any ideas? The data is parsed and stored by another class; perhaps I'm not linking it across properly? The data is originally gathered from an RSS feed, brought into the app with an Importer class, and then displayed in a table view. I want to be able to connect to that data from my mapping class, but am struggling to do so.

  • Pulling $_GET data and creating a multidimensional array using a loop

    - by Chris J
    I'm creating a checkout for customers, and the data about what's in their cart is being sent to a page (for just now) via $_GET. I want to extract that data and then populate a multidimensional array with it using a loop. Here's how I'm naming the data:

        $itemCount = $_GET['itemCount'];
        $i = 1;
        while ($i <= $itemCount) {
            ${'item_name_'.$i} = $_GET["item_name_{$i}"];
            ${'item_quantity_'.$i} = $_GET["item_quantity_{$i}"];
            ${'item_price_'.$i} = $_GET["item_price_{$i}"];
            //echo "<br />Name: " .${'item_name_'.$i}. " - Quantity: " .${'item_quantity_'.$i}. " - Price: ".${'item_price_'.$i};
            $i++;
        }

    From here I'd like to create a multidimensional array like this:

        Array (
            [Item_1] => Array ( [item_name] => Shoe [item_quantity] => 2 [item_price] => 40.00 )
            [Item_2] => Array ( [item_name] => Bag [item_quantity] => 1 [item_price] => 60.00 )
            [Item_3] => Array ( [item_name] => Parrot [item_quantity] => 4 [item_price] => 90.00 )
            ...
        )

    What I'd like to know is whether there is a way I can create this array in the existing while loop. I'm aware of being able to add data to an array (like $data[] = ...) after declaring an empty array, but the actual syntax eludes me. Maybe I'm completely off the right track and there is a better way of doing it? Thanks
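
    The question is PHP, but the pattern of building the nested entry inside the same loop is language-independent. For illustration only, a hedged Java sketch of the equivalent loop (the params map is a hypothetical stand-in for $_GET):

        import java.util.*;

        class CartReader {
            static Map<String, Map<String, String>> buildItems(Map<String, String> params) {
                int itemCount = Integer.parseInt(params.get("itemCount"));
                Map<String, Map<String, String>> items = new LinkedHashMap<>();
                for (int i = 1; i <= itemCount; i++) {
                    Map<String, String> item = new LinkedHashMap<>();
                    item.put("item_name", params.get("item_name_" + i));
                    item.put("item_quantity", params.get("item_quantity_" + i));
                    item.put("item_price", params.get("item_price_" + i));
                    items.put("Item_" + i, item);   // nested entry built inside the loop
                }
                return items;
            }
        }

    In PHP the same shape applies: assign the three values into a keyed sub-array inside the loop instead of creating dynamically named variables.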

  • What are the suggested alternatives for Class<T>.isAssignableFrom(Class<?> cls)?

    - by Wing C. Chen
    Currently I am profiling a piece of code. During the profiling, I discovered that one method call, Class<T>.isAssignableFrom(Class<?> cls), takes up quite a large share of the total time. Because this is a reflection method, it takes a lot of time compared to normal keywords or method calls. I am wondering if there are good alternatives to this method call?
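
    Whether two loaded classes are assignable never changes, so if the same pairs are tested repeatedly the result can be memoized. A hedged sketch, assuming all classes come from a single class loader (otherwise the name-based key could collide):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        final class AssignableCache {
            // Key is "supertypeName->subtypeName"; the answer is immutable for loaded classes
            private static final Map<String, Boolean> CACHE = new ConcurrentHashMap<>();

            static boolean isAssignable(Class<?> sup, Class<?> sub) {
                return CACHE.computeIfAbsent(sup.getName() + "->" + sub.getName(),
                        k -> sup.isAssignableFrom(sub));
            }
        }

    When an instance is available rather than just its Class, an instanceof check against a known supertype is usually the cheaper test, since the JIT can optimize it directly.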

  • Java program runs smoothly in Netbeans but slowly in Eclipse and as an executed jar. WTF?

    - by comp sci balla
    A Java program that does frequent Swing/AWT painting animation (but nothing more advanced than g.fillOval(...)) runs at a consistent 60fps in NetBeans, and at about 6fps when run in Eclipse or executed as a jar file from a Unix terminal. The program was developed in NetBeans and is a run-of-the-mill desktop application (not Web Start or a JApplet or ...). This is occurring on Ubuntu 10 with Java 1.6. How is this possible? The universe no longer makes sense to me.
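
    A tenfold Swing painting difference between launchers often comes down to the JVM flags each environment passes, in particular the Java2D pipeline properties on Linux. A hedged diagnostic sketch that prints what each environment actually launched with, so the three runs can be compared:

        import java.lang.management.ManagementFactory;

        public class JvmArgsDump {
            public static void main(String[] args) {
                // Flags the JVM was launched with (these differ between IDE launchers and plain java -jar)
                System.out.println("JVM args: "
                        + ManagementFactory.getRuntimeMXBean().getInputArguments());
                // Java2D pipeline toggles that commonly affect Swing paint speed on Linux
                System.out.println("sun.java2d.opengl = " + System.getProperty("sun.java2d.opengl"));
                System.out.println("sun.java2d.pmoffscreen = " + System.getProperty("sun.java2d.pmoffscreen"));
            }
        }

    If the fast run shows a property such as -Dsun.java2d.opengl=true that the slow runs lack, passing the same flag to the jar launch would be the first thing to try.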

  • How to interpret Google's "Avg. Page Load Time"?

    - by hawbsl
    Is there any industry rule of thumb for what's considered an unacceptable load time vs. an OK one vs. a blistering fast one? We're just reviewing some Google Analytics data and getting an Avg. Page Load Time of 0.74 (the metric is reported in seconds). I guess that's OK, but it would be good if some meatier comparison data were available: a blog post, or somewhere with some analysis of what speeds are generally being achieved by various kinds of sites. Any useful links to help someone interpret these speeds? If you Google it, you just get a lot of results about how to improve your speed. We're not at that stage yet.

  • Converting JSON data to an HTML table

    - by dnaluz
    I have an array of data in PHP and I need to display it in an HTML table. Here is what an example data set looks like:

        Array (
            Array (
                [comparisonFeatureId] => 1182
                [comparisonFeatureType] => Category
                [comparisonValues] => Array (
                    [0] => Not Available
                    [1] => Standard
                    [2] => Not Available
                    [3] => Not Available
                )
                [featureDescription] => Rear Seat Heat Ducts
            ),
        )

    The data set is a comparison of 3 items (shown in the comparisonValues index of the array). In the end I need the row to look similar to this:

        <tr class="alt2 section_1">
            <td><strong>$record['featureDescription']</strong></td>
            <td>$record['comparisonValues'][0]</td>
            <td>$record['comparisonValues'][1]</td>
            <td>$record['comparisonValues'][2]</td>
            <td>$record['comparisonValues'][3]</td>
        </tr>

    The problem I'm coming up against is how best to do this. Should I create the entire table HTML server-side, pass it over an AJAX call, and just dump the pre-rendered HTML into a div, or pass the JSON data and render the table client-side? Any elegant suggestions? Thanks in advance.

  • Better understanding of my SQL transactions

    - by Slew Poke
    I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that, between LINQ, persistence frameworks, and events, it just so turned out that a huge number of calls were being made without me being aware. Is there a recommended way to analyze individual transactions going to my SQL Server 2008 database, preferably with some integration into my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or the database.

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi All, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is to start a transaction with serializable isolation level and run:

        INSERT INTO dest_table SELECT data FROM orig_table, other-tables WHERE <condition>;
        DELETE FROM orig_table USING other-tables WHERE <condition>;
        COMMIT;

    Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
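
    Worth noting: PostgreSQL 9.1 and later support data-modifying common table expressions, which move the rows in a single statement so the condition is evaluated only once. A hedged sketch via JDBC; the table names are from the question, while the condition is a placeholder to be filled in:

        import java.sql.*;

        class MoveRows {
            // Moves matching rows in one statement: DELETE ... RETURNING feeds the INSERT
            static int move(Connection con) throws SQLException {
                String sql =
                    "WITH moved AS ("
                  + "  DELETE FROM orig_table USING other_tables"
                  + "  WHERE /* <condition> */ TRUE"
                  + "  RETURNING orig_table.data"
                  + ") "
                  + "INSERT INTO dest_table SELECT data FROM moved";
                try (Statement st = con.createStatement()) {
                    return st.executeUpdate(sql);
                }
            }
        }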

  • C++ code which is slower than its C equivalent?

    - by user997112
    Are there any aspects of the C++ programming language where the code is known to be slower than the equivalent C? Obviously this excludes the OO features like virtual functions and vtable lookups. I am wondering whether, when you are programming in a latency-critical area (and you aren't worried about OO features), you could stick with basic C++ or whether C would be better.

  • SQL Profiler showing high activity

    - by Wong Chi
    I am running my application locally -- i.e. no external traffic and a very low number of queries, fully under my control. I see tons of 'Audit Login' and 'Audit Logout' events. What are these, and where are they actually stored (i.e. where is this audit log)? Are these a hint of a problem with connections? I have only a simple connection string within my app and thought that connections would remain active throughout the operation of my app (i.e. a single login at launch, and then a single logout when terminating).

  • Sync data between a Windows desktop app and a Windows Mobile client app

    - by Chris W
    I need to knock up a very quick prototype/proof-of-concept application to demo to someone within the next couple of days, so I've minimal time to research this as fully as I normally would. The set-up is a very simple database application running on a laptop - there will only ever be a single user updating a couple of tables - so I was thinking of knocking up a basic WinForms app against SQL Compact. Visual Studio's auto-generated data grid edit screens will be fine with a little customisation. The second aspect is to then add a Windows Mobile client application that can pull data from both tables stored on the laptop, edit some data, and insert some extra rows before sending the changes back to the laptop copy of the database. I've not done any WinMo development, so what's the best approach for me to look at? Is it easy enough to sync data between the two databases when the WinMo device is connected to the laptop with USB? Most of the samples I've looked at so far seem to sync SQL Compact with full SQL Server using IIS, which seems a bit overkill. The volumes of data to be synced are so small that I can easily write some manual sync code, if it's easy to query/update the Compact DB from the laptop application while the device is connected.

  • SQL Profiler and Tuning Advisor for Reporting Services - what events should be selected?

    - by chris
    I've used SQL Profiler to generate a trace file, and the Tuning Advisor to take that trace file and provide some recommendations for db updates. However, when the workload comes from a Reporting Services server, Profiler doesn't seem to capture any of the queries. I'm logging the defaults (SQL:BatchCompleted and SQL:BatchStarting, RPC:Completed, and Sessions - Existing Connections). What events should I be capturing in SQL Profiler in order to run the Tuning Advisor?

  • MVC way of handling data input

    - by korki
    I have a data input module where I add the information for my product and its sub-information, like:

        product basic info
        product price info
        product price details

    Price info and price details are related to the product and are lists. In my Web Forms approach, I would store my main product object in the view state and populate its pricing info and details while doing AJAX postbacks. This way I can create a compact module that is very user-friendly in terms of defining a lot of data from one place, without the need to enter these data from separate modules. And when I am done, I would do one product.save() and that would persist all the data to the respective tables in the db. Now I am building a similar app on the .NET MVC framework and pondering what a good way of handling this on MVC would be. I don't lean towards storing all this on the client side until I click save, and saving to the db after each action makes me remember the days I was coding in classic ASP. Will appreciate your inputs on ways to approach this on the MVC framework.

  • SQL Server 2000: how to automate importing data from Excel

    - by Stan
    Say the source data comes in Excel format; below is how I import it:

    1. Convert to CSV format via MS Excel
    2. Roughly find bad rows/columns by inspecting
    3. Back up the table that needs to be updated, in SQL Query Analyzer
    4. Truncate the table (may need to drop the foreign key constraint as well)
    5. Import the data from the revised CSV file in SQL Server Enterprise Manager
    6. If there's an error like duplicate columns, check the original CSV and remove them

    I was wondering how to make this procedure more efficient at every step? I have some ideas, but they're not complete. For steps 2 & 6: scripts that check automatically and print out all the bad row/column data, so it's easier to remove all the errors at once. For steps 3 & 5: is there any way to update the table automatically without manually going through the import steps? Could the community advise, please? Thanks.
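
    For steps 3-5, one scriptable route on SQL Server 2000 is to drive the backup, truncate, and BULK INSERT from a small program instead of Enterprise Manager. A hedged JDBC sketch; the table name, backup name, and file path are illustrative, and it assumes the backup table does not already exist and that any foreign keys have been dealt with beforehand:

        import java.sql.*;

        class CsvReload {
            static void reload(Connection con) throws SQLException {
                con.setAutoCommit(false);
                try (Statement st = con.createStatement()) {
                    // SELECT INTO creates a fresh backup copy of the current rows
                    st.executeUpdate("SELECT * INTO dbo.MyTable_backup FROM dbo.MyTable");
                    st.executeUpdate("TRUNCATE TABLE dbo.MyTable");
                    // FIRSTROW = 2 skips the CSV header row
                    st.executeUpdate(
                        "BULK INSERT dbo.MyTable FROM 'C:\\data\\revised.csv' "
                      + "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', FIRSTROW = 2)");
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }

    Scheduled as a job, this makes the reload repeatable, and any BULK INSERT error aborts the transaction so the truncated table is not left empty.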

  • How to check whether the necessary database exists

    - by Dmitry
    Hello! I have a desktop program which uses an embedded database engine. The first time a user runs the program, it must create the database; on subsequent runs the database already exists and there is no need to create it. Please tell me how to check whether the necessary database exists.
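
    With embedded engines the usual pattern is to test for the database file before connecting, or simply to make the schema creation idempotent. A hedged sketch using SQLite over JDBC (the driver URL, file name, and table are illustrative; most embedded engines work similarly):

        import java.nio.file.*;
        import java.sql.*;

        class DbBootstrap {
            static Connection open(String path) throws SQLException {
                // The engine creates the file on first connect, so check before connecting
                boolean firstRun = !Files.exists(Path.of(path));
                Connection con = DriverManager.getConnection("jdbc:sqlite:" + path);
                if (firstRun) {
                    try (Statement st = con.createStatement()) {
                        // IF NOT EXISTS makes this safe to run on every start anyway
                        st.executeUpdate("CREATE TABLE IF NOT EXISTS app_data ("
                                       + "id INTEGER PRIMARY KEY, value TEXT)");
                    }
                }
                return con;
            }
        }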

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local data storage (an XML file) and SQL Server. A long time ago I created a typed dataset for my application to save data in the local XML file. Now I have a bool that switches between the server-based version and the local version; if true, my application gets the data from SQL Server. I'm not sure, but it seems that the SqlDataAdapter's Fill method can't fill the data into my existing schema:

        SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        cmd.CommandType = CommandType.Text;
        _sqlAdapter = new SqlDataAdapter(cmd);
        _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories");
        _sqlAdapter.Fill(Program.Dataset);

    This should fill my data from dbo.Categories into Categories (in my local typed dataset), but it doesn't: it creates a new table with the name "Table". It looks like it can't handle the existing schema, and I can't figure out where the problem is. By the way, the database query I'm running isn't very useful as written; it's just a simplified version for testing...

  • Faster code with another compiler

    - by Andrei
    I'm using the standard gcc compiler for mathematical software development in C. I don't know much about compilers or compiler options, and I was just wondering: is it possible to produce faster executables by using another compiler or by choosing better options? The default Makefile sets the options -ffast-math and -O3, and I think both of them have some impact on the overall calculation time. My software uses memory quite extensively, so I imagine some options related to memory management might do the trick? Any ideas?

  • Speed up :visible:input selector avoiding filter

    - by macca1
    I have a jQuery selector that is running way too slow on my unfortunately large page:

        $("#section").find(":visible:input").filter(":first").focus();

    Is there a quicker way to select the first visible input without having to find ALL the visible inputs and then filter THAT selection for the first? I want something like :visible:input:first, but that doesn't seem to work.

  • Which method of adding items to the ASP.NET Dictionary class is more efficient?

    - by ahmd0
    I'm converting a comma-separated list of strings into a dictionary using C# in ASP.NET (omitting any duplicates):

        string str = "1,2, 4, 2, 4, item 3,item2, item 3"; // Just a random string for the sake of this example

    and I was wondering which method is more efficient?

    1 - Using a try/catch block:

        Dictionary<string, string> dic = new Dictionary<string, string>();
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                try
                {
                    string s2 = s.Trim();
                    dic.Add(s2, s2);
                }
                catch { }
            }
        }

    2 - Or using the ContainsKey() method:

        Dictionary<string, string> dic = new Dictionary<string, string>();
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                string s2 = s.Trim();
                if (!dic.ContainsKey(s2))
                    dic.Add(s2, s2);
            }
        }
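
    The trade-off is the same in any dictionary API: a pre-check costs one extra hash lookup, while an exception on a duplicate generally costs orders of magnitude more, so the try/catch version only wins when duplicates are vanishingly rare. For illustration, a hedged Java analog in which a single call does the probe and the insert in one step:

        import java.util.*;

        class Dedupe {
            static Map<String, String> fromCsv(String str) {
                Map<String, String> map = new LinkedHashMap<>();
                for (String s : str.split(",")) {
                    String t = s.trim();
                    if (!t.isEmpty()) {
                        map.putIfAbsent(t, t);   // one lookup: inserts only if the key is absent
                    }
                }
                return map;
            }
        }

    In the C# versions above, the ContainsKey check plays the same role; the exception path is best reserved for genuinely exceptional cases.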

  • Extracting Data Daily from MySQL to a Local MySQL DB

    - by Sunny Juneja
    I'm doing some experiments locally that require some data from a production MySQL DB that I only have read access to. The schemas are nearly identical, with the exception of the omission of one column. My goal is to write a script that I can run every day that extracts the previous day's data and imports it into my local table. The part I'm most confused about is how to download the data. I've seen names like mysqldump tossed around, but that seems to be a way to replicate the entire database. I would love to avoid using PHP, seeing as I have no experience with it. I've been creating CSVs, but I'm worried about data integrity (what if there is a comma or a \n in a field?) as well as the size of the CSV (there are several hundred thousand rows per day).
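
    One way to avoid the CSV round-trip (and its escaping worries) entirely is to copy rows connection-to-connection. A hedged sketch of a daily job in Java over JDBC; the table, columns, and the created_at timestamp column are hypothetical stand-ins for the real schema:

        import java.sql.*;

        class DailyCopy {
            static void copyYesterday(Connection prod, Connection local) throws SQLException {
                String select = "SELECT id, payload, created_at FROM events "
                              + "WHERE created_at >= CURDATE() - INTERVAL 1 DAY "
                              + "AND created_at < CURDATE()";
                String insert = "INSERT INTO events_local (id, payload, created_at) "
                              + "VALUES (?, ?, ?)";
                local.setAutoCommit(false);
                try (Statement sel = prod.createStatement();
                     ResultSet rs = sel.executeQuery(select);
                     PreparedStatement ins = local.prepareStatement(insert)) {
                    int n = 0;
                    while (rs.next()) {
                        ins.setLong(1, rs.getLong("id"));
                        ins.setString(2, rs.getString("payload"));
                        ins.setTimestamp(3, rs.getTimestamp("created_at"));
                        ins.addBatch();
                        if (++n % 1000 == 0) ins.executeBatch();   // flush in chunks to bound memory
                    }
                    ins.executeBatch();
                    local.commit();
                }
            }
        }

    Scheduled via cron, this moves a few hundred thousand rows a day with no intermediate file; mysqldump with a --where clause is another option if a file-based hand-off is preferred.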
