Search Results

Search found 20224 results on 809 pages for 'query optimization'.

Page 183 of 809

  • How much faster is a database running in RAM?

    - by orokusaki
    I'm looking at running PostgreSQL in RAM for a performance boost. The database isn't more than 1GB and shouldn't ever grow beyond 5GB. Is it worth doing? Are there any benchmarks out there? Is it buggy? My second major concern is: how easy is it to back things up when the database lives purely in RAM? Is this just like using RAM as a tier-1 hard drive, or is it much more complicated?

    Read the article

  • negative values in integer programming model

    - by Lucia
    I'm new to the GLPK tool. After writing a model for an integer problem and running the solver (glpsol), I get negative values in a constraint that shouldn't be negative at all:

        No.  Row name   Activity   Lower bound   Upper bound
         8   act[1]            0                          -0
         9   act[2]           -3                          -0
        10   act[2]           -2                          -0

    That constraint is defined like this:

        act{j in J}: sum{i in I} d[i,j] <= y[j]*m;

    where the sets and variables used are:

        param m, integer, 0;
        param n, integer, 0;
        set I := 1..m;
        set J := 1..n;
        var y{j in J}, binary;

    Since the upper bound is negative, I think the problem may be in the y[j]*m part on the right side of the inequality. Perhaps something to do with multiplying by a binary variable, or the j on that side of the constraint being undefined? I don't know. I would be greatly grateful if someone can help me with this, and please excuse my English. Thanks in advance!

    Read the article

  • SQL query to select distinct records with 2 or more repetitions in another field

    - by kyohiros
    So I have this table of book orders. It contains 2 columns: one is the order ID (primary key) and the other is the ID of the book that the customer ordered. For example:

        | OrderID | BookID |
        | 0001    | B002   |
        | 0002    | B005   |
        | 0003    | B002   |
        | 0004    | B003   |
        | 0005    | B005   |
        | 0006    | B002   |
        | 0007    | B002   |

    What I want is to get the IDs of the books that have 2 or more purchases/orders. For example, if I run the SQL query against the above data, I would get this as the result:

        | BookID |
        | B002   |
        | B005   |

    I don't know if this can be achieved in SQL or whether I have to build a simpler statement and repeatedly run the query against all the records in another language. I need some help, and thanks for reading my question.
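
    A grouped aggregate does this in a single statement; a minimal sketch, assuming the table is called BookOrders (the real table name isn't given in the question):

        SELECT BookID
        FROM BookOrders          -- placeholder table name
        GROUP BY BookID
        HAVING COUNT(*) >= 2;

    HAVING filters the groups after aggregation, so only books that appear on two or more orders survive.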

    Read the article

  • How can I improve the below query?

    - by Newbie
    I have the following input.

        INPUT: TableA

        ID   Sentences
        ---  ----------
        1    I am a student
        2    Have a nice time guys!

    What I need to do is extract the words from the sentence(s) and insert each individual word into another table.

        OUTPUT:

        SentenceID   WordOccurance   Word
        ----------   -------------   -----
        1            1               I
        1            2               am
        1            3               a
        1            4               student
        2            1               Have
        2            2               a
        2            3               nice
        2            4               time
        2            5               guys!

    I was able to get the answer by using the query below:

        ;With numCTE As
        (
            Select rn = 1
            Union all
            Select rn+1 from numCTE where rn<1000
        )
        select SentenceID = id,
               WordOccurance = row_number() over (partition by TableA.ID order by rn),
               Word = substring(' '+sentences+' ', rn+1, charindex(' ',' '+sentences+' ', rn+1)-rn-1)
        from TableA
        join numCTE on rn <= len(' '+sentences+' ')
        where substring(' '+sentences+' ', rn,1) = ' '
        order by id, rn

    How can I improve this query? Basically I am looking for a better solution than the one presented. Thanks.
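
    One common improvement is to swap the recursive CTE for a persisted numbers (tally) table, which removes the recursion cost and the 1000-row cap; a sketch, assuming a pre-filled table dbo.Numbers(n) covering 1 up to the longest sentence length (the table name is a placeholder):

        select SentenceID    = t.ID,
               WordOccurance = row_number() over (partition by t.ID order by n.n),
               Word          = substring(' ' + t.sentences + ' ', n.n + 1,
                                         charindex(' ', ' ' + t.sentences + ' ', n.n + 1) - n.n - 1)
        from TableA t
        join dbo.Numbers n on n.n <= len(' ' + t.sentences + ' ')
        where substring(' ' + t.sentences + ' ', n.n, 1) = ' '
        order by t.ID, n.n;

    The splitting logic is unchanged; only the source of the sequence numbers differs.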

    Read the article

  • Initializing a C++ vector to random values... fast

    - by Flamewires
    Hey, I'd like to make this as fast as possible because it gets called a LOT in a program I'm writing. Is there any faster way to initialize a C++ vector to random values than this?

        double range; // set to the range of a particular function I want to evaluate
        std::vector<double> x(30, 0.0);
        for (int i = 0; i < x.size(); i++)
        {
            x.at(i) = (rand() / (double)RAND_MAX) * range;
        }

    EDIT: Fixed x's initializer.
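
    A sketch of one common alternative, using the <random> facilities and std::generate instead of rand() and at(); whether it is actually faster in this program would need to be measured:

        #include <algorithm>
        #include <random>
        #include <vector>

        std::vector<double> random_vector(std::size_t n, double range)
        {
            static std::mt19937 gen{std::random_device{}()};          // engine built once, reused
            std::uniform_real_distribution<double> dist(0.0, range);  // uniform in [0, range)
            std::vector<double> x(n);
            std::generate(x.begin(), x.end(), [&] { return dist(gen); });
            return x;
        }

    Avoiding at() (which bounds-checks every access) and reusing a single engine are the main differences from the original loop.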

    Read the article

  • mysql query to concat information from 3 tables - getting incorrect result count

    - by iPfaffy
    I have 3 tables in my database:

        ab_contacts:     id, first_name, last_name, addressbook_id
        ab_addressbooks: name, id
        co_comments:     id, link_id, comment

    I'd like to create a query that will let me select all the contacts, and the comments related to them, in a given addressbook. To select all the people in a given addressbook, I can use:

        select count(*) from ab_contacts where addressbook_id = '50';

    This returns 8152 people. However, when I run my query:

        select ab_contacts.first_name, ab_contacts.last_name, ab_contacts.email,
               ab_addressbooks.name, co_comments.comments
        from ab_contacts
        JOIN ab_addressbooks ON (ab_contacts.addressbook_id = ab_addressbooks.id)
        JOIN co_comments ON (ab_contacts.id = co_comments.link_id)
        WHERE ab_contacts.addressbook_id = '50';

    the format works, but I only get 1045 results. I'm sure there is something I am missing, but I cannot figure it out. Any help would be greatly appreciated.
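
    The inner join to co_comments silently drops every contact that has no comment row, which is the usual reason a count shrinks like this. A sketch of the same query with an outer join, keeping the column names listed in the schema above:

        SELECT c.first_name, c.last_name, b.name, cm.comment
        FROM ab_contacts c
        JOIN ab_addressbooks b ON c.addressbook_id = b.id
        LEFT JOIN co_comments cm ON c.id = cm.link_id
        WHERE c.addressbook_id = '50';

    Contacts without comments then come back with NULL in the comment column; note that a contact with several comments still appears once per comment, so the row count can also exceed 8152.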

    Read the article

  • How does a computer multiply 2 numbers?

    - by ckv
    How does a computer perform a multiplication of 2 numbers, say 100 * 55? My guess was that the computer does repeated addition to achieve multiplication. Of course that could be the case for integers; however, for floating-point numbers there must be some other logic. Note: this was asked in an interview.
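
    For integers, hardware is much closer to shift-and-add over the binary digits than to repeated addition; a minimal sketch of the idea in C++ (an illustration of the algorithm, not of how a real ALU is wired):

        unsigned multiply(unsigned a, unsigned b)
        {
            unsigned result = 0;
            while (b != 0)
            {
                if (b & 1)          // low bit of b set: add the current shifted copy of a
                    result += a;
                a <<= 1;            // next power of two
                b >>= 1;            // move on to the next bit of b
            }
            return result;
        }

    So 100 * 55 costs on the order of the number of bits in the operands, not 55 additions. Floating-point units multiply the significands in much the same way and add the exponents.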

    Read the article

  • Creating objects makes the VM faster?

    - by Sudhir Jonathan
    Look at this piece of code:

        MessageParser parser = new MessageParser();
        for (int i = 0; i < 10000; i++) {
            parser.parse(plainMessage, user);
        }

    For some reason, it runs SLOWER (by about 100ms) than:

        for (int i = 0; i < 10000; i++) {
            MessageParser parser = new MessageParser();
            parser.parse(plainMessage, user);
        }

    Any ideas why? The tests were repeated a lot of times, so it wasn't just random. How could creating an object 10000 times be faster than creating it once?

    Read the article

  • How to make an if-elif-else statement in Python more space-saving?

    - by Neverland
    I have a lot of if-elif-else statements in my code:

        if message == '0' or message == '3' or message == '5' or message == '7':
            ...
        elif message == '1' or message == '2' or message == '4' or message == '6' or message == '8':
            ...
        else:
            ...

    Is it possible to format this in a more space-saving way? I tried it this way:

        if message == '0' or '3' or '5' or '7':
            ...
        elif message == '1' or '2' or '4' or '6' or '8':
            ...
        else:
            ...

    But without success.
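
    The attempted shorthand fails because message == '0' or '3' groups as (message == '0') or '3', and a non-empty string like '3' is always truthy, so the first branch always wins. The usual compact form tests membership instead; a sketch:

        if message in ('0', '3', '5', '7'):
            ...
        elif message in ('1', '2', '4', '6', '8'):
            ...
        else:
            ...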

    Read the article

  • Removing groups of similar records in MySQL query

    - by user1182155
    I'm trying to wrap my head around this (it may be simple, it's been a long day!). I have a database that sometimes holds multiple similar records, i.e.:

        Apples   2008-09-03
        Apples   2012-01-01
        Apples   2013-10-24
        Oranges  2012-01-04

    What I need is a query that shows only records that haven't been updated today. So in this case, since Apples has an entry that was updated today, none of the Apples records should appear in the results; Oranges should be the only record it returns. I have a query similar to this:

        SELECT fruit FROM fruitnames WHERE date < CURDATE()

    which removes the record that was updated today, but it keeps the other records for Apples (obviously). How would I remove those results as well?
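
    One way is to exclude every fruit that has any row dated today, for example with a NOT IN subquery; a sketch against the column names used above:

        SELECT DISTINCT fruit
        FROM fruitnames
        WHERE fruit NOT IN (
            SELECT fruit FROM fruitnames WHERE date >= CURDATE()
        );

    The subquery collects the fruits touched today and the outer query returns everything else; GROUP BY fruit HAVING MAX(date) < CURDATE() is an equivalent formulation.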

    Read the article

  • Where does the compiler store methods for C++ classes?

    - by Mashmagar
    This is more a curiosity than anything else... Suppose I have a C++ class Kitty as follows:

        class Kitty
        {
            void Meow()
            {
                // Do stuff
            }
        };

    Does the compiler place the code for Meow() in every instance of Kitty? Obviously repeating the same code everywhere requires more memory. But on the other hand, branching to a relative location in nearby memory requires fewer assembly instructions than branching to an absolute location in memory on modern processors, so this is potentially faster. I suppose this is an implementation detail, so different compilers may behave differently. Keep in mind, I'm not considering static or virtual methods here.

    Read the article

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have 3 tables (actually I have 2 at the moment, but this example might illustrate the thought better):

        [Person]
            ID: int, primary key
            Name: nvarchar(xx)

        [Group]
            ID: int, primary key
            Name: nvarchar(xx)

        [Role]
            ID: int, primary key
            Name: nvarchar(xx)

        [PersonGroupRole]
            Person_ID: int, PRIMARY COMPOSITE OR NOT?
            Group_ID: int, PRIMARY COMPOSITE OR NOT?
            Role_ID: int, PRIMARY COMPOSITE OR NOT?

    Should any one of the 3 IDs in the relation PersonGroupRole be marked as the PRIMARY key, or should all 3 be combined into one composite key? What's the real benefit of doing it or not? I can join either way as far as I know, so Person JOIN PersonGroupRole JOIN Group gives me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any language- or SQL-related reasons that might make the choice clearer, that's the platform I'm asking about. Looking forward to seeing what answers pop up, as I have thought about these primary keys/indexes many times when making combined ones.
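
    A common choice is a composite primary key over all three foreign keys, which also enforces that the same (person, group, role) combination can exist only once; a sketch of the DDL in T-SQL, using the names above:

        CREATE TABLE PersonGroupRole (
            Person_ID int NOT NULL REFERENCES Person(ID),
            Group_ID  int NOT NULL REFERENCES [Group](ID),
            Role_ID   int NOT NULL REFERENCES Role(ID),
            PRIMARY KEY (Person_ID, Group_ID, Role_ID)
        );

    The alternative (a surrogate ID column plus a unique constraint over the three foreign keys) joins exactly the same way; the usual argument for it is convenience in ORM tooling that prefers single-column keys.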

    Read the article

  • SQL server 2005 query not running

    - by Aayushi
    Hi. Before posting this question I tried many things, but none of them helped. I want to rename a column of a table in SQL Server 2005, and I have run the following queries against it:

    1) Renaming the column:

        ALTER TABLE Details RENAME COLUMN AccountID TO UID;

    This gives me the error: Incorrect syntax near the keyword 'COLUMN'.

    2) I added one new column to the table with:

        ALTER TABLE Details ADD BID uniqueidentifier;

    and now I want to set that column to NOT NULL. How can I do that? Thanks in advance. AS
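
    SQL Server doesn't accept ALTER TABLE ... RENAME COLUMN (that syntax belongs to MySQL/Oracle); renaming is done with sp_rename, and NOT NULL is applied with ALTER COLUMN once no NULLs remain. A sketch (the backfill value is only a placeholder):

        EXEC sp_rename 'Details.AccountID', 'UID', 'COLUMN';

        UPDATE Details SET BID = NEWID() WHERE BID IS NULL;   -- backfill existing rows first
        ALTER TABLE Details ALTER COLUMN BID uniqueidentifier NOT NULL;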

    Read the article

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages, and a method that returns today's exchange rates in an array, which I then display in a table.

        // --- en/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        // --- fr/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        ...
        // somewhere in the page
        <td><?php echo $currencies["eur"]["today"]; ?></td>

    So every time I load en/, fr/, or another language, I request the exchange rates from an external site. Can I optimize this behaviour so the rates are read only once per day or per session? Maybe by storing a global variable and checking the update date?
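
    A small file cache inside ReadExchangeRates() is enough to keep the remote request to once a day, whichever page or language triggers the call; a sketch (the cache path, TTL, and FetchRatesFromRemoteSite() are placeholders for whatever the existing function already does):

        <?php
        function ReadExchangeRates() {
            $cacheFile = __DIR__ . '/exchangeRates.cache';   // placeholder location
            $ttl = 86400;                                    // refresh at most once a day

            if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
                return unserialize(file_get_contents($cacheFile));
            }

            $rates = FetchRatesFromRemoteSite();             // hypothetical: the current remote call
            file_put_contents($cacheFile, serialize($rates));
            return $rates;
        }

    Every page keeps calling ReadExchangeRates() exactly as before; only the first call after the cache file expires actually contacts the external site.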

    Read the article

  • Query to update rowNum

    - by BrokeMyLegBiking
    Can anyone help me write this query more efficiently? I have a table that captures TCP traffic, and I'd like to update a column called RowNumInFlow, which is simply the sequential number of the IP packet within its flow. The code below works fine, but it is slow.

        declare @FlowID int
        declare @LastRowNumInFlow int
        declare @counter1 int
        set @counter1 = 0

        while (@counter1 < 1)
        BEGIN
            set @counter1 = @counter1 + 1

            -- 1)
            select top 1 @FlowID = t.FlowID
            from Traffic t
            where t.RowNumInFlow is null

            if (@FlowID is null) break

            -- 2)
            set @LastRowNumInFlow = null
            select top 1 @LastRowNumInFlow = RowNumInFlow
            from Traffic
            where FlowID = @FlowID and RowNumInFlow is not null
            order by ID desc

            if @LastRowNumInFlow is null
                set @LastRowNumInFlow = 1
            else
                set @LastRowNumInFlow = @LastRowNumInFlow + 1

            update Traffic
            set RowNumInFlow = @LastRowNumInFlow
            where ID = (select top 1 ID from Traffic
                        where flowid = @FlowID and RowNumInFlow is null)
        END

    Example table values after the query has run:

        ID      FlowID  RowNumInFlow
        448923  44      1
        448924  44      2
        448988  44      3
        448989  44      4
        448990  44      5
        448991  44      6
        448992  44      7
        448993  44      8
        448995  44      9
        448996  44      10
        449065  44      11
        449063  45      1
        449170  45      2
        449171  45      3
        449172  45      4
        449187  45      5
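
    On SQL Server 2005 and later, the whole loop can usually be replaced by one set-based update driven by ROW_NUMBER(); a sketch against the columns shown above:

        ;WITH numbered AS (
            SELECT RowNumInFlow,
                   ROW_NUMBER() OVER (PARTITION BY FlowID ORDER BY ID) AS rn
            FROM Traffic
        )
        UPDATE numbered
        SET RowNumInFlow = rn;

    Updating through the CTE is allowed here because it selects from a single table; every packet gets its per-flow sequence number in one statement instead of one update per row.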

    Read the article

  • Help with optimising SQL query

    - by user566013
    Hi, I need some help with this problem. I am working on a web application, and for the database I am using SQLite. Can someone help me with one query that needs to be optimized (i.e. fast)? I have table x:

        ID | ID_DISH | ID_INGREDIENT
        1  | 1       | 2
        2  | 1       | 3
        3  | 1       | 8
        4  | 1       | 12
        5  | 2       | 13
        6  | 2       | 5
        7  | 2       | 3
        8  | 3       | 5
        9  | 3       | 8
        10 | 3       | 2
        ....

    ID_DISH is the id of a dish and ID_INGREDIENT is an ingredient that the dish is made of, so in my case the dish with id 1 is made with the ingredients with ids 2, 3, and so on. The table has more than 15000 rows. My question: I need a query that returns the ids of dishes ordered ASC by the count of ingredients I have not yet added in my algorithm. Example: foo(2, 4) should return rows in this order:

        ID_DISH | count(stillMissing)
        10      | 2
        1       | 3

    The dish with id 10 contains ingredients 2 and 4 and is still missing 2 more, while dish 1 is missing 3.
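
    A single grouped query can count, per dish, how many of its ingredients are not in the list the caller already has; a sketch for SQLite, with (2, 4) standing in for the ingredient ids passed to foo:

        SELECT ID_DISH,
               SUM(CASE WHEN ID_INGREDIENT NOT IN (2, 4) THEN 1 ELSE 0 END) AS stillMissing
        FROM x
        GROUP BY ID_DISH
        ORDER BY stillMissing ASC;

    An index on (ID_DISH, ID_INGREDIENT) keeps this cheap at 15000 rows; add a HAVING clause if dishes containing none of the supplied ingredients should be filtered out.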

    Read the article

  • I'm doing a lot of lists and dictionary sorting... and this is causing memory errors in my Python website

    - by alex
    I retrieved data from the log table in my database. Then I started finding unique users, comparing/sorting lists, etc. In the end I got down to this:

        stats = {'2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465},
                 '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659},
                 '2010-03-17': {'date': '2010-03-17', 'unique_users': 379, 'queries': 1845},
                 '2010-03-16': {'date': '2010-03-16', 'unique_users': 434, 'queries': 2336},
                 '2010-03-15': {'date': '2010-03-15', 'unique_users': 390, 'queries': 2138},
                 '2010-03-14': {'date': '2010-03-14', 'unique_users': 460, 'queries': 2221},
                 '2010-03-13': {'date': '2010-03-13', 'unique_users': 507, 'queries': 2242},
                 '2010-03-12': {'date': '2010-03-12', 'unique_users': 629, 'queries': 3523},
                 '2010-03-11': {'date': '2010-03-11', 'unique_users': 811, 'queries': 4274},
                 '2010-03-10': {'date': '2010-03-10', 'unique_users': 171, 'queries': 1297},
                 '2010-03-26': {'date': '2010-03-26', 'unique_users': 299, 'queries': 1617},
                 '2010-03-27': {'date': '2010-03-27', 'unique_users': 323, 'queries': 1310},
                 '2010-03-24': {'date': '2010-03-24', 'unique_users': 352, 'queries': 2112},
                 '2010-03-25': {'date': '2010-03-25', 'unique_users': 330, 'queries': 1290},
                 '2010-03-22': {'date': '2010-03-22', 'unique_users': 329, 'queries': 1798},
                 '2010-03-23': {'date': '2010-03-23', 'unique_users': 329, 'queries': 1857},
                 '2010-03-20': {'date': '2010-03-20', 'unique_users': 368, 'queries': 1693},
                 '2010-03-21': {'date': '2010-03-21', 'unique_users': 329, 'queries': 1511},
                 '2010-03-29': {'date': '2010-03-29', 'unique_users': 325, 'queries': 1718},
                 '2010-03-28': {'date': '2010-03-28', 'unique_users': 340, 'queries': 1815},
                 '2010-03-30': {'date': '2010-03-30', 'unique_users': 329, 'queries': 1891}}

    It's not a big dictionary. But when I try to do one last thing... it craps out on me:

        for k, v in stats:
            mylist.append(v)

        too many values to unpack

    What the heck does that mean??? TOO MANY VALUES TO UNPACK.
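
    It means the loop is iterating over the dictionary's keys: each key is a 10-character date string, and Python tries to unpack those 10 characters into the two names k and v. Iterating over the key/value pairs fixes it; a sketch:

        mylist = []
        for k, v in stats.items():   # .items() yields (key, value) pairs
            mylist.append(v)

    (mylist = list(stats.values()) does the same thing in one line.)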

    Read the article

  • Calculating Growth-Rates by applying log-differences

    - by mropa
    I am trying to transform my data.frame by calculating the log-differences of each column while controlling for the row id. So basically I'd like to calculate the growth rates of each id's variables. Here is a random df with an id column, a time period column p, and three variable columns:

        df <- data.frame (id = c("a","a","a","c","c","d","d","d","d","d"),
                          p = c(1,2,3,1,2,1,2,3,4,5),
                          var1 = rnorm(10, 5),
                          var2 = rnorm(10, 5),
                          var3 = rnorm(10, 5))

        df
           id p     var1     var2     var3
        1   a 1 5.375797 4.110324 5.773473
        2   a 2 4.574700 6.541862 6.116153
        3   a 3 3.029428 4.931924 5.631847
        4   c 1 5.375855 4.181034 5.756510
        5   c 2 5.067131 6.053009 6.746442
        6   d 1 3.846438 4.515268 6.920389
        7   d 2 4.910792 5.525340 4.625942
        8   d 3 6.410238 5.138040 7.404533
        9   d 4 4.637469 3.522542 3.661668
        10  d 5 5.519138 4.599829 5.566892

    Now I have written a function which does exactly what I want, BUT I had to take a detour which is possibly unnecessary and could be removed. However, somehow I am not able to locate the shortcut. Here is the function and the output for the posted data frame:

        fct.logDiff <- function (df) {
            df.log <- dlply (df, "id", function(x) data.frame (p = x$p, log(x[, -c(1,2)])))
            list.nalog <- llply (df.log, function(x) data.frame (p = x$p, rbind(NA, sapply(x[,-1], diff))))
            ldply (list.nalog, data.frame)
        }

        fct.logDiff(df)
           id p        var1        var2        var3
        1   a 1          NA          NA          NA
        2   a 2 -0.16136569  0.46472004  0.05765945
        3   a 3 -0.41216720 -0.28249264 -0.08249587
        4   c 1          NA          NA          NA
        5   c 2 -0.05914281  0.36999681  0.15868378
        6   d 1          NA          NA          NA
        7   d 2  0.24428771  0.20188025 -0.40279188
        8   d 3  0.26646102 -0.07267311  0.47041227
        9   d 4 -0.32372771 -0.37748866 -0.70417351
        10  d 5  0.17405309  0.26683625  0.41891802

    The trouble comes from the added NA rows. I don't want to collapse the frame and reduce it, which the diff() function would do automatically: I have 10 rows in my original frame and want to keep the same number of rows after the transformation, so I had to add some NAs. To keep the length I took a detour, transforming the data.frame into a list, adding the NAs, and then transforming the list back into a data.frame. That looks tedious. Any ideas to avoid the data.frame-list-data.frame transformation and streamline the function?
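
    A sketch of a more direct route in base R, computing the within-id lagged log-difference with ave() so no list detour is needed (this assumes, like the original, that rows are already ordered by p within each id):

        fct.logDiff2 <- function(df) {
            df[-(1:2)] <- lapply(df[-(1:2)], function(v)
                ave(log(v), df$id, FUN = function(x) c(NA, diff(x))))
            df
        }

    ave() splits each column by id, applies the NA-padded diff per group, and puts the results back in the original row order, so the frame keeps all 10 rows.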

    Read the article

  • Oracle SQL: Query results from previous X ISO weeks (where X might be > 52)

    - by tommy-o-dell
    How could I adapt this query to show the previous 61 weeks (still excluding the current week)? My query currently shows me the total weekly sales for 2010, grouped by ISO week and ISO year and excluding the current week:

        select to_char(order_date,'IYYY') as iso_year,
               to_char(order_date,'IW') as iso_week,
               sum(sale_amount)
        from orders
        where to_char(order_date,'IW') <> to_char(SYSDATE,'IW')
          and to_char(order_date,'IYYY') = 2010
        group by to_char(order_date,'IYYY'), to_char(order_date,'IW')

    I realize I could probably just omit the "2010" requirement, order by descending week, and limit the results to a certain number of rows, but that just doesn't seem right! I'd much appreciate any help pointing me in the right direction!
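
    One common approach is to filter on the date itself rather than on year/week strings, using TRUNC(date, 'IW'), which returns the Monday of that date's ISO week; that way "the previous 61 weeks" works across year boundaries. A sketch:

        select to_char(order_date,'IYYY') as iso_year,
               to_char(order_date,'IW')   as iso_week,
               sum(sale_amount)           as weekly_sales
        from orders
        where order_date >= trunc(sysdate,'IW') - (61 * 7)   -- start of the week 61 weeks back
          and order_date <  trunc(sysdate,'IW')              -- everything before the current week
        group by to_char(order_date,'IYYY'), to_char(order_date,'IW')
        order by iso_year, iso_week;

    Comparing raw dates also leaves any index on order_date usable, which the to_char() filters do not.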

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. Ideally, this needs to happen for all records every three hours. Each new user to the site will add between 50 and 500 records that need to be processed in this fashion, so the number will not be steady. The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing this many records may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has experience or tips on similar subjects. I've never worked at this magnitude before, and for all I know this will be trivial for the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period begins, I don't care whether they're processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering whether I should process in batches every 5 minutes, 15 minutes, or hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users).

    Read the article

  • Thin down jQuery

    - by Taylor Satula
    Hi, I have been optimizing my website, but the one problem that stands in my way is all the jQuery functions that I do not use. The only ones I use are for a smooth page scroller, so the rest just seems like a waste of download time. My question is: is there any script or program that will remove the jQuery code I do not need and leave the 1 or 2 functions that I do need?

    Read the article

  • How to avoid temporaries when copying a weakly typed object

    - by Truncheon
    Hi. I'm writing a series of classes that inherit from a base class using virtual functions. They are INT, FLOAT and STRING objects that I want to use in a scripting language. I'm trying to implement weak typing, but I don't want STRING objects to return copies of themselves when used in the following way (instead I would prefer a reference to be returned, which can be used in copying):

        a = "hello ";
        b = "world";
        c = a + b;

    I have written the following code as a mock example:

        #include <iostream>
        #include <string>
        #include <cstdio>
        #include <cstdlib>

        std::string dummy("<int object cannot return string reference>");

        struct BaseImpl
        {
            virtual bool is_string() = 0;
            virtual int get_int() = 0;
            virtual std::string get_string_copy() = 0;
            virtual std::string const& get_string_ref() = 0;
        };

        struct INT : BaseImpl
        {
            int value;

            INT(int i = 0) : value(i) { std::cout << "constructor called\n"; }
            INT(BaseImpl& that) : value(that.get_int()) { std::cout << "copy constructor called\n"; }

            bool is_string() { return false; }
            int get_int() { return value; }
            std::string get_string_copy() { char buf[33]; sprintf(buf, "%i", value); return buf; }
            std::string const& get_string_ref() { return dummy; }
        };

        struct STRING : BaseImpl
        {
            std::string value;

            STRING(std::string s = "") : value(s) { std::cout << "constructor called\n"; }
            STRING(BaseImpl& that)
            {
                if (that.is_string())
                    value = that.get_string_ref();
                else
                    value = that.get_string_copy();
                std::cout << "copy constructor called\n";
            }

            bool is_string() { return true; }
            int get_int() { return atoi(value.c_str()); }
            std::string get_string_copy() { return value; }
            std::string const& get_string_ref() { return value; }
        };

        struct Base
        {
            BaseImpl* impl;
            Base(BaseImpl* p = 0) : impl(p) {}
            ~Base() { delete impl; }
        };

        int main()
        {
            Base b1(new INT(1));
            Base b2(new STRING("Hello world"));
            Base b3(new INT(*b1.impl));
            Base b4(new STRING(*b2.impl));

            std::cout << "\n";
            std::cout << b1.impl->get_int() << "\n";
            std::cout << b2.impl->get_int() << "\n";
            std::cout << b3.impl->get_int() << "\n";
            std::cout << b4.impl->get_int() << "\n";

            std::cout << "\n";
            std::cout << b1.impl->get_string_ref() << "\n";
            std::cout << b2.impl->get_string_ref() << "\n";
            std::cout << b3.impl->get_string_ref() << "\n";
            std::cout << b4.impl->get_string_ref() << "\n";

            std::cout << "\n";
            std::cout << b1.impl->get_string_copy() << "\n";
            std::cout << b2.impl->get_string_copy() << "\n";
            std::cout << b3.impl->get_string_copy() << "\n";
            std::cout << b4.impl->get_string_copy() << "\n";

            return 0;
        }

    It was necessary to add an if check in the STRING constructor to determine whether it's safe to request a reference instead of a copy:

    Script code:

        a = "test";
        b = a;
        c = 1;
        d = "" + c;   /* not safe to request reference by standard */

    C++ code:

        STRING(BaseImpl& that)
        {
            if (that.is_string())
                value = that.get_string_ref();
            else
                value = that.get_string_copy();
            std::cout << "copy constructor called\n";
        }

    I was hoping there's a way of moving that if check to compile time, rather than run time.
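
    One way to get the decision made at compile time, wherever the static type is known, is to rely on overload resolution: give STRING a constructor overload taking STRING const& alongside the BaseImpl& one, so the reference path is chosen without any virtual call whenever the compiler can see it is copying a STRING, and the is_string() check only remains on the truly dynamic path. A simplified, self-contained sketch of the idea (class names are illustrative, not the ones above):

        #include <iostream>
        #include <string>

        struct Cell
        {
            virtual std::string text() const { return "cell"; }
            virtual ~Cell() {}
        };

        struct Str : Cell
        {
            std::string value;

            explicit Str(std::string s) : value(s) {}

            // chosen by overload resolution when the argument is statically a Str
            Str(Str const& that) : value(that.value) { std::cout << "static copy\n"; }

            // fallback when only a Cell& is known at compile time
            Str(Cell const& that) : value(that.text()) { std::cout << "dynamic copy\n"; }

            std::string text() const { return value; }
        };

        int main()
        {
            Str a("hello");
            Str b(a);          // static type Str  -> direct copy, no virtual dispatch
            Cell& dyn = a;
            Str c(dyn);        // static type Cell -> dynamic path still needed
            std::cout << b.text() << " " << c.text() << "\n";
        }

    When the interpreter only ever holds BaseImpl pointers, the static type is never known and the run-time check cannot be avoided; in that case the existing if is already about as cheap as it gets.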

    Read the article

  • C++ DWORD* to BYTE*

    - by NomeSkavinski
    My issue: I am trying to convert an array of dynamically allocated memory of type DWORD to BYTE. Fair enough, I can loop through it and convert each DWORD into a BYTE per entry. But is there a faster way to do this, taking a pointer to DWORD data and converting the whole block into a pointer to BYTE data, such as using a memcpy operation? I feel this is not possible. I'm not requesting an answer, just an experienced opinion on my approach, as I have tried testing both approaches but can't seem to get the second one working. Thanks for any input; again, no answers needed, just a point in the right direction. Nor is this a homework question, I felt that had to be mentioned.
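
    If "convert" means narrowing each 32-bit value to its low 8 bits, the elements genuinely have to be visited one by one: memcpy only copies raw bytes, so it would hand back 4 bytes per DWORD rather than one converted value. A sketch of the per-element version with std::transform, assuming the Win32 DWORD/BYTE typedefs:

        #include <algorithm>
        #include <cstddef>
        #include <windows.h>   // DWORD, BYTE

        // hypothetical helper: narrow each DWORD to a BYTE
        void dwords_to_bytes(const DWORD* src, std::size_t count, BYTE* dst)
        {
            std::transform(src, src + count, dst,
                           [](DWORD d) { return static_cast<BYTE>(d); });
        }

    If, instead, the goal is only to view the same buffer as raw bytes (no narrowing), a reinterpret_cast<BYTE*>(ptr) with a length of count * sizeof(DWORD) bytes is enough and no copy is needed at all.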

    Read the article
