Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.


  • Is opening too many datacontexts bad?

    - by ryudice
    I've been checking my application with the LINQ to SQL profiler, and I noticed that it opens a lot of DataContexts. Most of them are opened by the LinqDataSource I use, since my repositories use only the instance stored in Request.Items. Is it bad to open too many DataContexts? And how can I make my LinqDataSource use the DataContext that I store in Request.Items for the duration of the request? Thanks for any help!
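    A possible wiring for the second half of the question (a sketch; MyDataContext and the "DataContext" key are assumptions standing in for whatever the repositories actually store in Request.Items): LinqDataSource exposes ContextCreating and ContextDisposing events precisely so you can hand it an existing instance instead of letting it construct and dispose its own.

        // Code-behind sketch. Assumes the repository layer stores a
        // MyDataContext under HttpContext.Current.Items["DataContext"].
        protected void LinqDataSource1_ContextCreating(object sender,
            LinqDataSourceContextEventArgs e)
        {
            // Reuse the shared per-request instance.
            e.ObjectInstance = (MyDataContext)HttpContext.Current.Items["DataContext"];
        }

        protected void LinqDataSource1_ContextDisposing(object sender,
            LinqDataSourceDisposeEventArgs e)
        {
            // Don't let the data source dispose the shared context;
            // dispose it yourself at the end of the request instead.
            e.Cancel = true;
        }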


  • How can I optimize this subqueried and joined MySQL query?

    - by kevzettler
    I'm pretty green on MySQL and I need some tips on cleaning up a query that is used in several variations throughout a site. It's got some subqueries, derived tables, and fun going on. Here's the query:

        # Query_time: 2  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
        SELECT *
        FROM (
            SELECT products.*,
                   categories.category_name AS category,
                   (SELECT COUNT(*) FROM distros
                     WHERE distros.product_id = products.product_id) AS distro_count,
                   (SELECT COUNT(*) FROM downloads
                     WHERE downloads.product_id = products.product_id
                       AND WEEK(downloads.date) = WEEK(CURDATE())) AS true_downloads,
                   (SELECT COUNT(*) FROM views
                     WHERE views.product_id = products.product_id
                       AND WEEK(views.date) = WEEK(CURDATE())) AS true_views
            FROM products
            INNER JOIN categories ON products.category_id = categories.category_id
            ORDER BY created_date DESC, true_views DESC
        ) AS count_table
        WHERE count_table.distro_count > 0
          AND count_table.status = 'published'
          AND count_table.active = 1
        LIMIT 0, 8

    Here's the EXPLAIN:

        +----+--------------------+------------+-------+---------------+-------------+---------+------------------------------------+------+----------------------------------------------+
        | id | select_type        | table      | type  | possible_keys | key         | key_len | ref                                | rows | Extra                                        |
        +----+--------------------+------------+-------+---------------+-------------+---------+------------------------------------+------+----------------------------------------------+
        |  1 | PRIMARY            | <derived2> | ALL   | NULL          | NULL        | NULL    | NULL                               |  232 | Using where                                  |
        |  2 | DERIVED            | categories | index | PRIMARY       | idx_name    | 47      | NULL                               |   13 | Using index; Using temporary; Using filesort |
        |  2 | DERIVED            | products   | ref   | category_id   | category_id | 4       | digizald_db.categories.category_id |    9 |                                              |
        |  5 | DEPENDENT SUBQUERY | views      | ref   | product_id    | product_id  | 4       | digizald_db.products.product_id    |   46 | Using where                                  |
        |  4 | DEPENDENT SUBQUERY | downloads  | ref   | product_id    | product_id  | 4       | digizald_db.products.product_id    |   14 | Using where                                  |
        |  3 | DEPENDENT SUBQUERY | distros    | ref   | product_id    | product_id  | 4       | digizald_db.products.product_id    |    1 | Using index                                  |
        +----+--------------------+------------+-------+---------------+-------------+---------+------------------------------------+------+----------------------------------------------+
        6 rows in set (0.04 sec)

    And the tables:

        mysql> describe products;
        +---------------+--------------------------------------------------+------+-----+-------------------+----------------+
        | Field         | Type                                             | Null | Key | Default           | Extra          |
        +---------------+--------------------------------------------------+------+-----+-------------------+----------------+
        | product_id    | int(10) unsigned                                 | NO   | PRI | NULL              | auto_increment |
        | product_key   | char(32)                                         | NO   |     | NULL              |                |
        | title         | varchar(150)                                     | NO   |     | NULL              |                |
        | company       | varchar(150)                                     | NO   |     | NULL              |                |
        | user_id       | int(10) unsigned                                 | NO   | MUL | NULL              |                |
        | description   | text                                             | NO   |     | NULL              |                |
        | video_code    | text                                             | NO   |     | NULL              |                |
        | category_id   | int(10) unsigned                                 | NO   | MUL | NULL              |                |
        | price         | decimal(10,2)                                    | NO   |     | NULL              |                |
        | quantity      | int(10) unsigned                                 | NO   |     | NULL              |                |
        | downloads     | int(10) unsigned                                 | NO   |     | NULL              |                |
        | views         | int(10) unsigned                                 | NO   |     | NULL              |                |
        | status        | enum('pending','published','rejected','removed') | NO   |     | NULL              |                |
        | active        | tinyint(1)                                       | NO   |     | NULL              |                |
        | deleted       | tinyint(1)                                       | NO   |     | NULL              |                |
        | created_date  | datetime                                         | NO   |     | NULL              |                |
        | modified_date | timestamp                                        | NO   |     | CURRENT_TIMESTAMP |                |
        | scrape_source | varchar(215)                                     | YES  |     | NULL              |                |
        +---------------+--------------------------------------------------+------+-----+-------------------+----------------+
        18 rows in set (0.00 sec)

        mysql> describe categories;
        +------------------+------------------+------+-----+---------+----------------+
        | Field            | Type             | Null | Key | Default | Extra          |
        +------------------+------------------+------+-----+---------+----------------+
        | category_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | category_name    | varchar(45)      | NO   | MUL | NULL    |                |
        | parent_id        | int(10) unsigned | YES  | MUL | NULL    |                |
        | category_type_id | int(10) unsigned | NO   |     | NULL    |                |
        +------------------+------------------+------+-----+---------+----------------+
        4 rows in set (0.00 sec)

        mysql> describe compatibilities;
        +------------------+------------------+------+-----+---------+----------------+
        | Field            | Type             | Null | Key | Default | Extra          |
        +------------------+------------------+------+-----+---------+----------------+
        | compatibility_id | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | name             | varchar(45)      | NO   |     | NULL    |                |
        | code_name        | varchar(45)      | NO   |     | NULL    |                |
        | description      | varchar(128)     | NO   |     | NULL    |                |
        | position         | int(10) unsigned | NO   |     | NULL    |                |
        +------------------+------------------+------+-----+---------+----------------+
        5 rows in set (0.01 sec)

        mysql> describe distros;
        +------------------+--------------------------------------------------+------+-----+---------+----------------+
        | Field            | Type                                             | Null | Key | Default | Extra          |
        +------------------+--------------------------------------------------+------+-----+---------+----------------+
        | id               | int(10) unsigned                                 | NO   | PRI | NULL    | auto_increment |
        | product_id       | int(10) unsigned                                 | NO   | MUL | NULL    |                |
        | compatibility_id | int(10) unsigned                                 | NO   | MUL | NULL    |                |
        | user_id          | int(10) unsigned                                 | NO   |     | NULL    |                |
        | status           | enum('pending','published','rejected','removed') | NO   |     | NULL    |                |
        | distro_type      | enum('file','url')                               | NO   |     | NULL    |                |
        | version          | varchar(150)                                     | NO   |     | NULL    |                |
        | filename         | varchar(50)                                      | YES  |     | NULL    |                |
        | url              | varchar(250)                                     | YES  |     | NULL    |                |
        | virus            | enum('READY','PASS','FAIL')                      | YES  |     | NULL    |                |
        | downloads        | int(10) unsigned                                 | NO   |     | 0       |                |
        +------------------+--------------------------------------------------+------+-----+---------+----------------+
        11 rows in set (0.01 sec)

        mysql> describe downloads;
        +------------+------------------+------+-----+---------+----------------+
        | Field      | Type             | Null | Key | Default | Extra          |
        +------------+------------------+------+-----+---------+----------------+
        | id         | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | product_id | int(10) unsigned | NO   | MUL | NULL    |                |
        | distro_id  | int(10) unsigned | NO   | MUL | NULL    |                |
        | user_id    | int(10) unsigned | NO   | MUL | NULL    |                |
        | ip_address | varchar(15)      | NO   |     | NULL    |                |
        | date       | datetime         | NO   |     | NULL    |                |
        +------------+------------------+------+-----+---------+----------------+
        6 rows in set (0.01 sec)

        mysql> describe views;
        +------------+------------------+------+-----+---------+----------------+
        | Field      | Type             | Null | Key | Default | Extra          |
        +------------+------------------+------+-----+---------+----------------+
        | id         | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | product_id | int(10) unsigned | NO   | MUL | NULL    |                |
        | user_id    | int(10) unsigned | NO   | MUL | NULL    |                |
        | ip_address | varchar(15)      | NO   |     | NULL    |                |
        | date       | datetime         | NO   |     | NULL    |                |
        +------------+------------------+------+-----+---------+----------------+
        5 rows in set (0.00 sec)
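    For what it's worth, one common direction (a sketch, not tested against this schema): the three dependent subqueries run once per product row. Turning them into pre-aggregated derived tables joined once, and filtering on status/active directly on products, lets MySQL aggregate each table a single time. The INNER JOIN on the distros aggregate also replaces the distro_count > 0 filter:

        SELECT p.*, c.category_name AS category,
               d.distro_count,
               COALESCE(dl.true_downloads, 0) AS true_downloads,
               COALESCE(v.true_views, 0)      AS true_views
        FROM products p
        INNER JOIN categories c ON p.category_id = c.category_id
        INNER JOIN (SELECT product_id, COUNT(*) AS distro_count
                    FROM distros GROUP BY product_id) d
                ON d.product_id = p.product_id
        LEFT JOIN (SELECT product_id, COUNT(*) AS true_downloads
                   FROM downloads
                   WHERE WEEK(date) = WEEK(CURDATE())
                   GROUP BY product_id) dl
               ON dl.product_id = p.product_id
        LEFT JOIN (SELECT product_id, COUNT(*) AS true_views
                   FROM views
                   WHERE WEEK(date) = WEEK(CURDATE())
                   GROUP BY product_id) v
               ON v.product_id = p.product_id
        WHERE p.status = 'published' AND p.active = 1
        ORDER BY p.created_date DESC, true_views DESC
        LIMIT 0, 8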


  • Google Web Optimizer -- How long until winning combination?

    - by Django Reinhardt
    I've had an A/B test running in Google Web Optimizer for six weeks now, and there's still no end in sight. Google is still saying: "We have not gathered enough data yet to show any significant results. When we collect more data we should be able to show you a winning combination." Is there any way of telling how close Google is to making up its mind? (Does anyone know what algorithm it uses to decide whether there have been any "high confidence winners"?) According to the Google help documentation: "Sometimes we simply need more data to be able to reach a level of high confidence. A tested combination typically needs around 200 conversions for us to judge its performance with certainty." But all of our combinations have over 200 conversions at the moment:

        230 / 4061 (Original)
        223 / 3937 (Variation 1)
        205 / 3984 (Variation 2)
        205 / 4007 (Variation 3)

    How much longer is it going to have to run?? Thanks for any help.
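    A rough sense of why no winner has emerged (a sketch; Google's actual algorithm is not public): the four conversion rates are nearly identical, roughly 5.1% to 5.7%, so any standard significance test needs far more data to separate them. A quick two-proportion z-test on the numbers above illustrates it:

        import math

        def z_test(conv_a, n_a, conv_b, n_b):
            """Two-proportion z-test: z statistic for the difference
            between two conversion rates."""
            p_a, p_b = conv_a / n_a, conv_b / n_b
            p = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
            se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
            return (p_a - p_b) / se

        # Original vs. Variation 2 (the widest gap in the data above)
        print(z_test(230, 4061, 205, 3984))   # ~1.0, well below 1.96 (95% confidence)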


  • Optimize a MySQL query that counts each duplicate

    - by Onema
    I have the following query that gets the city name, city ID, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity,
               Country_CA.CityID,
               globe_region.region_name,
               (SELECT COUNT(Country_CA.City)
                  FROM Country_CA
                 WHERE City LIKE currentCity) AS counter
        FROM Country_CA
        LEFT JOIN globe_region
               ON globe_region.region_id = Country_CA.RegionID
              AND globe_region.country_code = Country_CA.CountryCode
        ORDER BY City

    This example is for Canada, and the cities will be displayed in a dropdown list. There are a few towns in Canada, and in other countries, that have the same names. Therefore, when there is more than one town with the same name, the region name will be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    and

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes way too long to generate a list for 5000 records. I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this:

        City        CityID   region_name       counter
        sheraton    2349269  British Columbia  1
        sherbrooke  2349270  Quebec            2
        sherbrooke  2349271  Nova Scotia       2
        shere       2349273  British Columbia  1
        sherridon   2349274  Manitoba          1
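    A sketch of the usual fix (untested against this schema): compute each city's count once in a GROUP BY derived table instead of re-scanning Country_CA per output row. With no wildcards involved, the LIKE in the original behaves as an equality match, which is what the join below assumes:

        SELECT c.City AS currentCity,
               c.CityID,
               gr.region_name,
               dup.counter
        FROM Country_CA c
        JOIN (SELECT City, COUNT(*) AS counter
              FROM Country_CA
              GROUP BY City) dup ON dup.City = c.City
        LEFT JOIN globe_region gr
               ON gr.region_id = c.RegionID
              AND gr.country_code = c.CountryCode
        ORDER BY c.City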


  • In PHP, is it faster to get a value from an if statement or from an array?

    - by Vittorio Vittori
    Maybe this is a stupid question, but which is faster?

        <?php
        function getCss1($id = 0) {
            if ($id == 1) {
                return 'red';
            } else if ($id == 2) {
                return 'yellow';
            } else if ($id == 3) {
                return 'green';
            } else if ($id == 4) {
                return 'blue';
            } else if ($id == 5) {
                return 'orange';
            } else {
                return 'grey';
            }
        }

        function getCss2($id = 0) {
            $css[] = 'grey';
            $css[] = 'red';
            $css[] = 'yellow';
            $css[] = 'green';
            $css[] = 'blue';
            $css[] = 'orange';
            return $css[$id];
        }

        echo getCss1(3);
        echo getCss2(3);
        ?>

    I suspect the if statement is faster, but I prefer to ask!
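    A simple way to settle it empirically (a sketch; exact numbers vary by PHP version and hardware, and it assumes the two functions above are in scope): time both in a loop. Note that getCss2 rebuilds the array on every call, so a static array would likely change the outcome:

        <?php
        // Micro-benchmark sketch: time both approaches over many calls.
        $n = 1000000;

        $t = microtime(true);
        for ($i = 0; $i < $n; $i++) { getCss1(3); }
        echo "if/else: ", microtime(true) - $t, " s\n";

        $t = microtime(true);
        for ($i = 0; $i < $n; $i++) { getCss2(3); }
        echo "array:   ", microtime(true) - $t, " s\n";

        // A variant that builds the lookup array only once per process,
        // avoiding the per-call allocations of getCss2:
        function getCss3($id = 0) {
            static $css = array('grey', 'red', 'yellow', 'green', 'blue', 'orange');
            return isset($css[$id]) ? $css[$id] : 'grey';
        }
        ?>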


  • Difference between Logarithmic and Uniform cost criteria

    - by Marthin
    I've got some problems understanding the difference between logarithmic (LCC) and uniform (UCC) cost criteria, and also how to use them in calculations. Could someone please explain the difference between the two and perhaps show how to calculate the complexity for a problem like A + B * C? (Yes, this is part of an assignment =) ) Thanks for any help! /Marthin
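    A sketch of the distinction, under the usual RAM-model definitions (not from the question): the uniform cost criterion charges every primitive operation a cost of 1 regardless of operand size, while the logarithmic cost criterion charges each operation the bit length of its operands, L(x) = floor(log2 |x|) + 1. For A + B * C, evaluated as A + (B * C):

        Uniform cost:      cost(B*C) = 1, cost(A + BC) = 1, total = 2

        Logarithmic cost:  cost(B*C)    = L(B) + L(C)
                           cost(A + BC) = L(A) + L(B*C) ≈ L(A) + L(B) + L(C)
                           total        ≈ L(A) + 2·(L(B) + L(C))

        Worked example with A = 7, B = 5, C = 3 (so L = 3, 3, 2 bits):
        uniform total = 2; logarithmic total = (3 + 2) + (3 + L(15)) = 5 + 7 = 12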


  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon" which represents some tiny value that is much less than 1, but greater than 0. One way to think of it is as a very wide fixed point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following:

    - +, -, +=, <, > and >= are the only heavily used operators.
    - Use of setEpsilon() and getInt() is extremely rare.
    - * is also rare, and does not even need to consider the epsilon values at all.

    Here is the class:

        #include <limits>

        struct int32Uepsilon {
          typedef int32Uepsilon Self;

          int32Uepsilon ()             { _value = 0; _eps = 0; }
          int32Uepsilon (const int &i) { _value = i; _eps = 0; }

          void setEpsilon() { _eps = 1; }

          Self operator+(const Self &rhs) const {
            Self result = *this;
            result._value += rhs._value;
            result._eps   += rhs._eps;
            return result;
          }

          Self operator-(const Self &rhs) const {
            Self result = *this;
            result._value -= rhs._value;
            result._eps   -= rhs._eps;
            return result;
          }

          Self operator-() const {
            Self result = *this;
            result._value = -result._value;
            result._eps   = -result._eps;
            return result;
          }

          Self operator*(const Self &rhs) const {
            return this->getInt() * rhs.getInt();  // XXX: discards epsilon
          }

          bool operator<(const Self &rhs) const {
            return (_value < rhs._value) ||
                   (_value == rhs._value && _eps < rhs._eps);
          }

          bool operator>(const Self &rhs) const {
            return (_value > rhs._value) ||
                   (_value == rhs._value && _eps > rhs._eps);
          }

          bool operator>=(const Self &rhs) const {
            // note: the first clause must be a strict >; the original used >=
            // here, which ignores _eps whenever the _values are equal
            return (_value > rhs._value) ||
                   (_value == rhs._value && _eps >= rhs._eps);
          }

          Self &operator+=(const Self &rhs) {
            this->_value += rhs._value;
            this->_eps   += rhs._eps;
            return *this;
          }

          Self &operator-=(const Self &rhs) {
            this->_value -= rhs._value;
            this->_eps   -= rhs._eps;
            return *this;
          }

          int getInt() const { return _value; }

        private:
          int _value;
          int _eps;
        };

        namespace std {
          template<>
          struct numeric_limits<int32Uepsilon> {
            static const bool is_signed = true;
            static int max() { return 2147483647; }
          };  // the struct declaration needs this trailing semicolon to compile
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful:

    - 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24-28 bits of _value are used and up to 20 bits of _eps are used.
    - I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here.
    - Saturating addition/subtraction on _eps would be cool, but isn't really necessary.
    - Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up.
    - Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!
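    One direction that might pay off (a sketch, untested against the real workload; the class name is mine): the comparisons are lexicographic on (_value, _eps) and |_eps| stays well below 2^31, so the pair can be packed into one int64_t holding _value * 2^32 + _eps. Addition, subtraction and every comparison then collapse to single 64-bit operations, and the mixed signs come along for free because the packed number is exactly that integer quantity:

        #include <cstdint>

        struct int32Uepsilon64 {
          // Packed representation: value * 2^32 + eps. Exact as long as
          // accumulated |eps| stays below 2^31 (~20 bits are used here).
          int64_t packed;

          int32Uepsilon64(int i = 0) : packed(static_cast<int64_t>(i) << 32) {}

          void setEpsilon() { packed += 1; }

          int getInt() const {
            // Arithmetic shift recovers the integer part, rounding toward
            // -infinity: for negative eps this floors instead of returning
            // the stored _value, which may matter to the (rare) callers.
            return static_cast<int>(packed >> 32);
          }

          friend int32Uepsilon64 operator+(int32Uepsilon64 a, int32Uepsilon64 b) {
            a.packed += b.packed; return a;
          }
          friend int32Uepsilon64 operator-(int32Uepsilon64 a, int32Uepsilon64 b) {
            a.packed -= b.packed; return a;
          }
          friend bool operator<(int32Uepsilon64 a, int32Uepsilon64 b)  { return a.packed <  b.packed; }
          friend bool operator>(int32Uepsilon64 a, int32Uepsilon64 b)  { return a.packed >  b.packed; }
          friend bool operator>=(int32Uepsilon64 a, int32Uepsilon64 b) { return a.packed >= b.packed; }
        };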


  • Measure website performance

    - by s0mmer
    Hi, I was wondering if it is possible to install, or use any online service, to measure your website's performance. I've seen many that just check the download speed of images, external files, etc. But is it possible to measure how long ASP/PHP code takes to execute? I have a site running a bit slowly, and it would be very nice to have some app/service that points out where to optimize.
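    For the server-side half (a sketch; this is the low-tech approach before reaching for a real profiler such as Xdebug for PHP or ASP.NET page tracing): wrap suspect sections in timers and log the results, which quickly shows whether the time goes to code or to queries:

        <?php
        // Crude server-side timing: wrap a suspect section and log the elapsed time.
        $start = microtime(true);

        // ... the code being measured, e.g. a database query ...

        $elapsed = microtime(true) - $start;
        error_log(sprintf("block took %.1f ms", $elapsed * 1000));
        ?>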


  • New to C/C++: using the Android NDK to port legacy code, getting compile errors

    - by Donal Rafferty
    I have been trying to take some old Symbian C++ code over to Android today using the NDK. I have little to no C or C++ knowledge, so it's been a chore; however, it has to be done. My main issue is that I'm having trouble porting what I believe is Symbian-specific code to work using the small C/C++ subset that is available with the Android NDK. Here is a picture of the compilation errors I'm getting using Cygwin. I was wondering if anyone could point me in the right direction on how to deal with these errors? For instance, are TBool/TInt/TUint/RPointerArray/RSocket Symbian primitives, and is that why they won't compile, or is it something else? Also, what is ISO C++? Any tutorials, guides or tips would be greatly appreciated.
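    Those names (TBool, TInt, TUint, RPointerArray, RSocket) are indeed Symbian types from its e32 headers, not standard ("ISO") C++, which is why an NDK build rejects them. A common porting trick (a sketch; only the plain value types can be mapped this way) is a compatibility header that maps the basic types onto standard ones:

        /* symbian_compat.h -- minimal shim for Symbian basic types (sketch).
           RPointerArray would need a real re-implementation (e.g. on top of
           a std::vector of pointers) and RSocket one on top of the BSD
           sockets the NDK provides; typedefs alone cannot cover those. */
        #ifndef SYMBIAN_COMPAT_H
        #define SYMBIAN_COMPAT_H

        #include <stdint.h>

        typedef int      TInt;
        typedef unsigned TUint;
        typedef int      TBool;     /* Symbian booleans are int-backed */
        typedef int32_t  TInt32;
        typedef uint32_t TUint32;

        #define ETrue  1
        #define EFalse 0

        #endif /* SYMBIAN_COMPAT_H */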


  • JavaScript library more efficient than Rickshaw for realtime visualizations

    - by dan kutz
    I want to visualize data as time-series graphs on mobile devices (tablets) and therefore stumbled upon Rickshaw, which is based on D3. First I must say I was a little bit confused when I realized that "realtime" in web design is defined totally differently from realtime in engineering, which has fixed (and often very short) timeframes. Anyway, my aim is to visualize the data as fast as possible, and on older tablets visualization with Rickshaw is quite slow. Can anybody recommend another library that may be more efficient at rendering? Or is there no way out, and I have to go native? Regards, Dan.


  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000                        # number of dipoles
        r_i = np.random.random((3,N))    # positions of dipoles
        mom_i = np.random.random((3,N))  # moments of dipoles
        a = np.random.random((3,3))      # three basis vectors for this crystal
        n = [10,10,10]                   # points at which to evaluate sum
        gamma_mu = 135.5                 # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
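    A hedged sketch of one way to collapse the grid loops (names a, n, r_i, mom_i and gamma_mu are taken from the snippet above; memory is the trade-off the question anticipates): build all M = n[0]*n[1]*n[2] test points at once, broadcast them against the dipoles inside a batched calculate_dipole, and process the grid in chunks so the (3, chunk, N) temporaries stay bounded and progress reporting plus incremental file output remain possible:

        import numpy as np

        def calculate_dipole_batch(mu_batch, r_i, mom_i):
            # mu_batch: (3, M) grid points; returns B with shape (3, M).
            relative = mu_batch[:, :, None] - r_i[:, None, :]   # (3, M, N)
            r_unit = relative / np.sqrt((relative**2).sum(0))
            A = 1e-7
            num = A*(3*np.sum(mom_i[:, None, :]*r_unit, 0)*r_unit - mom_i[:, None, :])
            den = np.sqrt(np.sum(relative**2, 0))**3            # (M, N)
            return np.sum(num/den, axis=2)                      # sum over dipoles

        # All fractional grid coordinates at once, mapped through the cell vectors.
        fracs = [np.arange(ni, dtype=float)/ni for ni in n]
        fx, fy, fz = np.meshgrid(fracs[0], fracs[1], fracs[2], indexing='ij')
        grid = np.array([fx.ravel(), fy.ravel(), fz.ravel()])   # (3, M)
        r_test_all = np.dot(a.T, grid)  # column m = fx*a[0] + fy*a[1] + fz*a[2]

        # Chunking bounds memory and keeps progress reports and file writes.
        chunk = 50
        for s in range(0, grid.shape[1], chunk):
            B = calculate_dipole_batch(r_test_all[:, s:s+chunk], r_i, mom_i)
            omega = gamma_mu*np.sqrt((B**2).sum(0))
            # write r_test_all[:, s:s+chunk], B and omega to the file here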


  • How to index a date column with null values?

    - by Heinz Z.
    How should I index a date column when some rows have null values? We have to select rows between a date range and rows with null dates. We use Oracle 9.2 and higher. Options I found:

    1. Using a bitmap index on the date column
    2. Using an index on the date column and an index on a state field whose value is 1 when the date is null
    3. Using an index on the date column and another guaranteed not-null column

    My thoughts on the options are:

    - On 1: the table has too many distinct values to use a bitmap index
    - On 2: I have to add a field just for this purpose, and change the query whenever I want to retrieve the null-date rows
    - On 3: looks tricky to add a field to an index which is not really needed there

    What is the best practice for this case? Thanks in advance. Some info I have read: Oracle Date Index; When does Oracle index null column values?
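    A variant of option 3 that avoids a real second column (a sketch; `t` is a placeholder table name): Oracle B-tree indexes contain no entry when every indexed column is NULL, so appending a constant to the index keeps the NULL dates indexable without schema or query changes:

        -- Sketch: the constant second column guarantees an index entry
        -- even for rows whose date_col is NULL.
        CREATE INDEX ix_t_date ON t (date_col, 0);

        -- Range scans still use the leading column as usual:
        SELECT * FROM t WHERE date_col BETWEEN :d1 AND :d2;

        -- And IS NULL lookups can now be answered from the index too:
        SELECT * FROM t WHERE date_col IS NULL;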


  • How to optimize a database suggestion engine

    - by Dimitar Vouldjeff
    Hi, I'm making an online engine for item-to-item movie recommendations. I have done some research, and I think the best way to implement it is using Pearson correlation, with a table holding item1, item2 and correlation fields. The problem is that after each rating of an item, I have to regenerate the correlation for, in the worst case, N records (where N is the number of items). Another thing I read is the following article, but I haven't thought of a way to implement it. So what is your suggestion to optimize this process? Or any other suggestions? Thanks.
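    One standard trick to avoid recomputing correlations from scratch (a sketch; the plain dict below stands in for the item1/item2-keyed table): keep per-pair running sums, update only the pairs involving the newly rated item, and derive Pearson from the sums in O(1) per pair:

        import math

        # Running sums per item pair: n, sum_x, sum_y, sum_xx, sum_yy, sum_xy.
        sums = {}  # (item1, item2) -> [n, sx, sy, sxx, syy, sxy]

        def add_corated_pair(item1, item2, x, y):
            """Update the sums when a user has now rated both items
            (x = rating of item1, y = rating of item2)."""
            s = sums.setdefault((item1, item2), [0, 0.0, 0.0, 0.0, 0.0, 0.0])
            s[0] += 1
            s[1] += x;   s[2] += y
            s[3] += x*x; s[4] += y*y; s[5] += x*y

        def pearson(item1, item2):
            """Pearson correlation straight from the running sums."""
            n, sx, sy, sxx, syy, sxy = sums[(item1, item2)]
            num = n*sxy - sx*sy
            den = math.sqrt(n*sxx - sx*sx) * math.sqrt(n*syy - sy*sy)
            return num / den if den else 0.0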


  • Implementing type inference

    - by deepblue
    Well, I see some interesting discussions here about static vs. dynamic typing. I generally prefer static typing, due to compile-time type checking, better-documented code, etc. However, I do agree that it clutters up the code if done the way Java does it, for example. So I'm about to start building a language of my own, and type inference is one of the things I want to implement, in a functional-style language. I do understand that it is a big subject, and I'm not trying to create something that hasn't been done before, just basic inferencing... Any pointers on what to read up on that will help me with this? Preferably something more pragmatic/practical, as opposed to more theoretical category theory/type theory texts. If there's an implementation-discussion text out there, with data structures/algorithms, that would be lovely. Much appreciated.
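    On the pragmatic side, the standard starting points are the Damas-Milner "Algorithm W" and the type reconstruction chapter of Pierce's "Types and Programming Languages". Most real implementations are built around destructive unification with an occurs check, which is small enough to sketch (OCaml is my choice here, not the question's; the type language is deliberately tiny):

        (* Core of Hindley-Milner style inference: unification over a toy
           type language. A type variable is a mutable cell that is either
           unbound or a link to the type it was unified with. *)
        type ty =
          | TInt
          | TFun of ty * ty
          | TVar of tvar ref
        and tvar = Unbound of int | Link of ty

        (* Follow links to the representative type, compressing paths. *)
        let rec repr t =
          match t with
          | TVar ({contents = Link t'} as r) ->
              let t'' = repr t' in
              r := Link t'';
              t''
          | _ -> t

        (* Occurs check: does variable r appear inside t? *)
        let rec occurs r t =
          match repr t with
          | TVar r' -> r == r'
          | TFun (a, b) -> occurs r a || occurs r b
          | TInt -> false

        let rec unify t1 t2 =
          match repr t1, repr t2 with
          | TInt, TInt -> ()
          | TFun (a1, b1), TFun (a2, b2) -> unify a1 a2; unify b1 b2
          | TVar r1, TVar r2 when r1 == r2 -> ()
          | TVar r, t | t, TVar r ->
              if occurs r t then failwith "occurs check: infinite type"
              else r := Link t
          | _, _ -> failwith "type mismatch"

    Inference itself is then a walk over the syntax tree that creates fresh variables for unknowns and calls unify at applications and definitions.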


  • Is there any program that obfuscates C# source code?

    - by markattwood
    Our requirement is to be able to integrate our DLLs with ClickOnce. Dotfuscator does the obfuscation job nicely, but the obfuscated DLLs cannot be deployed with ClickOnce on the customer side (on our side, we can handle it perfectly). Moreover, the obfuscated assemblies sometimes crash our .NET CF app. This points to a solution that creates a temporary copy of the source and obfuscates it before compiling with VS. That ensures the compiled assembly can be integrated with ClickOnce and is fully compatible with .NET CF. What is the best tool to obfuscate C# SOURCE CODE (not assemblies)?


  • Clarification of atomic memory access for different OSs

    - by murrekatt
    I'm currently porting a Windows C++ library to MacOS as a hobby project and a learning experience. I stumbled across some code using the Win Interlocked* functions, and thus I've been trying to read up on the subject in general. Reading related questions here on SO, I understand there are different ways to do these operations depending on the OS: Interlocked* on Windows, OSAtomic* on MacOS, and I also found that compilers have builtin (intrinsic) operations for this. After reading about the gcc builtins for atomic memory access, I'm left wondering: what is the difference between the intrinsics and the OSAtomic* or Interlocked* functions? I mean, can I not choose between OSAtomic* and the gcc builtins if I'm on MacOS using gcc? The same if I were on Windows using gcc. I also read that on Windows the Interlocked* functions come as both inline and intrinsic versions. What should I consider when choosing between intrinsic and inline? In general, are there multiple options per OS for what to use? Or is this again "it depends"? If so, what does it depend on? Thanks!
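    To make the comparison concrete, here is the same atomic increment written against each of the three APIs mentioned (a sketch; note that Apple has since deprecated OSAtomic* in favor of C11 <stdatomic.h>, and GCC's newer __atomic_* builtins supersede the __sync_* family):

        /* Sketch: one atomic increment, three ways. Which is available
           depends on platform and compiler, which is exactly the choice
           the question is about. */
        #if defined(_WIN32)
          #include <windows.h>
          long counter;
          void bump(void) { InterlockedIncrement(&counter); }
        #elif defined(__APPLE__)
          #include <stdint.h>
          #include <libkern/OSAtomic.h>
          int32_t counter;
          void bump(void) { OSAtomicIncrement32(&counter); }
        #else
          /* GCC builtin: works wherever GCC targets, independent of the OS API. */
          int counter;
          void bump(void) { __sync_add_and_fetch(&counter, 1); }
        #endif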


  • An image from bytes to an optimized web page presentation

    - by blgnklc
    I get the data of a stored image from the database as a byte[] array; then I convert it to System.Drawing.Image with the code shown below:

        public System.Drawing.Image CreateImage(byte[] bytes)
        {
            System.IO.MemoryStream memoryStream = new System.IO.MemoryStream(bytes);
            System.Drawing.Image image = System.Drawing.Image.FromStream(memoryStream);
            return image;
        }

    (*) On the other hand, I am planning to show a list of images on ASP.NET pages as the client scrolls down the page. The more the user scrolls down, the more photos he/she sees. This means fast page loads and a rich user experience. (You may see what I mean on www.mashable.com; just note how new photos load as you scroll down.) Moreover, for the Image object returned from the method above, how can I show it in a loop dynamically, given the (*) conditions above? Regards, bk
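    A common pattern for this (a sketch; names like ImageHandler.ashx and GetImageBytes are assumptions): don't push Image objects into the page at all. Serve each image from its own handler URL, and have the scrolling page append plain <img> tags such as <img src="ImageHandler.ashx?id=123" /> as the user nears the bottom; the browser then fetches photos lazily.

        // ImageHandler.ashx.cs -- streams one image per request (sketch).
        using System.Web;

        public class ImageHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                int id = int.Parse(context.Request.QueryString["id"]);

                // GetImageBytes is assumed: the existing data access that
                // returns the stored byte[] for the image row.
                byte[] bytes = GetImageBytes(id);

                context.Response.ContentType = "image/jpeg";
                // Let the browser cache it so scrolling back up is free.
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.OutputStream.Write(bytes, 0, bytes.Length);
            }

            public bool IsReusable { get { return true; } }

            private byte[] GetImageBytes(int id)
            {
                // placeholder for the database call described in the question
                throw new System.NotImplementedException();
            }
        }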


  • Type classes, generic memoization

    - by nicolas
    Something quite odd is happening with my types, and I don't quite understand whether it is justified or not. I would tend to think not. This code works fine:

        open System.Collections.Generic

        type DictionarySingleton private () =
            static let mutable instance = Dictionary<string*obj, obj>()
            static member Instance = instance

        let memoize (f: 'a -> 'b) =
            fun (x: 'a) ->
                let key = f.ToString(), (x :> obj)
                if (DictionarySingleton.Instance).ContainsKey(key) then
                    let r = (DictionarySingleton.Instance).[key]
                    r :?> 'b
                else
                    let res = f x
                    (DictionarySingleton.Instance).[key] <- (res :> obj)
                    res

    And this one complains:

        type DictionarySingleton private () =
            static let mutable instance = Dictionary<string*obj, _>()
            static member Instance = instance

        let memoize (f: 'a -> 'b) =
            fun (x: 'a) ->
                let key = f.ToString(), (x :> obj)
                if (DictionarySingleton.Instance).ContainsKey(key) then
                    let r = (DictionarySingleton.Instance).[key]
                    r :?> 'b
                else
                    let res = f x
                    (DictionarySingleton.Instance).[key] <- (res :> obj)
                    res

    The only difference is the underscore in the dictionary definition. The inferred types are the same, but the dynamic cast from r to type 'b exhibits an error: "this runtime coercion ... runtime type tests are not allowed on some types", etc. Am I missing something, or is it a rough edge?


  • MySQL subqueries

    - by swamprunner7
    Can we do this query without subqueries?

        SELECT login, post_n,
               (SELECT SUM(vote) FROM votes
                 WHERE votes.post_n = posts.post_n) AS votes,
               (SELECT COUNT(comments.post_n) FROM comments
                 WHERE comments.post_n = posts.post_n) AS comments_count
        FROM users, posts
        WHERE posts.id = users.id
          AND (visibility = 2 OR visibility = 3)
        ORDER BY date DESC
        LIMIT 0, 15

    Tables:

        Users: id, login
        Posts: post_n, id, visibility
        Votes: post_n, vote

    id is the user id; Users is the main table.
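    A sketch of the usual subquery-free form (untested against this schema; it assumes date lives on posts, and strictly speaking the derived tables are still subqueries, but they run once each instead of once per row): pre-aggregate votes and comments, then LEFT JOIN so posts with no votes or comments still appear:

        SELECT u.login, p.post_n,
               v.votes,
               c.comments_count
        FROM posts p
        JOIN users u ON p.id = u.id
        LEFT JOIN (SELECT post_n, SUM(vote) AS votes
                   FROM votes GROUP BY post_n) v ON v.post_n = p.post_n
        LEFT JOIN (SELECT post_n, COUNT(*) AS comments_count
                   FROM comments GROUP BY post_n) c ON c.post_n = p.post_n
        WHERE p.visibility IN (2, 3)
        ORDER BY p.date DESC
        LIMIT 0, 15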


  • Oracle SQL query taking a day to return results over a dblink

    - by Suresh S
    I have the following Oracle SQL query that gives me a month-wise report between two dates. Basically, for November I want the sum of values between the dates 01 Nov and 30 Nov. The table being queried resides in another database and is accessed using a dblink. The DT column is of NUMBER type (for example, 20101201). The execution of the query takes a day and does not complete. Kindly suggest any optimisation I can pass on to my DBA regarding the dblink, any tuning that can be done on the query, or a rewrite of the same.

        SELECT /*+ PARALLEL (A 8) */
               TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM')-1,'MM'),'MONYYYY') "MONTH",
               TYPE AS "TYPE",
               COLUMN,
               COUNT(DISTINCT A) AS "A_COUNT",
               COUNT(COLUMN) AS NO_OF_COLS,
               SUM(DURATION) AS "SUM_DURATION",
               SUM(COST) AS "COST"
        FROM A@LN_PROD A
        WHERE DT >= TO_NUMBER(TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM')-1,'MM'),'YYYYMMDD'))
          AND DT <  TO_NUMBER(TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM'),'MM'),'YYYYMMDD'))
        GROUP BY TYPE, COLUMN
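    One dblink-specific thing worth having the DBA check (a sketch; DRIVING_SITE is real Oracle hint syntax, though whether it helps depends on the plan): without it, Oracle may pull the remote rows to the local site before filtering and aggregating. The hint asks Oracle to execute the query at the remote database and ship back only the aggregated result:

        SELECT /*+ DRIVING_SITE(A) PARALLEL(A 8) */
               TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM')-1,'MM'),'MONYYYY') "MONTH",
               TYPE AS "TYPE",
               COLUMN,
               COUNT(DISTINCT A) AS "A_COUNT",
               COUNT(COLUMN) AS NO_OF_COLS,
               SUM(DURATION) AS "SUM_DURATION",
               SUM(COST) AS "COST"
        FROM A@LN_PROD A
        WHERE DT >= TO_NUMBER(TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM')-1,'MM'),'YYYYMMDD'))
          AND DT <  TO_NUMBER(TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM'),'MM'),'YYYYMMDD'))
        GROUP BY TYPE, COLUMN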


  • Faster way to split a string and count characters using R?

    - by chrisamiller
    I'm looking for a faster way to calculate GC content for DNA strings read in from a FASTA file. This boils down to taking a string and counting the number of times that the letter 'G' or 'C' appears. I also want to specify the range of characters to consider. I have a working function that is fairly slow, and it's causing a bottleneck in my code. It looks like this:

        ##
        ## count the number of GCs in the characters between start and stop
        ##
        gcCount <- function(line, st, sp){
          chars = strsplit(as.character(line), "")[[1]]
          numGC = 0
          for(j in st:sp){
            ## nested ifs faster than an OR (|) construction
            if(chars[[j]] == "g"){
              numGC <- numGC + 1
            }else if(chars[[j]] == "G"){
              numGC <- numGC + 1
            }else if(chars[[j]] == "c"){
              numGC <- numGC + 1
            }else if(chars[[j]] == "C"){
              numGC <- numGC + 1
            }
          }
          return(numGC)
        }

    Running Rprof gives me the following output:

        > a = "GCCCAAAATTTTCCGGatttaagcagacataaattcgagg"
        > Rprof(filename="Rprof.out")
        > for(i in 1:500000){gcCount(a,1,40)};
        > Rprof(NULL)
        > summaryRprof(filename="Rprof.out")
        $by.self
                       self.time self.pct total.time total.pct
        "gcCount"          77.36     76.8     100.74     100.0
        "=="               18.30     18.2      18.30      18.2
        "strsplit"          3.58      3.6       3.64       3.6
        "+"                 1.14      1.1       1.14       1.1
        ":"                 0.30      0.3       0.30       0.3
        "as.logical"        0.04      0.0       0.04       0.0
        "as.character"      0.02      0.0       0.02       0.0

        $by.total
                       total.time total.pct self.time self.pct
        "gcCount"          100.74     100.0     77.36     76.8
        "=="                18.30      18.2     18.30     18.2
        "strsplit"           3.64       3.6      3.58      3.6
        "+"                  1.14       1.1      1.14      1.1
        ":"                  0.30       0.3      0.30      0.3
        "as.logical"         0.04       0.0      0.04      0.0
        "as.character"       0.02       0.0      0.02      0.0

        $sampling.time
        [1] 100.74

    Any advice for making this code faster?
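    For reference, the vectorized forms that usually win here (sketches; both drop the character-by-character loop and keep the same st/sp range semantics):

        ## Vectorized comparison: split once, then count matches in one shot.
        gcCount2 <- function(line, st, sp) {
          chars <- strsplit(as.character(line), "")[[1]][st:sp]
          sum(chars %in% c("g", "G", "c", "C"))
        }

        ## Regex-based: delete everything that is not g/G/c/C and count the rest.
        gcCount3 <- function(line, st, sp) {
          nchar(gsub("[^gGcC]", "", substr(line, st, sp)))
        }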


  • Data access from a single table in SQL Server 2005 is too slow

    - by Muhammad Kashif Nadeem
    Following is the script of the table. Accessing data from this table is too slow.

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Emails](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()),
            [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL,
            [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [AttachmentCount] [int] NULL,
            [Dated] [datetime] NULL
        ) ON [PRIMARY]

    The following query takes 50 seconds to fetch data:

        select id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated
        from emails

    If I include Body and HTML in the select, the time is even worse. The indexes are:

        id    unique, clustered
        From  non-unique, non-clustered
        To    non-unique, non-clustered

    The table currently has 180,000+ records, and there might be 100,000 new records each month, so this will become slower as time passes. Would splitting the data into two tables solve the problem? What other indexes should there be?
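    One concrete thing to test (a sketch; the index name is made up): because the clustered index carries the wide nvarchar(max) values in or alongside every row, even a column-limited SELECT scans past them. A covering nonclustered index holding exactly the queried columns lets SQL Server answer that query without touching the LOB data:

        -- Covering index for the listed columns (SQL Server 2005 syntax).
        -- Subject is nvarchar(max), so it still adds bulk, but Body and
        -- HTML (usually the largest values) are skipped entirely.
        CREATE NONCLUSTERED INDEX IX_Emails_ListColumns
        ON dbo.Emails (id)
        INCLUDE (datecreated, [UID], [From], [To], [Subject], AttachmentCount, Dated);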


  • C compilation error

    - by ambika
    Hi, these are my errors:

        error: static declaration of 'doct' follows non-static declaration
        error: previous declaration of 'doct' was here

    And my code:

        int doct(int*);          /* <- here the second error */

        private int doct(int *a)
        {
            static int a = 0;    /* <- here the first error */
            a++;
            *a = a;
            return 0;
        }

    Please give suggestions.
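    A corrected sketch (a guess at the intent): `private` is not a C keyword (if it was a macro for `static`, that explains the "static declaration follows non-static declaration" message, because the prototype above it is not static), and the local variable `a` collides with the `int *a` parameter. Making the two declarations agree and renaming the counter gives something that compiles:

        static int doct(int *out);   /* prototype and definition now agree on 'static' */

        static int doct(int *out)
        {
            static int counter = 0;  /* renamed so it no longer clashes with the parameter */
            counter++;
            *out = counter;          /* store the count through the pointer */
            return 0;
        }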

