Search Results

Search found 6805 results on 273 pages for 'fast formula'.


  • Unix: millionth number in the series 2 3 4 6 9 13 19 28 42 63 ... ?

    - by HH
    It takes about a minute to reach the 3000th term on my machine, but I need the millionth number in the series. The definition is recursive, so I cannot see any shortcut other than calculating every term before the millionth one. How can you calculate the millionth number in the series quickly?

    Series definition: n_{i+1} = floor(3/2 * n_i), with n_0 = 2.

    Interestingly, according to Google only one site lists the series: this one.

    Too-slow Bash code:

        #!/bin/bash
        function serie {
            # bc emits backslash-newline continuations for very large numbers;
            # tr and sed strip them back out
            n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' )
            n=$( echo $n/1 | bc )   # dummy floor
        }

        n=2
        nth=1
        while true; do
            serie $n                        # the function updates the global n
            echo $nth $n
            nth=$( echo $nth + 1 | bc )     # nth++
        done
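
    The millionth term has roughly 176,000 decimal digits (the series grows by a factor of 3/2 per step), so any approach has to do big-integer arithmetic; the killer in the Bash version is forking bc on every step. A minimal sketch in Python (my suggestion, not from the post), where integers are arbitrary precision and floor division is built in:

        def nth_term(k):
            """Return n_k for the recurrence n_{i+1} = floor(3 * n_i / 2), n_0 = 2."""
            n = 2
            for _ in range(k):
                n = n * 3 // 2   # exact integer floor, no rounding error
            return n

        print(nth_term(9))   # 63, matching the last listed term above

    One process and no string round-trips; a million-step run is then dominated by the big-integer multiplications themselves rather than process startup.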


  • Keep the Windows console open after a Python syntax error

    - by basweber
    File associations on my machine (WinXP Home) are set up so that a Python script is opened directly with the Python interpreter. If I double-click a Python script, a console window runs and everything is fine, as long as there is no syntax error in the script. In that case the console window opens for a moment but is closed immediately, too fast to read the error message. Of course, I could manually open a console window and execute the script by typing python myscript.py, but I am sure there is a more convenient (i.e. "double-click based") solution.
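
    One common fix is to change the association's command so the shell stays open (e.g. running the script under cmd /k). A pure-Python alternative is to associate .py files with a small wrapper instead of python.exe; a minimal sketch (my suggestion, not from the post; assumes Python 3 and that the association passes the script path as the first argument):

        import runpy
        import sys
        import traceback

        if __name__ == "__main__":
            target = sys.argv[1]   # path of the double-clicked script
            try:
                # compiles and runs the target, so its syntax errors land here
                runpy.run_path(target, run_name="__main__")
            except Exception:
                traceback.print_exc()
                input("Press Enter to close...")   # keeps the window open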


  • What's the fastest way to strip and replace high unicode characters in a document using Python?

    - by Rhubarb
    I am looking to replace all high-Unicode characters in a large document, such as accented Es and left and right quotes, with "normal" counterparts in the low range, such as a regular E and straight quotes. I need to perform this on a very large document rather often. I see an example of this in what I think might be Perl here: http://www.designmeme.com/mtplugins/lowdown.txt Is there a fast way of doing this in Python without using s.replace(...).replace(...).replace(...)...? I tried this with just a few characters to replace, and the document stripping became really slow.
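
    A minimal sketch (my suggestion, not from the post; Python 3 syntax): decompose accented characters with unicodedata.normalize and drop the combining marks, then handle the punctuation that has no decomposition with a single str.translate pass. Both steps run in C, so the document is scanned once rather than once per replaced character:

        import unicodedata

        # curly quotes and dashes have no NFKD decomposition, so map them by hand
        PUNCT_MAP = {
            0x2018: "'", 0x2019: "'",   # left/right single quotes
            0x201C: '"', 0x201D: '"',   # left/right double quotes
            0x2013: "-", 0x2014: "-",   # en/em dashes
        }

        def asciify(text):
            text = text.translate(PUNCT_MAP)
            decomposed = unicodedata.normalize("NFKD", text)
            return decomposed.encode("ascii", "ignore").decode("ascii")

        print(asciify("\u00c9l\u00e9gant \u201cquotes\u201d"))   # Elegant "quotes"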


  • jQuery Tools - Range Input with Mousewheel support

    - by pspahn
    I've got a nice looking horizontal scroller that uses the Range Input feature of jQuery Tools. Mouse wheel scrolling support would be awesome, and I'm sure it can work; I'm just not sure how to get it set up. I've got the generic setup:

        var scroll = jQuery('#scroll');
        jQuery(':range').rangeinput({
            speed: 0,
            progress: true,
            onSlide: function(ev, step) {
                scroll.css({left: -step});
            },
            change: function(e, i) {
                scroll.animate({left: -i}, "fast");
            }
        });

    The doc page for mousewheel ( http://jquerytools.org/documentation/toolbox/mousewheel.html ) gives an example:

        $("#myelement").mousewheel(function(event, delta) {
        });

    But I'm not sure how this hooks in. Other tools in the package simply allow the option mousewheel: true; unfortunately that doesn't work on range input. Any advice appreciated. Thank you.


  • Search filenames in MySQL database table restricted by filetype?

    - by ju
    Hello. I have a MySQL database that I replicate from another server. The database contains a table with the columns ID, FileName, and FileSize. The table has more than 4,000,000 records. I want to make searches on the FileName (varchar) column fast, and I found that I can use the Sphinx search engine for this. The problem is that I want to restrict searches by filetype. Do I have to extract the file extensions for all rows, and how (triggers?)? Maybe I have to create another table (because this one is replicated) and join them in a 1:1 relation? Can you give me some advice please :)
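
    A minimal sketch of the side-table idea (my suggestion, not from the post; names beyond the given columns are hypothetical, as are the placeholder credentials). MySQL's SUBSTRING_INDEX(name, '.', -1) returns the text after the last dot, which serves as the extension for typical filenames, and an index on it makes the filetype filter cheap:

        import MySQLdb   # any DB-API driver works the same way

        conn = MySQLdb.connect(user="u", passwd="p", db="files")
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE IF NOT EXISTS file_ext (
                id  INT PRIMARY KEY,   -- 1:1 with the replicated table's ID
                ext VARCHAR(16),
                KEY (ext)              -- index so filetype filters are fast
            )
        """)
        cur.execute("""
            INSERT INTO file_ext (id, ext)
            SELECT ID, LOWER(SUBSTRING_INDEX(FileName, '.', -1))
            FROM files_table
            ON DUPLICATE KEY UPDATE ext = VALUES(ext)
        """)
        conn.commit()

    Rerunning the INSERT ... ON DUPLICATE KEY UPDATE after each replication cycle keeps the side table current without touching the replicated table itself.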


  • 2D Histogram in R: Converting from Count to Frequency within a Column

    - by Jac
    Would appreciate help with generating a 2D histogram of frequencies, where frequencies are calculated within a column. My main issue: converting from counts to column-based frequency. Here's my starting code:

        # expected packages
        library(ggplot2)
        library(plyr)

        # generate example data corresponding to expected data input
        x_data = sample(101:200, 10000, replace = TRUE)
        y_data = sample(1:100, 10000, replace = TRUE)
        my_set = data.frame(x_data, y_data)

        # define x and y interval cut points
        x_seq = seq(100, 200, 10)
        y_seq = seq(0, 100, 10)

        # label samples as belonging within x and y intervals
        my_set$x_interval = cut(my_set$x_data, x_seq)
        my_set$y_interval = cut(my_set$y_data, y_seq)

        # determine count for each x,y block
        xy_df = ddply(my_set, c("x_interval", "y_interval"), "nrow")
        # still need to convert for use with dplyr

        # convert from count to frequency based on the formula:
        #   freq = count / sum(count in given x interval)
        ################ TRYING TO FIGURE OUT #################

        # plot results
        fig_count <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) +
            geom_tile(aes(fill = nrow))   # count
        fig_freq <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) +
            geom_tile(aes(fill = freq))   # frequency

    I would appreciate any help in how to calculate the frequency within a column. Thanks! jac

    EDIT: I think the solution will require the following steps (see the sketch below):
    1) Calculate and store the overall count for each x-interval factor.
    2) Divide each individual bin count by its corresponding x-interval factor count to obtain the frequency.
    Not sure how to carry this out, though.
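
    A minimal sketch of those two steps in Python/pandas (my translation, not from the post; the R equivalent would be ave() or a ddply transform): a grouped transform divides each bin count by the total for its x interval:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "x": rng.integers(101, 201, 10000),
            "y": rng.integers(1, 101, 10000),
        })
        df["x_interval"] = pd.cut(df["x"], np.arange(100, 201, 10))
        df["y_interval"] = pd.cut(df["y"], np.arange(0, 101, 10))

        counts = (df.groupby(["x_interval", "y_interval"], observed=True)
                    .size().rename("count").reset_index())

        # step 1: per-x-interval totals; step 2: bin count / column total
        col_total = counts.groupby("x_interval", observed=True)["count"].transform("sum")
        counts["freq"] = counts["count"] / col_total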


  • When not to use a Drupal node?

    - by stotastic
    I've recently created a very simple CRUD table where the user stores some data. For the data, I created a custom node. The functionality works great for creating, editing, and deleting data in the CRUD table using the basic node functionality (I'm actually amazed how fast and easy it was to program the basic functionality with proper access controls using only a tiny bit of code). Since the data isn't meant to be treated the same way as 'content' such as a blog post (no title, no body, no comments, no revisions, shouldn't show up on the ?q=node page, no previews, no teasers, etc.), I find that I'm spending most of my time 'turning off' and modifying the things that Drupal does automatically for nodes. I know it's a matter of taste, but where should one draw the line on what should be treated as a node and what shouldn't? In other words, would it be better to program this from scratch without using nodes?


  • Advice about testing an application before release?

    - by Troy
    I would like to get some tips from fellow developers about how you test an application you developed, prior to release to QA. Keep in mind, this is a small-scale application (requirements are verbal), so formal testing processes won't work, especially since your boss told you to develop this app quickly and push it out the door. Despite the time constraints, I would like to make sure it is bug-free; however, numerous times in the past I have had the app sent back to me because clicking the "Reset" button messes up the other controls' alignment, etc. I know there are people out there who develop small-scale apps fast and send them out with minimal bugs. How can I achieve that? I researched this post, but it didn't quite answer my question: Testing your code before releasing to QA


  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in this format:

        {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...}

    Each entry is one line. First, I would like to easily find the entries for a given IP. This part is easy because I can use grep, for example; however, even for this I would like a better solution, because I want the response as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. Probably I will have to convert this file to a SQL DB (probably SQLite, because it will be a desktop app), but I would ask whether better solutions exist.
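
    A minimal sketch of the SQLite route (my suggestion, not from the post; field names follow the sample line above, and the literal query values are placeholders): one pass loads the log into an indexed table, after which both query shapes are index-assisted instead of whole-file scans:

        import json
        import sqlite3

        conn = sqlite3.connect("log.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS entries (
                            user_ip TEXT, action_type TEXT, action_data TEXT)""")
        with open("log.txt") as fh:
            rows = ((e["user_ip"], e["action_type"], json.dumps(e["action_data"]))
                    for e in map(json.loads, fh))
            conn.executemany("INSERT INTO entries VALUES (?, ?, ?)", rows)
        conn.execute("CREATE INDEX IF NOT EXISTS by_ip_type"
                     " ON entries (user_ip, action_type)")
        conn.commit()

        # entries from one IP, of one type, with a given some_key value
        cur = conn.execute(
            "SELECT * FROM entries WHERE user_ip = ? AND action_type = ?",
            ("1.2.3.4", "click"))
        hits = [r for r in cur if json.loads(r[2]).get("some_key") == "some_value"]

    The nested action_data stays as a JSON string and is filtered in Python; if one some_key is queried constantly, promoting it to its own indexed column would avoid even that.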


  • How do I Put Several Select Statements into Different Columns

    - by Russ Bradberry
    I basically have 7 select statements whose results I need output into separate columns. Normally I would use a crosstab for this, but I need a fast, efficient way to go about it, as there are over 7 billion rows in the table. I am using the Vertica database system. Below is an example of my statements:

        SELECT COUNT(user_id) AS '1' FROM event_log_facts WHERE date_dim_id=20100101
        SELECT COUNT(user_id) AS '2' FROM event_log_facts WHERE date_dim_id=20100102
        SELECT COUNT(user_id) AS '3' FROM event_log_facts WHERE date_dim_id=20100103
        SELECT COUNT(user_id) AS '4' FROM event_log_facts WHERE date_dim_id=20100104
        SELECT COUNT(user_id) AS '5' FROM event_log_facts WHERE date_dim_id=20100105
        SELECT COUNT(user_id) AS '6' FROM event_log_facts WHERE date_dim_id=20100106
        SELECT COUNT(user_id) AS '7' FROM event_log_facts WHERE date_dim_id=20100107
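
    A minimal sketch (my suggestion, not from the post) of the usual single-pass alternative: conditional aggregation. COUNT ignores NULLs, so CASE expressions split one scan into seven columns. The Python below only builds and prints the statement, since the table and column names come from the question but any driver/connection details would be assumptions:

        days = range(20100101, 20100108)
        cols = ",\n  ".join(
            "COUNT(CASE WHEN date_dim_id = {d} THEN user_id END) AS day_{i}"
            .format(d=d, i=i + 1)
            for i, d in enumerate(days)
        )
        sql = ("SELECT\n  " + cols +
               "\nFROM event_log_facts"
               "\nWHERE date_dim_id BETWEEN 20100101 AND 20100107")
        print(sql)   # execute via your Vertica cursor of choice

    One scan over the 7 billion rows instead of seven, and the WHERE clause still lets the date predicate prune.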


  • NoSQL or Ehcache caching?

    - by paddydub
    I'm building a route planner webapp using Spring/Hibernate/Tomcat and a MySQL database. The database contains read-only data, such as bus stop coordinates and bus times, which is never updated. I'm trying to make the app run faster; each time the application runs, it performs approx. 1000 reads against the database to calculate a route. I have set up Ehcache, which greatly improves the read times. I'm now setting up Terracotta + Ehcache distributed caching to share the cache between multiple Tomcat JVMs, but this seems a bit complicated. I've tried memcached, but it was not performing as fast as Ehcache. I'm wondering if MongoDB or Redis would be better suited. I have no experience with NoSQL, but I would appreciate it if anyone has any ideas. What I need is quick access to the read-only database.


  • What function does .NET NPV() use? Doesn't match manual calculations

    - by Matthew PK
    I am using the NPV() function in VB.NET to get the NPV for a set of cash flows. However, the result of NPV() is not consistent with my results from performing the calculation manually (nor with the Investopedia NPV calculator, which matches my manual results). My correct manual results and the NPV() results are close, within 5%, but not the same.

    Manually, I use the NPV formula:

        NPV = C0 + C1/(1+r)^1 + C2/(1+r)^2 + C3/(1+r)^3 + ... + Cn/(1+r)^n

    The manual result is stored in RunningTotal, with rate r = 0.04 and period n = 10. Here is my relevant code (EDIT: do I have an off-by-one bug somewhere?):

        YearCashOutFlow = CDbl(TxtAnnualCashOut.Text)
        YearCashInFlow = CDbl(TxtTotalCostSave.Text)
        YearCount = 1
        PAmount = -1 * (CDbl(TxtPartsCost.Text) + CDbl(TxtInstallCost.Text))
        RunningTotal = PAmount
        YearNPValue = PAmount
        AnnualRateIncrease = CDbl(TxtUtilRateInc.Text)
        While AnnualRateIncrease > 1
            AnnualRateIncrease = AnnualRateIncrease / 100
        End While
        AnnualRateIncrease = 1 + AnnualRateIncrease

        ' ZERO YEAR ENTRIES
        ListBoxNPV.Items.Add(Format(PAmount, "currency"))
        ListBoxCostSave.Items.Add("$0.00")
        ListBoxIRR.Items.Add("-100")
        ListBoxNPVCum.Items.Add(Format(PAmount, "currency"))
        CashFlows(0) = PAmount

        Do While YearCount <= CInt(TxtLifeOfProject.Text)
            ReDim Preserve CashFlows(YearCount)
            CashFlows(YearCount) = Math.Round(YearCashInFlow - YearCashOutFlow, 2)
            If CashFlows(YearCount) > 0 Then OnePos = True
            YearNPValue = CashFlows(YearCount) / (1 + DiscountRate) ^ YearCount
            RunningTotal = RunningTotal + YearNPValue
            ListBoxNPVCum.Items.Add(Format(Math.Round(RunningTotal, 2), "currency"))
            ListBoxCostSave.Items.Add(Format(YearCashInFlow, "currency"))
            If OnePos Then
                ListBoxIRR.Items.Add((IRR(CashFlows, 0.1)).ToString)
                ListBoxNPV.Items.Add(Format(NPV(DiscountRate, CashFlows), "currency"))
            Else
                ListBoxIRR.Items.Add("-100")
                ListBoxNPV.Items.Add(Format(RunningTotal, "currency"))
            End If
            YearCount = YearCount + 1
            YearCashInFlow = AnnualRateIncrease * YearCashInFlow
        Loop
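
    The "close, within 5%" discrepancy at r = 0.04 matches a known convention difference rather than an off-by-one in the loop: VB's Financial.NPV, like Excel's NPV, treats the first element of the array as landing at the end of period 1, so every cash flow is discounted one extra period relative to the manual formula above. A quick illustration in Python (mine, not from the post):

        def npv_manual(r, flows):
            # C0 at time 0 -- matches the formula in the question
            return sum(c / (1 + r) ** i for i, c in enumerate(flows))

        def npv_vb_style(r, flows):
            # first flow discounted one period -- the VB/Excel NPV convention
            return sum(c / (1 + r) ** (i + 1) for i, c in enumerate(flows))

        r, flows = 0.04, [-1000, 300, 300, 300, 300]
        print(npv_manual(r, flows))                     # ~88.97
        print(npv_vb_style(r, flows))                   # ~85.55 = 88.97 / 1.04
        print(npv_vb_style(r, flows[1:]) + flows[0])    # ~88.97 again

    If that is the cause here, passing only the year-1..n flows to NPV() and adding the year-0 outlay separately should reconcile the two results.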


  • Is there a way to animate on a Home Widget?

    - by David
    Hi all, I want to use an animation on a home page widget, i.e. an AppWidgetProvider. I was hoping to use the "frame animation" technique ( http://developer.android.com/guide/topics/graphics/2d-graphics.html#frame-animation ), which I've used successfully in an Activity, but I can't translate that code to an AppWidgetProvider. Basically, in an AppWidgetProvider I create and work with a RemoteViews object, which AFAIK doesn't provide a method to get a reference to an ImageView in the layout on which to call start() for the animation. There is also no handler or callback for when the widget is displayed, so I can't make the start() call there either. Is there another way this can be done? I suppose I could probably do the animation on my own with very fast onUpdate() calls on the widget, but that seems awfully expensive.


  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is:

        SELECT table1.term, table2.keyword
        FROM table1
        INNER JOIN table2
            ON table1.term LIKE CONCAT('%', table2.keyword, '%');

    This is not working; it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day?

    Notes on server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8 GB of RAM, fine-tuned for the MySQL workload.


  • WCF Caching Solution - Need Advice

    - by Brandon
    The company I work for is looking to implement a caching solution. We have several WCF web services hosted, and we need to cache certain values that can be persisted and fetched regardless of a client's session to a service. I am looking at the following technologies:

    - Caching Application Block 4.1
    - WCF TCP service using HttpRuntime caching
    - Memcached Win32 and client
    - Microsoft AppFabric Caching Beta 2

    Our test server is Windows Server 2003 with IIS6, but our production server is Windows Server 2008, so any of the above options would work (except for AppFabric Caching on our test server). Does anyone have any experience with any of these? This caching solution will not be used to store a lot of data, but the data will need to be fetched frequently and fast. Thanks in advance.


  • Nested query to find details in table B for maximum value in table A

    - by jpatokal
    I've got a huge bunch of flights travelling between airports. Each airport has an ID and (x,y) coordinates. For a given list of flights, I want to find the northernmost (highest y) airport visited. Here's the query I'm currently using:

        SELECT name, iata, icao, apid, x, y
        FROM airports
        WHERE y = (SELECT MAX(y)
                   FROM airports AS a, flights AS f
                   WHERE f.src_apid = a.apid OR f.dst_apid = a.apid)

    This works beautifully and reasonably fast as long as y is unique, but fails once it isn't. What I'd want to do instead is find the MAX(y) in the subquery, but return the unique apid for the airport with the highest y. Any suggestions?


  • How to calculate 2^n-1 efficiently without overflow?

    - by Ludwig Weinzierl
    I want to calculate 2^n - 1 for a 64-bit integer value. What I currently do is this:

        for (i = 0; i < n; i++)
            r |= 1ULL << i;

    and I wonder if there is a more elegant way to do it. The line is in an inner loop, so I need it to be fast. I thought of

        r = (1ULL << n) - 1;

    but it doesn't work for n = 64, because << is only defined for shift amounts up to 63.
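
    A minimal sketch of the standard workaround (my suggestion, not from the post): build the mask in two steps so no single shift reaches the full word width. In C this is (((1ULL << (n - 1)) - 1) << 1) | 1, whose largest shift amount is n - 1 <= 63; since (2^(n-1) - 1) * 2 + 1 = 2^n - 1, it is branch-free and exact for 1 <= n <= 64. Python integers don't overflow, so the code below just verifies the identity:

        def mask(n):
            """2**n - 1 for 1 <= n <= 64 without ever shifting by n bits."""
            return (((1 << (n - 1)) - 1) << 1) | 1

        assert all(mask(n) == 2 ** n - 1 for n in range(1, 65))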


  • Loading a DB table into nested dictionaries in Python

    - by Hossein
    Hi, I have a table in a MySQL DB which I want to load into a dictionary in Python. The table columns are: id, url, tag, tagCount. tagCount is the number of times that a tag has been repeated for a certain URL, so I need a nested dictionary (in other words, a dictionary of dictionaries) to load this table, because each URL has several tags, each with a different tagCount. The code that I used is this (the whole table is about 22,000 records):

        cursor.execute('''SELECT url, tag, tagCount FROM wtp''')
        urlTagCount = cursor.fetchall()
        d = defaultdict(defaultdict)
        for url, tag, tagCount in urlTagCount:
            d[url][tag] = tagCount
        print d

    First of all, I want to know if this is correct, and if it is, why does it take so much time? Are there any faster solutions? I am loading this table into memory for fast access, to get rid of the hassle of slow database operations, but at this speed it has become a bottleneck itself; it is even much slower than DB access. Can anyone help? Thanks.
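
    The loop itself is correct. A minimal sketch of two cheap changes plus timing (my suggestions, not from the post; it reuses the poster's cursor): use defaultdict(dict), since the inner mapping never needs a default, stream rows from the cursor instead of materializing fetchall(), and measure the query separately from the dict build. For 22,000 rows the build is sub-second work, so if the total is still slow, the time is almost certainly in the query/transfer or in printing the whole structure:

        import time
        from collections import defaultdict

        t0 = time.time()
        cursor.execute("SELECT url, tag, tagCount FROM wtp")
        t1 = time.time()

        d = defaultdict(dict)
        for url, tag, tag_count in cursor:   # iterate, no intermediate list
            d[url][tag] = tag_count
        t2 = time.time()

        print("query: %.2fs  build: %.2fs" % (t1 - t0, t2 - t1))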


  • What is the fastest way to copy content of DVD to hard disc using Linux

    - by Ritesh
    I have gone through some links which talk about the fastest way of copying files in Windows using FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED. They also talk about how read and write requests with buffer sizes of 256 KB and 128 KB are faster than 1 MB. One such link is: Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads? I am looking for a similar method in Linux which allows me to copy the contents of a DVD to hard disc quickly. So I want to know: are there file operation flags in Linux which would give me the best result, and which way of copying is best in Linux? My code is all in C++.
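
    For an optical drive the disc's own transfer rate is usually the ceiling, so plain sequential reads in large blocks (what dd if=/dev/dvd of=disc.iso bs=2M does) typically already saturate it; O_DIRECT and readahead tuning rarely change the picture. A minimal sketch of that loop (my illustration, in Python rather than the poster's C++; the device path is an assumption):

        BLOCK = 2 * 1024 * 1024   # 2 MiB per read

        with open("/dev/sr0", "rb") as src, open("disc.iso", "wb") as dst:
            while True:
                chunk = src.read(BLOCK)
                if not chunk:
                    break
                dst.write(chunk)

    The same structure in C++ is a read()/write() loop over a buffer of a few hundred KB to a few MB; within that range the exact size barely matters.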


  • Label in my table cell

    - by ven in Iphone world
    Hi, this is lak. Thank you for your fast feedback, but that did not work for me. I am using labels in a table:

        UILabel *label1 = (UILabel *)[cell viewWithTag:1];
        label1.backgroundColor = [UIColor clearColor];
        label1.text = aStation.station_name;
        label1.textColor = [UIColor colorWithRed:0.76 green:0.21 blue:0.07 alpha:1.0];
        [label1 setFont:[UIFont fontWithName:@"Trebuchet MS" size:15]];

    For this type of label I want to limit the number of characters. I hope I will get an answer.


  • Migrate from VB.net to C#

    - by rowmark
    Hello experts, I have been developing applications in VB.NET for the past 5 years. I tried to learn Java earlier and found it very difficult, so I stuck with VB.NET, and for me C# is more or less similar to Java. Now I cannot get away with that: I have to code in C#. Is there a way I can get up to speed with C# fast? I would really appreciate it if you could share your thoughts and any good resources I can try.


  • Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py

    - by crgwbr
    I'm writing a FastCGI application that makes use of SQLAlchemy and MySQL for persistent data storage. I have no problem connecting to the DB and setting up the ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But as soon as I query the DB (and push any changes from memory to storage), I get a 500 Internal Server Error, and my error.log records malformed header from script. Bad header=FROM tags : index.py, where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.
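
    The stray SQL fragment ("FROM tags") in the bad header suggests echoed SQL reaching stdout, which under (Fast)CGI is the response stream: anything written before the headers corrupts them. SQLAlchemy's echo flag logs every statement, and its default handler writes to sys.stdout. A minimal sketch of the fix (my suggestion, not from the post; the connection URL is a placeholder):

        import logging
        import sys
        from sqlalchemy import create_engine

        # either disable echoing outright...
        engine = create_engine("mysql://user:pw@dbhost/dbname", echo=False)

        # ...or keep the SQL log but route it to stderr instead of stdout
        handler = logging.StreamHandler(sys.stderr)
        logging.getLogger("sqlalchemy.engine").addHandler(handler)
        logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)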


  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations are for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable. EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread

            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()

            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))
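
    A minimal sketch of one direction to try (my suggestion, not from the post): the per-iteration cost is the loop-condition check as much as the += 1, so amortize the check over a burst of additions. This drops the background thread entirely and polls the clock once per burst; whether it beats the threaded flag check is machine-dependent, so it is worth measuring both. (time.monotonic would be preferable, but it arrived after Python 3.1, so time.time is used here.)

        import time

        def start(seconds):
            total = 0
            deadline = time.time() + seconds
            while time.time() < deadline:
                for _ in range(1000):   # one clock check per 1000 additions
                    total += 1
            return total

        if __name__ == '__main__':
            print('Relative speed:', start(60))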


  • HBase as web app backend

    - by NathanD
    Can anyone advise whether it is a good idea to have HBase as the primary data source for a web-based application? My primary concern is HBase's response time to queries. Is a sub-second response possible? Edit: more details about the app itself:

    - Amount of data: ~500 GB of text data, expected to reach 1 TB soon
    - Number of concurrent users: up to 50
    - The app will be used to present reports about data stored in HBase, like how many times the keyword "X" occurred in the last 24h.
    - For ~80% of requests from the app I will know the exact key; 20% will be scans (I'm looking into HBase schema design topics to make them run fast).


  • Optimizing a MySQL statement with a lot of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million records in it at any given time. Records are inserted at a very fast rate and then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        SELECT url
        FROM website ws, ping pi
        WHERE ws.idproxy = pi.idproxy
          AND pi.entrytime > CURDATE() - 3
          AND contentping + tcpping IS NOT NULL
        GROUP BY url
        HAVING SUM(contentping + tcpping) / (COUNT(*) - COUNT(errortype)) < 500
           AND COUNT(*) > 3
           AND COUNT(errortype) / COUNT(*) < .15
        ORDER BY SUM(contentping + tcpping) / (COUNT(*) - COUNT(errortype)) ASC;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should consider looking into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.

