Search Results

Search found 21838 results on 874 pages for 'long double'.

  • How long is each thread timeslice (quantum) in Windows XP?

    - by IHawk
    I am trying to find out how long each thread timeslice (quantum) lasts in Windows, but the only information I have found concerns clock ticks, variously quoted as 15 to 20 ms or 20 to 30 ms. How can I find this information? I think it may vary from OS to OS, but I am not certain. I appreciate any suggestion on this subject. Thank you.
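
    One way to pin part of this down programmatically: the Win32 call GetSystemTimeAdjustment reports the interval between clock interrupts, and per Windows Internals the default thread quantum is a small multiple of that interval (2 clock intervals on client editions such as XP, 12 on server editions). A minimal C# sketch of the query, assuming nothing beyond the standard kernel32 export:

        using System;
        using System.Runtime.InteropServices;

        class ClockInterval
        {
            // The second out-parameter receives the interval between clock
            // interrupts, in 100-nanosecond units.
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool GetSystemTimeAdjustment(
                out uint timeAdjustment,
                out uint timeIncrement,
                out bool timeAdjustmentDisabled);

            static void Main()
            {
                uint adjustment, increment;
                bool disabled;
                if (GetSystemTimeAdjustment(out adjustment, out increment, out disabled))
                {
                    // 10,000 units of 100 ns = 1 ms; 156,250 (15.625 ms) is typical,
                    // which would put the default client quantum near 31 ms.
                    Console.WriteLine("Clock interval: {0} ms", increment / 10000.0);
                }
            }
        }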

    Read the article

  • Need image, text, YouTube, etc. jQuery modal overlay with long description

    - by Jason
    Hi, I am looking for a jQuery modal window script for displaying images, text, HTML, videos, etc. There are a lot of great ones out there, but I am looking for one that allows for a long description (one that isn't pulled from the title), like Highslide, which lets you have a caption and displays the photo text to the right or left of your image in the same modal window. Due to licensing, I can't use Highslide, so I'm looking for something else. Thoughts?

    Read the article

  • Why can't a switch statement's data type be long in Java?

    - by Fostah
    Here's an excerpt from Sun's Java tutorials:

        A switch works with the byte, short, char, and int primitive data types. It also
        works with enumerated types (discussed in Classes and Inheritance) and a few
        special classes that "wrap" certain primitive types: Character, Byte, Short,
        and Integer (discussed in Simple Data Objects).

    There must be a good reason why the long primitive data type is not allowed. Anyone know what it is?

    Read the article

  • SQLiteOpenHelper.getWritableDatabase() null pointer exception on Android

    - by Drew Dara-Abrams
    I've had fine luck using SQLite with straight, direct SQL in Android, but this is the first time I'm wrapping a DB in a ContentProvider. I keep getting a null pointer exception when calling getWritableDatabase() or getReadableDatabase(). Is this just a stupid mistake I've made with initializations in my code, or is there a bigger issue?

        public class DatabaseProvider extends ContentProvider {
            ...
            private DatabaseHelper databaseHelper;
            private SQLiteDatabase db;
            ...
            @Override
            public boolean onCreate() {
                databaseHelper = new DatabaseProvider.DatabaseHelper(getContext());
                return (databaseHelper == null) ? false : true;
            }
            ...
            @Override
            public Uri insert(Uri uri, ContentValues values) {
                db = databaseHelper.getWritableDatabase(); // NULL POINTER EXCEPTION HERE
                ...
            }

            private static class DatabaseHelper extends SQLiteOpenHelper {
                public static final String DATABASE_NAME = "cogsurv.db";
                public static final int DATABASE_VERSION = 1;
                public static final String[] TABLES = {
                    "people", "travel_logs", "travel_fixes",
                    "landmarks", "landmark_visits", "direction_distance_estimates"
                };

                // people._id does not AUTOINCREMENT, because it's set based on the server's people.id
                public static final String[] CREATE_TABLE_SQL = {
                    "CREATE TABLE people (_id INTEGER PRIMARY KEY," +
                        "server_id INTEGER," +
                        "name VARCHAR(255)," +
                        "email_address VARCHAR(255))",
                    "CREATE TABLE travel_logs (_id INTEGER PRIMARY KEY AUTOINCREMENT," +
                        "server_id INTEGER," +
                        "person_local_id INTEGER," +
                        "person_server_id INTEGER," +
                        "start DATE," +
                        "stop DATE," +
                        "type VARCHAR(15)," +
                        "uploaded VARCHAR(1))",
                    "CREATE TABLE travel_fixes (_id INTEGER PRIMARY KEY AUTOINCREMENT," +
                        "datetime DATE, " +
                        "latitude DOUBLE, " +
                        "longitude DOUBLE, " +
                        "altitude DOUBLE," +
                        "speed DOUBLE," +
                        "accuracy DOUBLE," +
                        "travel_mode VARCHAR(50), " +
                        "person_local_id INTEGER," +
                        "person_server_id INTEGER," +
                        "travel_log_local_id INTEGER," +
                        "travel_log_server_id INTEGER," +
                        "uploaded VARCHAR(1))",
                    "CREATE TABLE landmarks (_id INTEGER PRIMARY KEY AUTOINCREMENT," +
                        "server_id INTEGER," +
                        "name VARCHAR(150)," +
                        "latitude DOUBLE," +
                        "longitude DOUBLE," +
                        "person_local_id INTEGER," +
                        "person_server_id INTEGER," +
                        "uploaded VARCHAR(1))",
                    "CREATE TABLE landmark_visits (_id INTEGER PRIMARY KEY AUTOINCREMENT," +
                        "server_id INTEGER," +
                        "person_local_id INTEGER," +
                        "person_server_id INTEGER," +
                        "landmark_local_id INTEGER," +
                        "landmark_server_id INTEGER," +
                        "travel_log_local_id INTEGER," +
                        "travel_log_server_id INTEGER," +
                        "datetime DATE," +
                        "number_of_questions_asked INTEGER," +
                        "uploaded VARCHAR(1))",
                    "CREATE TABLE direction_distance_estimates (_id INTEGER PRIMARY KEY AUTOINCREMENT," +
                        "server_id INTEGER," +
                        "person_local_id INTEGER," +
                        "person_server_id INTEGER," +
                        "travel_log_local_id INTEGER," +
                        "travel_log_server_id INTEGER," +
                        "landmark_visit_local_id INTEGER," +
                        "landmark_visit_server_id INTEGER," +
                        "start_landmark_local_id INTEGER," +
                        "start_landmark_server_id INTEGER," +
                        "target_landmark_local_id INTEGER," +
                        "target_landmark_server_id INTEGER," +
                        "datetime DATE," +
                        "direction_estimate DOUBLE," +
                        "distance_estimate DOUBLE," +
                        "uploaded VARCHAR(1))"
                };

                public DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                    Log.v(Constants.TAG, "DatabaseHelper()");
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    Log.v(Constants.TAG, "DatabaseHelper.onCreate() starting");
                    // create the tables
                    int length = CREATE_TABLE_SQL.length;
                    for (int i = 0; i < length; i++) {
                        db.execSQL(CREATE_TABLE_SQL[i]);
                    }
                    Log.v(Constants.TAG, "DatabaseHelper.onCreate() finished");
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    for (String tableName : TABLES) {
                        // note: the trailing space after EXISTS was missing as posted
                        db.execSQL("DROP TABLE IF EXISTS " + tableName);
                    }
                    onCreate(db);
                }
            }
        }

    As always, thanks for the assistance! Not sure if this detail helps, but here's LogCat showing the exception:

    Read the article

  • Compiler optimization of repeated accessor calls (C#)

    - by apocalypse9
    I've found recently that, for some types of financial calculations, the following pattern is much easier to follow and test, especially in situations where we may need to get numbers from various stages of the computation.

        public class nonsensical_calculator
        {
            ...
            double _rate;
            int _term;
            int _days;

            double monthlyRate { get { return _rate / 12; } }
            public double days { get { return (1 - i); } }   // 'i' is undefined as posted
            double ar { get { return (1 + days) / (monthlyRate * days); } }
            double bleh { get { return Math.Pow(ar - days, _term); } }
            public double raar { get { return bleh * ar / 2 * ar / days; } }
            ...
        }

    Obviously this often results in multiple calls to the same accessor within a given formula. I was curious whether the compiler is smart enough to optimize away these repeated calls when there is no intervening change in state, or whether this style causes a decent performance hit. Further reading suggestions are always appreciated.
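
    In the meantime, the defensive pattern is to hoist each accessor's value into a local so it is computed exactly once per formula: the C# JIT will happily inline trivial getters, but it makes no promise to merge repeated property calls as common subexpressions. A sketch along those lines, reusing the question's made-up formulas (with the undefined i replaced by a field, an assumption on my part):

        using System;

        public class NonsensicalCalculator
        {
            readonly double _rate;
            readonly int _term;
            readonly double _i;   // stands in for the question's undefined 'i'

            public NonsensicalCalculator(double rate, int term, double i)
            {
                _rate = rate;
                _term = term;
                _i = i;
            }

            double MonthlyRate { get { return _rate / 12; } }
            double Days { get { return 1 - _i; } }
            double Ar { get { return (1 + Days) / (MonthlyRate * Days); } }

            public double Raar()
            {
                // Evaluate each intermediate once and reuse the locals rather
                // than relying on the JIT to eliminate repeated getter calls.
                double days = Days;
                double ar = Ar;
                double bleh = Math.Pow(ar - days, _term);
                return bleh * ar / 2 * ar / days;
            }
        }

    This keeps the staged values inspectable in a debugger while removing the performance question entirely.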

    Read the article

  • P/Invoke to call a function with a pointer-to-pointer-to-pointer parameter

    - by jambodev
    Complete newbie in P/Invoke here. I have a function in C with this signature:

        int addPos(int init_array_size, int *cnt, int *array_size,
                   PosT ***posArray, PosT ***hPtr,
                   char *id, char *record_id, int num, char *code,
                   char *type, char *name, char *method,
                   char *cont1, char *cont2, char *cont_type,
                   char *date1, char *date_day, char *date2,
                   char *dsp, char *curr, char *contra_acc,
                   char *np, char *ten, char *dsp2,
                   char *covered, char *cont_subtype, char *Xcode,
                   double strike, int version, double t_price,
                   double long_qty, double short_qty, /* as posted these were named
                                                         'long' and 'short', which are C keywords */
                   double scale, double exrcised_price,
                   char *infoMsg);

    and here is how PosT looks:

        typedef union pu {
            struct dpos d;
            struct epo  e;
            struct bpos b;
            struct spos c;
        } PosT;

    My questions are:
    1. Do I need to define a class in C# representing PosT?
    2. How do I pass the PosT ***posArray parameter across from C# to C?
    3. How do I specify marshaling for it all?
    I do appreciate your help.
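
    For what it's worth, a minimal marshaling sketch under two loudly flagged assumptions: the inner structs are blittable, and the DPos fields below are placeholders that would have to mirror the real struct dpos member for member. A C union maps to a C# struct with LayoutKind.Explicit and every member at FieldOffset(0), and a PosT*** is easiest to treat as an opaque ref IntPtr at the boundary:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct DPos
        {
            public int placeholder;   // hypothetical: copy the real 'struct dpos' layout here
        }

        // A C union: every member starts at byte offset 0.
        [StructLayout(LayoutKind.Explicit)]
        struct PosT
        {
            [FieldOffset(0)] public DPos d;
            // e, b and c would be declared the same way, each at [FieldOffset(0)]
        }

        static class Native
        {
            // Hypothetical import showing only the pointer-heavy parameters;
            // the remaining char*/double arguments marshal as string/double.
            // Read the PosT*** result in steps with Marshal.ReadIntPtr and
            // Marshal.PtrToStructure<PosT>.
            [DllImport("poslib", CallingConvention = CallingConvention.Cdecl)]
            public static extern int addPos(
                int init_array_size,
                ref int cnt,
                ref int array_size,
                ref IntPtr posArray,
                ref IntPtr hPtr,
                [MarshalAs(UnmanagedType.LPStr)] string id);
        }

    So: yes, PosT needs a C# counterpart (a struct, not a class, if you want it blittable), and the triple pointer is usually handled manually through IntPtr rather than by the marshaler.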

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

        CREATE UNIQUE INDEX measurement_001_stc_idx
          ON climate.measurement_001
          USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 produced a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. The results now return in 5 seconds (down from ~85 seconds), but problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query

        SELECT
          extract(YEAR FROM m.taken) AS year,
          avg(m.amount) AS amount
        FROM
          climate.city c,
          climate.station s,
          climate.station_category sc,
          climate.measurement m
        WHERE
          c.id = 5182 AND
          earth_distance(
            ll_to_earth(c.latitude_decimal, c.longitude_decimal),
            ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
          s.elevation BETWEEN 0 AND 3000 AND
          s.applicable = TRUE AND
          sc.station_id = s.id AND
          sc.category_id = 1 AND
          sc.taken_start >= '1900-01-01'::date AND
          sc.taken_end <= '1996-12-31'::date AND
          m.station_id = s.id AND
          m.taken BETWEEN sc.taken_start AND sc.taken_end AND
          m.category_id = sc.category_id
        GROUP BY extract(YEAR FROM m.taken)
        ORDER BY extract(YEAR FROM m.taken)

    1900 to 1996: Index

        Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
            -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
              Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
              -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                  Index Cond: (id = 5182)
                -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                  -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                    Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                  -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                    Index Cond: (s.id = sc.station_id)
                    Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
              -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                  Filter: (m.category_id = 1)
                -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                  Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                  -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                    Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
        Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

        Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
            -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
              Hash Cond: (m.station_id = sc.station_id)
              Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
              -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                  Filter: (category_id = 1)
                -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                  Filter: (category_id = 1)
              -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                  Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                  -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                    Index Cond: (id = 5182)
                  -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                    Hash Cond: (s.id = sc.station_id)
                    -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                      Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                    -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                      -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                        Recheck Cond: (category_id = 1)
                        Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                        -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                          Index Cond: (category_id = 1)
        Total runtime: 86165.936 ms

    Read the article

  • Replace LinkedList element value through LinkedList.Enumerator

    - by Yan Cheng CHEOK
    I realize there is no way for me to replace a value through LinkedList.Enumerator. For instance, here is my attempt to port the Java code below to C#:

        // Java
        ListIterator<Double> itr1 = linkedList1.listIterator();
        ListIterator<Double> itr2 = linkedList2.listIterator();
        while (itr1.hasNext() && itr2.hasNext()) {
            Double d = itr1.next() + itr2.next();
            itr1.set(d);
        }

        // C#
        LinkedList<Double>.Enumerator itr1 = linkedList1.GetEnumerator();
        LinkedList<Double>.Enumerator itr2 = linkedList2.GetEnumerator();
        while (itr1.MoveNext() && itr2.MoveNext()) {
            Double d = itr1.Current + itr2.Current;
            itr1.Current = d;   // Oops. Compilation error!
        }

    Is there any other technique I can use?
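
    A sketch of the usual workaround: walk the LinkedListNode<T> chain directly instead of using the enumerator, since LinkedListNode<T>.Value is writable while Enumerator.Current is read-only:

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                var linkedList1 = new LinkedList<double>(new[] { 1.0, 2.0, 3.0 });
                var linkedList2 = new LinkedList<double>(new[] { 10.0, 20.0, 30.0 });

                // Walk both lists in lock step, writing through the nodes.
                LinkedListNode<double> n1 = linkedList1.First;
                LinkedListNode<double> n2 = linkedList2.First;
                while (n1 != null && n2 != null)
                {
                    n1.Value = n1.Value + n2.Value;
                    n1 = n1.Next;
                    n2 = n2.Next;
                }

                Console.WriteLine(string.Join(", ", linkedList1));   // 11, 22, 33
            }
        }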

    Read the article

  • IEnumerable.Cast<>

    - by Renato Person
    If I can implicitly cast an integer value to a double (and vice versa), like:

        int a = 4;
        double b = a;   // now b holds 4.0

    why can I not do this?

        int[] intNumbers = { 10, 6, 1, 9 };
        double[] doubleNumbers2 = intNumbers.Cast<double>().ToArray<double>();

    I get a "Specified cast is not valid" error message. Doing the opposite (casting from double to int) results in the same error. What am I doing wrong?
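
    The usual explanation: Cast<double>() hands each element around as a boxed object, and a boxed int can only be unboxed back to int, never directly to double, so the cast fails at runtime even though the compile-time conversion is implicit. A quick sketch of the per-element conversion that does work:

        using System;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                int[] intNumbers = { 10, 6, 1, 9 };

                // Convert each element explicitly instead of reinterpreting the box.
                double[] doubleNumbers = intNumbers.Select(n => (double)n).ToArray();

                Console.WriteLine(string.Join(", ", doubleNumbers));   // 10, 6, 1, 9
            }
        }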

    Read the article

  • Writing a dynamic array via pointer to a binary file

    - by Yijinsei
    Hi guys, I know this might be rather basic, but I have been trying to figure out how a dynamic array, created like this:

        double* data = new double[size];

    can be used as the source of data to be kept in a binary file, like this:

        ofstream fs("data.bin", ios::binary);
        fs.write(reinterpret_cast<const char *>(data), size * sizeof(double));

    When I finish writing, I attempt to read the file back through:

        double* data = new double[size];
        ifstream fs("data.bin", ios::binary);
        fs.read(reinterpret_cast<char *>(data), size * sizeof(double));

    However, I seem to encounter a run-time error when reading the data. Do you have any advice on how I should attempt to write a dynamic array, using pointers passed from other methods, to be stored in binary files?

    Read the article

  • Neo4j: performing shortest-path calculations on stored data

    - by paddydub
    I would like to store the following graph data in the database:

        graph.makeEdge("s", "c", "cost", (double) 7);
        graph.makeEdge("c", "e", "cost", (double) 7);
        graph.makeEdge("s", "a", "cost", (double) 2);
        graph.makeEdge("a", "b", "cost", (double) 7);
        graph.makeEdge("b", "e", "cost", (double) 2);

    Then I want to run the Dijkstra algorithm from a web servlet, to find shortest paths using the stored graph data, and print the resulting path to an HTML page from the servlet:

        Dijkstra<Double> dijkstra = getDijkstra(graph, 0.0, "s", "e");
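
    Setting the Neo4j machinery aside, here is a plain in-memory C# sketch of the same shortest-path computation over the edges listed above; it assumes .NET 6's PriorityQueue and is not the Neo4j graph-algo API:

        using System;
        using System.Collections.Generic;

        class DijkstraSketch
        {
            static void Main()
            {
                // The weighted edges from the question.
                var edges = new Dictionary<string, List<(string To, double Cost)>>
                {
                    ["s"] = new() { ("c", 7), ("a", 2) },
                    ["c"] = new() { ("e", 7) },
                    ["a"] = new() { ("b", 7) },
                    ["b"] = new() { ("e", 2) },
                };

                var dist = new Dictionary<string, double> { ["s"] = 0 };
                var queue = new PriorityQueue<string, double>();
                queue.Enqueue("s", 0);

                while (queue.TryDequeue(out var node, out var d))
                {
                    if (d > dist[node]) continue;   // stale queue entry
                    if (!edges.TryGetValue(node, out var outgoing)) continue;
                    foreach (var (to, cost) in outgoing)
                    {
                        double nd = d + cost;
                        if (!dist.TryGetValue(to, out var old) || nd < old)
                        {
                            dist[to] = nd;
                            queue.Enqueue(to, nd);
                        }
                    }
                }

                Console.WriteLine("s -> e costs {0}", dist["e"]);   // 11, via a and b
            }
        }

    For a graph this small the in-memory version is simplest; querying the stored graph only starts to pay off as the node count grows.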

    Read the article
