Search Results

Search found 7216 results on 289 pages for 'low cost'.


  • Windows FAT/NTFS Low-Level Disk Viewer (Norton DiskEdit alternative)

    - by Synetech inc.
    Hi, One of my most valuable software tools has always been Norton DiskEdit from Norton Utilities/Symantec SystemWorks. I have used it for years for many things, including, but not limited to, learning file systems. I am now in search of a similar tool that lets me view FAT and NTFS disks at both a low and a high level under Windows. I have seen Runtime Software's DiskExplorer, but unfortunately it is limited in a number of ways. Particularly annoying is that it does not really let you view the disk as structured: it cannot show the directory entries of a fragmented directory the way DiskEdit can without an exhaustive scan, which doesn't even work for FAT partitions that have cluster sizes <4KB. Does anyone know of a Windows alternative to Norton DiskEdit? Thanks a lot.

    Read the article

  • How low-power can a home server get?

    - by Halik
    I've got quite a simple question, actually: how green, low-power, and efficient an x86 home server can I build using consumer parts on a rather constrained budget? After looking through some Google hits I've found that a system based on a dual-core Atom, a modest mITX board (gigabit LAN, integrated audio and graphics, etc.), one RAM module, and one 'green' WD HDD, powered by a picoITX PSU, uses about 30 W at idle and up to 40 W under load. Can you get lower (and how much lower) than that? Maybe some VIA Nano chips, or a single-core Atom? My home server would take care of backups mixed with a little FTP/HTTP traffic.
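    As a rough worked example of what those watts are worth (the electricity rate is an assumption, not a figure from the question): at $0.12/kWh, a box idling at 30 W draws 30 W × 8,760 h ≈ 263 kWh per year, or roughly $32/year; getting idle down to 20 W would save only about $10/year, so expensive exotic low-power parts can take many years to pay for themselves.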

    Read the article

  • Server cost for smartphone app with web service

    - by FrankieA
    Hello, I am working on a smartphone application that will require a backend web service, but I'm absolutely clueless about how much it will cost. The web service will handle:
    - login of users
    - cataloging of our user base
    - holding minimal profile information for users (the only binary data is a display picture, which will be < 20k each)
    - performing some very minor calculation/algorithm before returning results
    All of the above will be communicated to the server from a smartphone (iPhone/BlackBerry/Android).
    Bandwidth requirements: we want to handle up to 10k users throughout the day. I predict 10k users * 50 HTTP requests a day = 500,000 requests a day * 30 = 15 million requests a month.
    Space requirements: data will be in a SQL database. I predict 1MB/user * 10k = 10GB + overhead. In other words, space is not a big issue.
    Software requirements (unless someone knows an alternative): Windows Server 2008 + IIS, and MSFT SQL Server.
    Note: This is 100% new to me, so please hit me with all you got. Do I need Windows Server, or are there alternatives? Is it better to get multiple cheap servers to distribute the load? Will Amazon S3 work for me? How about Windows Azure? Thank you!!
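    To put those numbers in perspective (assuming traffic spread evenly, which it won't be): 500,000 requests a day ÷ 86,400 seconds ≈ 6 requests per second on average, which is a light load for a single modern server; peak-hour traffic could easily be several times that, so it is the peak rate, not the monthly total, that should drive the sizing.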

    Read the article

  • How much does a computer actually cost?

    - by Cawas
    Ok, so when you buy a new notebook, you spend about one or two thousand dollars with the OS included. When you build your own desktop machine you can get as low as $100 for a good one today, with not a single piece of software included. It can't go much lower, but it can go a lot higher. People, including myself, tend to believe that's the price of a computer, but then comes the software. I just stumbled upon a nice little application I could use myself, but it's very specific, very tiny, and most people would never bother with it. And it costs "just $12". That is a lot for something I may use just once or twice! OS upgrades, hardware malfunctions, and your custom set of software actually raise the price of a computer quite a lot, thus this question: how much do we pay, in the end, for our personal computers? I'd like to see some statistics on that. Maybe divided into 3 categories or something, but some data with averages, minimum and "maximum" costs would be very nice. Maybe a "cost per year" figure would be nice too. Just wondering.

    Read the article

  • Low latency programming

    - by Sambatyon
    I've been reading a lot about low latency financial systems (especially since the famous case of corporate espionage), and the idea of low latency systems has been on my mind ever since. There are a million applications that could use what these guys are doing, so I would like to learn more about the topic. The thing is, I cannot find anything valuable on the topic. Can anybody recommend books, sites, or examples of low latency systems?

    Read the article

  • The big dude: server cost (€), and the "what must I look for" question

    - by Angelus
    Hi again, and sorry for the bad title. This time I'm thinking about a big project, and I have a big gap in my knowledge about servers and what they cost. The big project consists of a new table game for playing online with bets. Think of it like a poker server that must respond well to thousands of people at the same time. So here is the big question: what type of server must I look for? What features must I look at? Should I consider cloud computing? Thank you in advance.

    Read the article

  • AWS EC2: how to compute the cost

    - by EsseTi
    I'm new to AWS; I'm using the free tier right now and it's terrific. But in a year the free tier expires. I went to the pricing page at http://aws.amazon.com/ec2/pricing/ but I didn't really get how to compute the cost. The prices are in $ per hour, but I'm not sure what that means: if I need my application running 24h/365d, do I have to multiply by 8,760, or don't I? They write about "usage", but how do I compute that value? If I have one website where people spend something like 10 minutes a month in total, and another where people spend 750 hours a month, do I pay the same? I can't believe it would be the same price. PS: if I have a scheduled task, does it affect the usage?
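    A hedged worked example (the rate below is illustrative; check the current price list): an on-demand instance is billed for each hour the instance is running, not for how much traffic it serves. If a small Linux instance costs $0.085/hour and you keep it up around the clock, a 30-day month is 24 × 30 = 720 instance-hours, so 720 × $0.085 ≈ $61/month, whether visitors spend 10 minutes or 750 hours on the site; bandwidth and storage are metered separately on top of that. A scheduled task only affects the bill if it forces the instance to run (or start up) when it otherwise wouldn't.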

    Read the article

  • Low Latency Serial Communications In .Net

    - by bvillersjr
    I have been researching various third-party libraries and approaches to low latency serial communications in .Net. I've read enough that I have now come full circle and know as little as I did when I started, due to the variety of conflicting opinions. For example, the functionality in the Framework was ruled out by some convincing articles stating that "the Microsoft provided solution has not been stable across framework versions and is lacking in functionality." I have found articles bashing many of the older COM based libraries. I have found articles bashing the idea of a low latency .Net app as a whole, due to garbage collection. I have also read articles demonstrating how P/Invoking Windows API functionality for the purpose of low latency communication is unacceptable. THIS RULES OUT JUST ABOUT ANY APPROACH I CAN THINK OF! I would really appreciate some words from those with been there / done that experience. Ideally, I could locate a solid library / partner and not have to build the communications library myself. I have the following simple objectives:
    - Sustained low latency serial communication in C# / VB.Net, 32/64 bit
    - Well documented (if the solution is 3rd party)
    - Relatively unimpacted (communication- and latency-wise) by garbage collection
    - Flexible (I have no idea what I will have to interface with in the future!)
    The only requirement that I have for certain is that I need to be able to interface with many different industrial devices, such as RS485 based linear actuators, serial / microcontroller based gauges, and ModBus (also RS485) devices. Any comments, ideas, thoughts or links to articles that may iron out my confusion are much appreciated!

    Read the article

  • How to run Firefox in Protected Mode? (i.e. at low integrity level)

    - by Ian Boyd
    I noticed that Firefox, unlike Chrome and Internet Explorer, doesn't run at the Low Mandatory Level (aka Protected Mode, low integrity). (Screenshots showing the process integrity levels of Google Chrome, Microsoft Internet Explorer, and Mozilla Firefox omitted.) Following Microsoft's instructions, I can manually force Firefox into low integrity mode by using: icacls firefox.exe /setintegritylevel Low But Firefox doesn't react well to not running with enough rights. I like the security of knowing that my browser is running with fewer rights than I have. Is there a way to run Firefox in low-rights mode? Is Mozilla planning on adding "protected mode" sometime? Has someone found a workaround for Firefox not handling low-rights mode? Update: from a July 2007 interview with Mike Schroepfer, VP of Engineering at the Mozilla Foundation: "...we also believe in defense in depth and are investigating protected mode along with many other techniques to improve security for future releases." After a year and a half it doesn't seem like it is a priority.
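    (A hedged aside: as far as I know, running icacls firefox.exe with no options prints the file's ACL, and once the integrity level has been lowered the output should include a "Mandatory Label\Low Mandatory Level" entry, which is a quick way to verify the change took effect.)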

    Read the article

  • Multiple full joins in Postgres are slow

    - by blast83
    I have a program that uses the IMDB database, and I am seeing very slow performance on my query. It appears that the WHERE condition is not applied until after everything has been materialized. I looked around for hints to use, but nothing seems to work. Here is my query:

    SELECT *
    FROM name AS n1
    FULL JOIN aka_name ON n1.id = aka_name.person_id
    FULL JOIN cast_info AS t2 ON n1.id = t2.person_id
    FULL JOIN person_info AS t3 ON n1.id = t3.person_id
    FULL JOIN char_name AS t4 ON t2.person_role_id = t4.id
    FULL JOIN role_type AS t5 ON t2.role_id = t5.id
    FULL JOIN title AS t6 ON t2.movie_id = t6.id
    FULL JOIN aka_title AS t7 ON t6.id = t7.movie_id
    FULL JOIN complete_cast AS t8 ON t6.id = t8.movie_id
    FULL JOIN kind_type AS t9 ON t6.kind_id = t9.id
    FULL JOIN movie_companies AS t10 ON t6.id = t10.movie_id
    FULL JOIN movie_info AS t11 ON t6.id = t11.movie_id
    FULL JOIN movie_info_idx AS t19 ON t6.id = t19.movie_id
    FULL JOIN movie_keyword AS t12 ON t6.id = t12.movie_id
    FULL JOIN movie_link AS t13 ON t6.id = t13.linked_movie_id
    FULL JOIN link_type AS t14 ON t13.link_type_id = t14.id
    FULL JOIN keyword AS t15 ON t12.keyword_id = t15.id
    FULL JOIN company_name AS t16 ON t10.company_id = t16.id
    FULL JOIN company_type AS t17 ON t10.company_type_id = t17.id
    FULL JOIN comp_cast_type AS t18 ON t8.status_id = t18.id
    WHERE n1.id = 2003

    Every table is related to the others via foreign-key constraints on the join columns, and all the mentioned columns are indexed. The query plan details:

    Hash Left Join  (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)
      Hash Cond: (t8.status_id = t18.id)
      ->  Hash Left Join  (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)
            Hash Cond: (t10.company_type_id = t17.id)
            ->  Hash Left Join  (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)
                  Hash Cond: (t10.company_id = t16.id)
                  ->  Hash Left Join  (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)
                        Hash Cond: (t12.keyword_id = t15.id)
                        ->  Hash Left Join  (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)
                              Hash Cond: (t13.link_type_id = t14.id)
                              ->  Merge Left Join  (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)
                                    Merge Cond: (t6.id = t13.linked_movie_id)
                                    ->  Merge Left Join  (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)
                                          Merge Cond: (t6.id = t12.movie_id)
                                          ->  Merge Left Join  (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)
                                                Merge Cond: (t6.id = t19.movie_id)
                                                ->  Merge Left Join  (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)
                                                      Merge Cond: (t6.id = t11.movie_id)
                                                      ->  Merge Left Join  (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)
                                                            Merge Cond: (t6.id = t10.movie_id)
                                                            ->  Nested Loop Left Join  (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)
                                                                  Join Filter: (t6.kind_id = t9.id)
                                                                  ->  Merge Left Join  (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)
                                                                        Merge Cond: (t6.id = t8.movie_id)
                                                                        ->  Merge Left Join  (cost=1405731.84..1414378.65 rows=1044480 width=491) (actual time=59121.773..59121.775 rows=1 loops=1)
                                                                              Merge Cond: (t6.id = t7.movie_id)
                                                                              ->  Sort  (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)
                                                                                    Sort Key: t6.id
                                                                                    Sort Method: quicksort Memory: 17kB
                                                                                    ->  Hash Left Join  (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)
                                                                                          Hash Cond: (t2.movie_id = t6.id)
                                                                                          ->  Hash Left Join  (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)
                                                                                                Hash Cond: (t2.role_id = t5.id)
                                                                                                ->  Hash Left Join  (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)
                                                                                                      Hash Cond: (t2.person_role_id = t4.id)
                                                                                                      ->  Hash Left Join  (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)
                                                                                                            Hash Cond: (n1.id = t3.person_id)
                                                                                                            ->  Nested Loop Left Join  (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)
                                                                                                                  ->  Nested Loop Left Join  (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)
                                                                                                                        ->  Index Scan using name_pkey on name n1  (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)
                                                                                                                              Index Cond: (id = 2003)
                                                                                                                        ->  Index Scan using aka_name_idx_person on aka_name  (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)
                                                                                                                              Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))
                                                                                                                  ->  Index Scan using cast_info_idx_pid on cast_info t2  (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)
                                                                                                                        Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))
                                                                                                            ->  Hash  (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)
                                                                                                                  ->  Index Scan using person_info_idx_pid on person_info t3  (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)
                                                                                                                        Index Cond: (person_id = 2003)
                                                                                                      ->  Hash  (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)
                                                                                                            ->  Seq Scan on char_name t4  (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)
                                                                                                ->  Hash  (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)
                                                                                                      ->  Seq Scan on role_type t5  (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)
                                                                                          ->  Hash  (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)
                                                                                                ->  Seq Scan on title t6  (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)
                                                                              ->  Materialize  (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)
                                                                                    ->  Sort  (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)
                                                                                          Sort Key: t7.movie_id
                                                                                          Sort Method: external merge Disk: 24880kB
                                                                                          ->  Seq Scan on aka_title t7  (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)
                                                                        ->  Materialize  (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)
                                                                              ->  Sort  (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)
                                                                                    Sort Key: t8.movie_id
                                                                                    Sort Method: external merge Disk: 3136kB
                                                                                    ->  Seq Scan on complete_cast t8  (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)
                                                                  ->  Materialize  (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)
                                                                        ->  Seq Scan on kind_type t9  (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)
                                                            ->  Materialize  (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)
                                                                  ->  Sort  (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)
                                                                        Sort Key: t10.movie_id
                                                                        Sort Method: external merge Disk: 90960kB
                                                                        ->  Seq Scan on movie_companies t10  (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)
                                                      ->  Materialize  (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)
                                                            ->  Sort  (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)
                                                                  Sort Key: t11.movie_id
                                                                  Sort Method: external merge Disk: 735512kB
                                                                  ->  Seq Scan on movie_info t11  (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)
                                                ->  Materialize  (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)
                                                      ->  Sort  (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)
                                                            Sort Key: t19.movie_id
                                                            Sort Method: external merge Disk: 31720kB
                                                            ->  Seq Scan on movie_info_idx t19  (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)
                                          ->  Materialize  (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)
                                                ->  Sort  (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)
                                                      Sort Key: t12.movie_id
                                                      Sort Method: external merge Disk: 78888kB
                                                      ->  Seq Scan on movie_keyword t12  (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)
                                    ->  Materialize  (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)
                                          ->  Sort  (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)
                                                Sort Key: t13.linked_movie_id
                                                Sort Method: external merge Disk: 29632kB
                                                ->  Seq Scan on movie_link t13  (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)
                              ->  Hash  (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)
                                    ->  Seq Scan on link_type t14  (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)
                        ->  Hash  (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)
                              ->  Seq Scan on keyword t15  (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)
                  ->  Hash  (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)
                        ->  Seq Scan on company_name t16  (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)
            ->  Hash  (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)
                  ->  Seq Scan on company_type t17  (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)
      ->  Hash  (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)
            ->  Seq Scan on comp_cast_type t18  (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)
    Total runtime: 147055.016 ms

    Is there any way to force the name.id = 2003 filter to be applied before it tries to join all the tables together? As you can see, the end result is only 4 tuples, and it seems this should be a fast join: once the name clause has limited things down, the available indexes could be used, complex as the query is.
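    Not an authoritative fix, but one rewrite that may be worth trying: because the WHERE clause rejects every row in which n1 is null-extended, these FULL JOINs behave like LEFT JOINs for this particular predicate, and the planner has far more freedom to reorder LEFT JOINs than FULL JOINs. A sketch (untested against this schema) that filters name down to one row first:

    SELECT *
    FROM (SELECT * FROM name WHERE id = 2003) AS n1
    LEFT JOIN aka_name ON n1.id = aka_name.person_id
    LEFT JOIN cast_info AS t2 ON n1.id = t2.person_id
    LEFT JOIN person_info AS t3 ON n1.id = t3.person_id
    LEFT JOIN char_name AS t4 ON t2.person_role_id = t4.id
    LEFT JOIN role_type AS t5 ON t2.role_id = t5.id
    LEFT JOIN title AS t6 ON t2.movie_id = t6.id
    LEFT JOIN aka_title AS t7 ON t6.id = t7.movie_id
    LEFT JOIN complete_cast AS t8 ON t6.id = t8.movie_id
    LEFT JOIN kind_type AS t9 ON t6.kind_id = t9.id
    LEFT JOIN movie_companies AS t10 ON t6.id = t10.movie_id
    LEFT JOIN movie_info AS t11 ON t6.id = t11.movie_id
    LEFT JOIN movie_info_idx AS t19 ON t6.id = t19.movie_id
    LEFT JOIN movie_keyword AS t12 ON t6.id = t12.movie_id
    LEFT JOIN movie_link AS t13 ON t6.id = t13.linked_movie_id
    LEFT JOIN link_type AS t14 ON t13.link_type_id = t14.id
    LEFT JOIN keyword AS t15 ON t12.keyword_id = t15.id
    LEFT JOIN company_name AS t16 ON t10.company_id = t16.id
    LEFT JOIN company_type AS t17 ON t10.company_type_id = t17.id
    LEFT JOIN comp_cast_type AS t18 ON t8.status_id = t18.id;

    With this many relations it may also be worth checking that join_collapse_limit and from_collapse_limit (default 8) are at least as large as the number of joins; below that threshold the planner stops searching for better join orders.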

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

    CREATE UNIQUE INDEX measurement_001_stc_idx
      ON climate.measurement_001
      USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude), because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

    sc.taken_start >= '1900-01-01'::date AND
    sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

    SELECT
      extract(YEAR FROM m.taken) AS year,
      avg(m.amount) AS amount
    FROM
      climate.city c,
      climate.station s,
      climate.station_category sc,
      climate.measurement m
    WHERE
      c.id = 5182 AND
      earth_distance(
        ll_to_earth(c.latitude_decimal, c.longitude_decimal),
        ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
      s.elevation BETWEEN 0 AND 3000 AND
      s.applicable = TRUE AND
      sc.station_id = s.id AND
      sc.category_id = 1 AND
      sc.taken_start >= '1900-01-01'::date AND
      sc.taken_end <= '1996-12-31'::date AND
      m.station_id = s.id AND
      m.taken BETWEEN sc.taken_start AND sc.taken_end AND
      m.category_id = sc.category_id
    GROUP BY extract(YEAR FROM m.taken)
    ORDER BY extract(YEAR FROM m.taken)

    1900 to 1996: Index

    Sort  (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
      Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
      Sort Method: quicksort Memory: 32kB
      ->  HashAggregate  (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
            ->  Nested Loop  (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
                  Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
                  ->  Nested Loop  (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                        Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                        ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                              Index Cond: (id = 5182)
                        ->  Nested Loop  (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                              ->  Seq Scan on station_category sc  (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                                    Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                              ->  Index Scan using station_pkey1 on station s  (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                                    Index Cond: (s.id = sc.station_id)
                                    Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
                  ->  Append  (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                        ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                              Filter: (m.category_id = 1)
                        ->  Bitmap Heap Scan on measurement_001 m  (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                              Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                              ->  Bitmap Index Scan on measurement_001_stc_idx  (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                                    Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
    Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

    Sort  (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
      Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
      Sort Method: quicksort Memory: 32kB
      ->  HashAggregate  (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
            ->  Hash Join  (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
                  Hash Cond: (m.station_id = sc.station_id)
                  Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
                  ->  Append  (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                        ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                              Filter: (category_id = 1)
                        ->  Seq Scan on measurement_001 m  (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                              Filter: (category_id = 1)
                  ->  Hash  (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                        ->  Nested Loop  (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                              Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                              ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                                    Index Cond: (id = 5182)
                              ->  Hash Join  (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                                    Hash Cond: (s.id = sc.station_id)
                                    ->  Seq Scan on station s  (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                                          Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                                    ->  Hash  (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                                          ->  Bitmap Heap Scan on station_category sc  (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                                                Recheck Cond: (category_id = 1)
                                                Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                                                ->  Bitmap Index Scan on station_category_station_category_idx  (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                                                      Index Cond: (category_id = 1)
    Total runtime: 86165.936 ms

    Read the article

  • Low latency technologies for c++, c# and java?

    - by James
    I've been reading job descriptions, and many mention 'low latency'. However, I wondered if someone could clarify what types of technologies this refers to? One of the adverts mentioned 'ACE', which I googled, finding out it was some Cisco telephony technology. If you were hiring someone for a low latency role, what would you use as a checklist for ensuring they knew about low latency programming? I'm using this to learn more about low latency programming myself.

    Read the article

  • Server cost/requirements for a web site with thousands of concurrent users?

    - by Angelus
    I'm working on a big project, and I do not have much experience with servers and how much they cost. The big project consists of a new table game for online playing and betting. Basically, a poker server that must stay responsive with thousands of concurrent users. What type of server must I look for? What features, hardware or software, are required? Should I consider cloud computing? Thank you in advance.

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
    I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise-sized clients has been different: adverse to OSS but receptive to an established vendor. The response I got was:

    "Found it easier to influence change by showing how X can't solve our problems or X is extremely costly to scale. Money talks."

    I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen, though, is we get tunnel vision and don't present a full view of the costs associated with a solution.

    An "Acura"te Example (I'm so clever…)

    This is my dream vehicle: a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We're a family of 4 (5 if my daughters ever get their wish of adding a dog), and I've always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav 4 has hit the 8–10 year mark.

    MSRP – $62,890

    But as we all know, that's not *really* the cost of the vehicle. There are taxes and fees added on, there's the extended warranty if I choose to purchase it, there's the finance rate that needs to be factored in…

    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    SubTotal – $72,936
    Finance Charge – $1,094.04
    Grand Total – $74,030

    Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Now we have our true price… or do we?

    Lifetime of the Vehicle

    I'm expecting to have this vehicle for 7–10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle come on top of it. I did some research, and here's what I've found:

    Fuel and Mileage

    Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I'll be gassing up more often: the Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I'll be spending approximately $8,000–$10,000 more on gas over a 10-year period than with my current Rav4.

    Service Options Limited

    Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there's only one Acura dealership in all of Winnipeg! So if, for whatever reason, I'm not satisfied with the level of service, I'm stuck.

    Non-Warranty Service Work

    Also, let's not forget that there's a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I'll need to get new tires at the 5-year mark as well, which can easily be $1,200–$1,500 (I just paid $1,000 for new tires for the Rav4, and we're at the 5-year mark). Now these aren't going to be *new* costs that I'm not used to from our existing vehicles, but they should still be factored in. I'd budget $500/year, or $5,000 over the 10 years I'll own the vehicle.

    Final Assessment

    So let's re-assess the true cost of my dream MDX:

    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    Finance Charge – $1,094
    Gas – $10,000
    Service Work – $5,000
    Grand Total – $89,030

    So now I have a better idea of the 10-year cost overall, and I've identified some concerns with local service availability. There's now much more to consider than the original $62,890 price tag.

    Tying This Back to Technology Solutions

    The process we just went through is no different from what organizations do when considering implementing a new system, technology, or technology-based solution within their environments. It's easy to tout the short-term cost savings of a particular product/platform/technology in a vacuum. But it's when you consider the wider impact that the true cost comes into play.

    Let's create a scenario: a company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are:
    - Because it's open source, it's free
    - The organization would have access to the source code, so they could alter it however they wished
    - It provides features not available with the current reporting suite

    At first this sounds great to the management and executive, but then they start asking some questions and uncover more information:
    - The OSS product is built on a technology not used anywhere within the organization
    - There are no vendors offering product support for the OSS product
    - The OSS product requires a specific server platform to operate on, one that's not standard in the organization

    All of a sudden, the true cost of implementing this solution is starting to become clearer. The company might save money on licensing costs, but their training costs would increase significantly: developers would need to learn how to develop in the technology the OSS solution was built on, IT staff must learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem was found there would be no vendor to contact for support. The true cost of implementing a "free" OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that's just the short-term cost. Now the organization must ensure they maintain trained staff who can make changes to the OSS reporting solution and IT staff that will stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let's not forget end-user training: in our example, anyone that runs reports will need to be trained on how to use the new system.

    Here's the Point

    We still tend to look at software in an "off the shelf" kind of way. It's very easy to say "oh, this product is better than vendor X's product – and it's free because it's OSS!", but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you're pitching an improved business process, a new system, or a new technology, you need to consider the bigger-picture costs of implementation. What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.

    Read the article

  • Delphi low-level machine parameter access

    - by tonyhooley.mp
    There are many very low-level parameters measured by PCs and their processors (e.g. core temperatures, fan speeds, voltage levels at various parts of the motherboard and processor internals) which are available and displayed by the BIOS, and by some application programs. How does one access this low-level (real-time) data from Delphi? Is there a library? Is there a Windows API?

    Read the article

  • Writing low latency Java

    - by user997112
    Are there any Java-specific techniques (things which wouldn't apply to C++) for writing low latency code in Java? I often see Java low latency roles, and they ask for experience writing low latency Java, which sometimes seems a little bit of an oxymoron. The only thing I could think of is experience with JNI, outsourcing I/O calls to native code. Also possibly using the Disruptor pattern, but that's not an actual technology. Are there any Java-specific tips for writing low latency code? I am aware there is a Real-Time Java spec, but I have been warned that real-time is not the same as low latency...

    Read the article

  • Strange: Planner takes the decision with the lower cost, but (very) long query runtime

    - by S38
    Facts:
    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on the joining columns are set
    - Table statistics (analyze, vacuum analyze) are up-to-date
    - The only table used is "node", with its various partitioned sub-tables
    - Recursive query (pg 8.4)

    Now here is the explained query:

    WITH RECURSIVE rows AS (
        SELECT *
        FROM (
            SELECT r.id, r.set, r.parent, r.masterid
            FROM d_storage.node_dataset r
            WHERE masterid = 3533933
        ) q
        UNION ALL
        SELECT *
        FROM (
            SELECT c.id, c.set, c.parent, r.masterid
            FROM rows r
            JOIN a_storage.node c ON c.parent = r.id
        ) q
    )
    SELECT r.masterid, r.id AS nodeid
    FROM rows r

    QUERY PLAN:

    CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
      CTE rows
        ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
              ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                    Index Cond: (masterid = 3533933)
              ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                    Hash Cond: (c.parent = r.id)
                    ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                          ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                          ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                          ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                          ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                          ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                    ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                          ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
    Total runtime: 172111.371 ms
    (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following:

    SET enable_hashjoin TO false;

    the explained query looks like this:

    QUERY PLAN:

    CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
      CTE rows
        ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
              ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                    Index Cond: (masterid = 3533933)
              ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                    Join Filter: (r.id = c.parent)
                    ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                    ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                          ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                          ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                Recheck Cond: (c.parent = r.id)
                                ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                      Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
    Total runtime: 49.349 ms
    (21 rows)

    This is incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first one. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans; then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help with this confusing situation? thx, R.
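    A hedged workaround while the estimates are off (the planner expects the recursive join to yield ~600,070 rows when it actually yields 1, which is what makes the hash join look cheap): scope the override to a single transaction with SET LOCAL, so the rest of the session keeps the default planner settings. A sketch, with the inner subquery wrappers dropped for brevity (they don't change the result):

    BEGIN;
    -- Disable hash joins for this transaction only; the setting
    -- reverts automatically at COMMIT or ROLLBACK.
    SET LOCAL enable_hashjoin = off;
    WITH RECURSIVE rows AS (
        SELECT r.id, r.set, r.parent, r.masterid
        FROM d_storage.node_dataset r
        WHERE masterid = 3533933
        UNION ALL
        SELECT c.id, c.set, c.parent, r.masterid
        FROM rows r
        JOIN a_storage.node c ON c.parent = r.id
    )
    SELECT r.masterid, r.id AS nodeid FROM rows r;
    COMMIT;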

    Read the article

  • release does not free up memory in low-memory condition

    - by user322945
    I am trying to follow Apple's recommendation for handling low-memory warnings (found in Session 416 of the WWDC 2009 videos) by freeing up my dataController object (referenced in my app delegate), which holds a large number of strings read from a plist:

    - (void)applicationDidReceiveMemoryWarning:(UIApplication *)application {
        [_dataController release];
        _dataController = nil;
        // Note: _dataController is nil by this point, so messaging it
        // with retainCount always returns 0.
        NSLog(@"applicationDidReceiveMemoryWarning bottom... retain count: %i", (int)[_dataController retainCount]);
    }

    But when I run ObjectAlloc within Instruments and simulate a low-memory condition, I don't see a decrease in the memory used by my app, even though I see the NSLog statements written out and the retain count is zero for the object. I do pass references to the app delegate around to some of the view controllers. But the code above releases the reference to the _dataController object (containing the plist data), so I would expect the memory to be freed. Any help would be appreciated.

    Read the article

  • Cost of using ASP.NET

    - by Mackristo
    One thing that I keep hearing in reference to ASP.NET and MSFT technologies is that they cost money to use. Often when they are compared to open source languages, someone will mention that one factor in favor of open source is that it's free (to an extent). My question is: when does ASP.NET actually cost money to use, in terms of using the proprietary technology? Understandably there are hosting fees, but I'm curious about the fees outside of those. I'm especially curious about this as it relates to one-person, smaller-site development (not team/large-enterprise work). Any help is appreciated. (Edits) Some excellent answers, much appreciated. The projects that I'm looking to use the technologies for would be personal sites and very small business sites (1 or 2). The intent, of course, is that these projects get much bigger. It seems that for commercial production, fees will apply. What about just basic dynamic "shared hosting" sites that provide information?

    Read the article

  • Cost of Design vs Development

    - by Marco
    I usually develop content management solutions in PHP. I work with designers who create the front end (HTML & CSS), and I usually develop the backend (PHP & MySQL) of said CMS. I know that the cost of a website may vary depending on the complexity of the HTML or the backend but, taking a very basic website as a reference, what percentage of the cost should go to design and what percentage to development? I want to make clear that I, as the developer, am in charge of all the effects and animations on the websites, using some of the JavaScript frameworks, not the designer. I just want to know what's a fair share for me and for the designer.

    Read the article

  • UITableView's cellForRowAtIndexPath is not getting called after low memory warning

    - by Jinesh
    I am new to Cocoa and Objective-C. I am working on an application which has two controllers, with one table view in each; clicking an item in this table leads to another controller being pushed onto the stack. All was working fine until I started handling the low-memory warning in the app delegate. What I am doing in the app delegate's applicationDidReceiveMemoryWarning is deleting all of my model and popping all controllers back to the root view using popToRootViewControllerAnimated. Now my problem starts: once a low-memory warning is received, the table's cellForRowAtIndexPath is not getting called. All the other methods of UITableViewDataSource are properly called. What I get on screen is a blank white screen. I am testing my app on iPhone OS 3.0, and development is done in Xcode 3.1.3. Hope you guys can help me nail this. Thanks in advance, Jinesh.

    Read the article

  • low latency data link pc to android

    - by steveh
    Can anyone recommend a method for a low-latency, bi-directional comms link between my PC app and an Android slave app? The app I have works now via WiFi, but the latency is too high (about 300 ms); I'm looking to get it down to 10 ms or so. The Android device acts like a glorified remote control for the game on the PC: the apk displays a low-res image and sends button presses back to the game, and the round trip needs to be quick. I'm thinking the only option besides the network is to connect a USB cable, but I don't see a lot of support for that path, and I'm not even sure it would be lower latency than WiFi. Any ideas, please?

    Read the article

  • atomic operation cost

    - by osgx
    Hello. What is the cost of an atomic operation? How many cycles does it consume? Will it pause other processors on SMP or NUMA systems, or will it block memory accesses? Will it flush the reorder buffer in an out-of-order CPU? What effects will it have on the cache? Thanks.

    Read the article
