Search Results

Search found 6199 results on 248 pages for 'fast enumeration'.


  • Primary reasons why programming language runtimes use stacks?

    - by manuel aldana
    Many programming language runtime environments use stacks as their primary storage structure (see, for example, how JVM bytecode maps to the runtime). Quickly recalling, I see the following advantages: a simple structure (push/pop) that is trivial to implement; most processors are optimized for stack operations anyway, so it is very fast; and fewer problems with memory fragmentation, because allocation is just a matter of moving the stack pointer up and down, and complete blocks of memory are freed by resetting the pointer to the last entry offset. Is the list complete, or did I miss something? Are there programming language runtime environments that do not use stacks for storage at all?
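
    A minimal sketch (hypothetical opcode names, not any real VM's instruction set) of why a stack suits expression evaluation: every instruction only touches the top of the stack, so the runtime needs no operand addressing or register allocation.

      #include <cstdio>
      #include <utility>
      #include <vector>

      enum Op { PUSH, ADD, MUL, PRINT };

      void run(const std::vector<std::pair<Op, int> >& code)
      {
          std::vector<int> stack;                        // the operand stack
          for (const auto& ins : code) {
              switch (ins.first) {
              case PUSH: stack.push_back(ins.second); break;
              case ADD:  { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
              case MUL:  { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
              case PRINT: std::printf("%d\n", stack.back()); break;
              }
          }
      }

      int main()
      {
          // (2 + 3) * 4 compiles to straight-line pushes and operations
          run({{PUSH, 2}, {PUSH, 3}, {ADD, 0}, {PUSH, 4}, {MUL, 0}, {PRINT, 0}});
      }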

    Read the article

  • Efficient implementation of natural logarithm (ln) and exponentiation

    - by Donotalo
    Basically, I'm looking for implementations of the log() and exp() functions provided in the C library <math.h>. I'm working with 8-bit microcontrollers (OKI 411 and 431). I need to calculate Mean Kinetic Temperature (MKT). The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() in <math.h>, but calling either function and linking with the library increases the code size by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K of code memory. The implementation I'm looking for should not use any other C library functions (like pow(), sqrt(), etc.), because all the library functions are packed into one library, and even if a single function is called, the linker will pull the whole 5K library into code memory.
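
    Not the vendor library's code, just one possible freestanding sketch with no <math.h> calls: both functions use range reduction plus a short series. The term counts and float precision are assumptions that would need checking against the MKT accuracy requirement and the OKI compiler's float support.

      float my_ln(float x)
      {
          int k = 0, n;
          float z, z2, term, sum = 0.0f;

          /* range-reduce x into [0.5, 2) by pulling out powers of 2 */
          while (x >= 2.0f) { x *= 0.5f; k++; }
          while (x <  0.5f) { x *= 2.0f; k--; }

          /* ln(x) = 2*(z + z^3/3 + z^5/5 + ...) with z = (x-1)/(x+1) */
          z = (x - 1.0f) / (x + 1.0f);
          z2 = z * z;
          term = z;
          for (n = 1; n <= 9; n += 2) {
              sum += term / n;
              term *= z2;
          }
          return 2.0f * sum + k * 0.693147181f;   /* add back k*ln(2) */
      }

      float my_exp(float x)
      {
          int k, n;
          float r, term = 1.0f, sum = 1.0f;

          /* exp(x) = 2^k * exp(r), with r = x - k*ln(2) and |r| <= ln(2)/2 */
          k = (int)(x / 0.693147181f + (x >= 0.0f ? 0.5f : -0.5f));
          r = x - k * 0.693147181f;
          for (n = 1; n <= 8; n++) {
              term *= r / n;
              sum += term;
          }
          /* scale by 2^k without calling pow() */
          while (k > 0) { sum *= 2.0f; k--; }
          while (k < 0) { sum *= 0.5f; k++; }
          return sum;
      }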

    Read the article

  • What is the fastest way to copy the contents of a DVD to hard disk using Linux?

    - by Ritesh
    I have gone through some links that talk about the fastest way of copying files on Windows using FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED. They also explain how read and write requests with buffer sizes of 256KB and 128KB are faster than 1MB. The link for that is: Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads? I am looking for a similar method on Linux that allows me to copy the contents of my DVD to the hard disk quickly. Are there file operation flags in Linux that would give me the best result, or which way of copying on Linux is best? My code is all in C++.
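
    Not necessarily the fastest possible approach, but a hedged sketch of a plain sequential copy with a large buffer plus a posix_fadvise() read-ahead hint; the 1 MiB buffer size is a guess worth benchmarking against 256 KiB.

      #include <fcntl.h>
      #include <unistd.h>
      #include <cstdio>
      #include <vector>

      bool copy_file(const char* src, const char* dst)
      {
          int in = open(src, O_RDONLY);
          if (in < 0) { perror("open src"); return false; }
          int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (out < 0) { perror("open dst"); close(in); return false; }

          // Tell the kernel we will read sequentially so it can read ahead.
          posix_fadvise(in, 0, 0, POSIX_FADV_SEQUENTIAL);

          std::vector<char> buf(1 << 20);          // 1 MiB; try smaller sizes too
          ssize_t n;
          while ((n = read(in, buf.data(), buf.size())) > 0) {
              ssize_t off = 0;
              while (off < n) {                    // write() may be partial
                  ssize_t w = write(out, buf.data() + off, n - off);
                  if (w < 0) { perror("write"); close(in); close(out); return false; }
                  off += w;
              }
          }
          close(in);
          close(out);
          return n == 0;                           // 0 means we reached EOF cleanly
      }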

    Read the article

  • How to make a program like Fraps

    - by blood
    I want to make a program that will capture video. What is the best way to capture it? I know C++ and I'm learning assembly, and I found in my assembly book that I can get data from the video card, I think. Would that be the best way? I know Fraps hooks into programs, but I want my program to capture the full screen. So I want something fast, with low memory usage if I can, and something I can use on other computers with the same hardware.
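
    Reading straight from the video card is not really how capture tools work on Windows; they grab frames through an API (GDI, DirectX hooks, etc.). A rough GDI sketch of grabbing one full-screen frame, offered only as a starting point; it is far slower than the hooking approach Fraps uses.

      #include <windows.h>

      // Copies the current screen contents into a bitmap.
      // The caller must DeleteObject() the returned HBITMAP.
      HBITMAP CaptureScreen()
      {
          int w = GetSystemMetrics(SM_CXSCREEN);
          int h = GetSystemMetrics(SM_CYSCREEN);

          HDC hdcScreen = GetDC(NULL);                     // DC for the whole screen
          HDC hdcMem    = CreateCompatibleDC(hdcScreen);   // memory DC to copy into
          HBITMAP hbmp  = CreateCompatibleBitmap(hdcScreen, w, h);

          HGDIOBJ old = SelectObject(hdcMem, hbmp);
          BitBlt(hdcMem, 0, 0, w, h, hdcScreen, 0, 0, SRCCOPY);  // one frame
          SelectObject(hdcMem, old);

          DeleteDC(hdcMem);
          ReleaseDC(NULL, hdcScreen);
          return hbmp;   // encode/compress this, then repeat per frame
      }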

    Read the article

  • imported vm gives "failed to open/create network" error

    - by Colleen
    Steps: (1) created a VM in Windows; (2) partitioned the drive and installed Ubuntu; (3) exported the VM I created; (4) mounted the Windows drive in Ubuntu; (5) imported the VM from the export on the mounted drive; (6) tried to start the VM and got the following error: "Failed to open a session for the virtual machine XXXX. Failed to open/create the internal network 'HostInterfaceNetworking-Intel(R) 82579LM Gigabit Network Connection' (you might need to modprobe vboxnetflt to make it accessible) (VERR_INTNET_FLT_IF_NOT_FOUND). Result Code: NS_ERROR_FAILURE (0x80004005) Component: Console Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}" Network settings: Adapter 1: PCnet-FAST III (Bridged adapter, Intel(R) 82579LM Gigabit Network Connection)
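
    The error text itself suggests the VirtualBox network filter kernel module is not loaded on the Ubuntu host. A hedged first step (exact package and service names vary by VirtualBox version):

      # check whether the network filter module is loaded
      lsmod | grep vboxnet

      # load it, as the error message suggests
      sudo modprobe vboxnetflt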

    Read the article

  • How do I correctly add brackets to this code?

    - by Mohammad
    This code removes whitespace (FYI, it is reputed to be very fast):

      function wSpaceTrim(s){
          var start = -1, end = s.length;
          while (s.charCodeAt(--end) < 33 ); //here
          while (s.charCodeAt(++start) < 33 ); //here also
          return s.slice( start, end + 1 );
      }

    The while loops don't have brackets; how would I correctly add brackets to this code? while(iMean){ like this; } Thank you so much!
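
    For reference, a braced version that behaves identically: the loop bodies are intentionally empty, because the increment/decrement inside each condition does all the work.

      function wSpaceTrim(s) {
          var start = -1, end = s.length;
          while (s.charCodeAt(--end) < 33) {
              // empty body: --end in the condition does the work
          }
          while (s.charCodeAt(++start) < 33) {
              // empty body
          }
          return s.slice(start, end + 1);
      }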

    Read the article

  • C: Pointers to any type?

    - by dragme
    I hear that C isn't very type-safe, and I think I could use that to my advantage in my current project. I'm designing an interpreter with the goal of making the VM extremely fast, much faster than Ruby and Python, for example. Now, I know that premature optimization "is the root of all evil", but this is more of a conceptual problem. I have to use some sort of struct to represent all values in my language (from numbers and strings to lists and maps). Would the following be possible? struct Value { ValueType type; void* value; }; I would store the actual values elsewhere, e.g. in a separate array for strings and integers; value would then point to some member in that table. I would always know the type of the value via the type variable, so there wouldn't be any problems with type errors. Now: is this even possible in terms of syntax and typing?
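
    Yes, this pattern is common in interpreters. A hedged sketch (type and member names are made up) using a tagged union, which avoids a separate allocation for small values; a void* member works the same way, just with explicit casts after checking the tag.

      /* A tagged value: the enum says how to interpret the payload. */
      typedef enum { VAL_INT, VAL_DOUBLE, VAL_STRING } ValueType;

      typedef struct {
          ValueType type;
          union {
              long    i;
              double  d;
              char   *s;      /* points into a separate string table */
          } as;
      } Value;

      static long value_as_int(const Value *v)
      {
          /* callers check v->type first, so reading the union member is safe */
          return v->type == VAL_INT ? v->as.i : 0;
      }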

    Read the article

  • does a ruby on rails rack class get access to the entire rails environment?

    - by Andrew Arrow
    That is, when the def call(env) method is invoked by hitting any URL, can I make ActiveRecord queries inside that method, use classes defined in lib, and so on? Or is it more like an irb console without the Rails environment loaded? Another way to put it, with a rake task example:

      task :foo => :environment do
        # with env
      end

      task :foo2 do
        # without env
      end

    I would think Rack classes would NOT get the environment, so they are super fast and don't take on all the overhead of a normal Rails request. But that doesn't seem to be the case: I CAN make ActiveRecord queries inside my Rack class. So what is the advantage of Rack, then?
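
    A minimal sketch (the class name and User model are made up): when a middleware is inserted into a Rails app's stack via config.middleware.use, it runs inside the loaded Rails process, so models and lib classes are available; the same class mounted in a bare Rack server would not have them.

      # lib/request_counter.rb
      class RequestCounter
        def initialize(app)
          @app = app
        end

        def call(env)
          # Runs in the Rails process, so ActiveRecord models are loaded.
          env['app.user_count'] = User.count
          @app.call(env)
        end
      end

      # config/application.rb (or an environment file):
      # config.middleware.use RequestCounter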

    Read the article

  • packing fields of a class into a byte array in c#

    - by alex
    Hi all: I am trying to find a fast way to convert C# classes into byte arrays. I thought of serializing the class directly to a byte array using an example I found:

      // Convert an object to a byte array
      private byte[] ObjectToByteArray(Object obj)
      {
          if (obj == null)
              return null;
          BinaryFormatter bf = new BinaryFormatter();
          MemoryStream ms = new MemoryStream();
          bf.Serialize(ms, obj);
          return ms.ToArray();
      }

    But the byte array I got contains other information that is not related to the fields in the class; I guess it is also converting the properties of the class. Is there a way to serialize only the fields of the class to a byte array? Thanks
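
    The extra bytes are mostly BinaryFormatter's type metadata. One hedged alternative (the Packet type is a made-up example): for a blittable type marked with [StructLayout(LayoutKind.Sequential)], Marshal can copy just the raw field data with no metadata at all.

      using System;
      using System.Runtime.InteropServices;

      [StructLayout(LayoutKind.Sequential)]
      struct Packet
      {
          public int Id;
          public double Value;
      }

      static class FieldPacker
      {
          public static byte[] ToBytes<T>(T obj) where T : struct
          {
              int size = Marshal.SizeOf(typeof(T));
              byte[] bytes = new byte[size];
              IntPtr ptr = Marshal.AllocHGlobal(size);
              try
              {
                  Marshal.StructureToPtr(obj, ptr, false);   // copy fields to unmanaged memory
                  Marshal.Copy(ptr, bytes, 0, size);         // then into the byte array
              }
              finally
              {
                  Marshal.FreeHGlobal(ptr);
              }
              return bytes;
          }
      }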

    Read the article

  • jQuery: change this function into plugin code

    - by jaap Klevering
    Help would be greatly appreciated. What is the correct way to turn this function into a plugin? I tried, but I can't make it work.

      $(document).ready(function(){
          $('ul.tabNav a').click(function() {
              var curChildIndex = $(this).parent().prevAll().length + 1;
              $(this).parent().parent().children('.current').removeClass('current');
              $(this).parent().addClass('current');
              $(this).parent().parent().next('.tabContainer').children('.current').slideUp('fast', function() {
                  $(this).removeClass('current');
                  $(this).parent().children('div:nth-child(' + curChildIndex + ')').slideDown('normal', function() {
                      $(this).addClass('current');
                  });
              });
              return true;
          });
      });
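
    A hedged sketch of the standard $.fn plugin pattern ($.fn.tabify is a made-up name), keeping the original handler logic and just moving it inside the plugin body:

      (function ($) {
          $.fn.tabify = function () {
              return this.each(function () {               // keep the plugin chainable
                  $(this).find('a').click(function () {
                      var curChildIndex = $(this).parent().prevAll().length + 1;
                      var $nav = $(this).parent().parent();
                      $nav.children('.current').removeClass('current');
                      $(this).parent().addClass('current');
                      $nav.next('.tabContainer').children('.current').slideUp('fast', function () {
                          $(this).removeClass('current');
                          $(this).parent().children('div:nth-child(' + curChildIndex + ')')
                                 .slideDown('normal', function () {
                              $(this).addClass('current');
                          });
                      });
                      return true;
                  });
              });
          };
      })(jQuery);

      // usage
      $(document).ready(function () {
          $('ul.tabNav').tabify();
      });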

    Read the article

  • Why is doing a top(1) on an indexed column in mssql slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) there is an index on one column. I have 700k rows where the campaignid is indeed 3835, and for all of these rows the connectionid is the same; I just want to find out that connectionid.

      use messaging_db;
      SELECT TOP (1) connectionid
      FROM outgoing_messages WITH (NOLOCK)
      WHERE (campaignid_int = 3835)

    This query takes approximately 30 seconds to run! With my limited DB knowledge I would expect it to take any one of the rows and return its connectionid. If I test the same query for a campaign that has only one entry, it runs really fast, so the index works. How would I tackle this, and why does it not work?
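
    One likely cause is that connectionid is not part of the index on campaignid_int, so the optimizer weighs 700k key lookups against a scan. A hedged sketch (the index name is assumed) of a covering index that lets the TOP (1) come straight out of the index:

      CREATE NONCLUSTERED INDEX IX_outgoing_messages_campaign
          ON outgoing_messages (campaignid_int)
          INCLUDE (connectionid);   -- covers the query, no key lookups needed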

    Read the article

  • Operations on bytes in C#

    - by Hooch
    Hello. I'm writing an application to control LEDs over LPT. I have everything working except this one small function. I want to build a function that takes two arguments and returns one number. In the actual code these binary numbers will be in hex; I've written them in binary here so that it's easier to visualize.

      Example 1:  arg1 = 1100 1100   arg2 = 1001 0001   retu = 0100 1100
      Example 2:  arg1 = 1111 1111   arg2 = 0001 0010   retu = 1110 1101
      Example 3:  arg1 = 1111 0000   arg2 = 0010 0010   retu = 1101 0000

    I have no idea what this function should look like. I want it to be as fast as possible; I'll be calling it 200 times per second.
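
    All three examples match "clear in arg1 every bit that is set in arg2", i.e. arg1 AND NOT arg2. A hedged one-liner (the method name is made up):

      static byte ClearBits(byte arg1, byte arg2)
      {
          // & and ~ promote to int in C#, so cast the result back to byte
          return (byte)(arg1 & ~arg2);
      }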

    Read the article

  • Multiple indexes for a Java Collection - most basic solution?

    - by chris_l
    Hi, I'm looking for the most basic solution for creating multiple indexes on a Java Collection. Required functionality: when a value is removed, all index entries associated with that value must be removed; index lookup must be faster than linear search (at least as fast as a TreeMap). Side conditions: it should ideally work with Java SE (6.0) alone, with no extra libraries if possible; if libraries are necessary, then only small (not something like Lucene), common and well-tested ones; and no database! Of course, I could write a class that manages multiple Maps myself, but I'd like to know whether it can be done without that, while still getting usage as simple as a single indexed java.util.Map. Thanks, Chris
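
    For comparison, a hedged sketch of the "manage multiple maps myself" baseline, built only on java.util and kept Java 6 compatible; the index names are made up for illustration.

      import java.util.HashMap;
      import java.util.Map;
      import java.util.TreeMap;

      class IndexedStore<V> {
          private final Map<String, V> byName = new HashMap<String, V>();
          private final TreeMap<Integer, V> byId = new TreeMap<Integer, V>();
          private final Map<V, String> nameOf = new HashMap<V, String>();
          private final Map<V, Integer> idOf = new HashMap<V, Integer>();

          void put(String name, int id, V value) {
              byName.put(name, value);
              byId.put(id, value);
              nameOf.put(value, name);
              idOf.put(value, id);
          }

          V getByName(String name) { return byName.get(name); }
          V getById(int id)        { return byId.get(id); }

          // Removing a value cleans up every index entry that points at it.
          void remove(V value) {
              String name = nameOf.remove(value);
              Integer id  = idOf.remove(value);
              if (name != null) byName.remove(name);
              if (id != null)   byId.remove(id);
          }
      }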

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table that includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway). I'm implementing SQL Server full-text search, and I'm wondering whether I should limit the search to just the primary fields (the primary address and names, for example), or whether I can extend the search across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with full-text indexing, especially its dictionary internals, so I don't know if the index is going to grow over time or if there is anything else to consider. Thoughts?
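
    A hedged sketch of the trade-off in code (table, column, and key-index names are assumed): one full-text index can cover every searchable column, and individual queries can still be scoped to a few columns when that is enough.

      -- one full-text index over all the searchable columns
      CREATE FULLTEXT INDEX ON dbo.Customers
          (Name, Aliases, PrimaryAddress, BillingAddress, ShippingAddress)
          KEY INDEX PK_Customers;

      -- search everything ...
      SELECT * FROM dbo.Customers WHERE CONTAINS(*, 'smith');

      -- ... or only the primary fields when that is enough
      SELECT * FROM dbo.Customers WHERE CONTAINS((Name, PrimaryAddress), 'smith');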

    Read the article

  • Teaching a kid to type in order to start programming

    - by at
    My 5-year-old wants to start programming, but he doesn't yet know how to type without hunting for each letter one at a time. I know it's going to frustrate him to go so slowly because of his typing speed, and it's not going to be fun looking for all the letters constantly. So what's the best way to get him to type fast? Clearly he'll need to type a lot of punctuation like semicolons, colons and symbols, so his little hands will have to get used to spreading over the keyboard. I did some Google searches and found some very poor-looking apps that focused only on the letters.

    Read the article

  • Need help with a nested loop of queries in PHP and MySQL

    - by mysqllearner
    Hi, I am trying to do this:

      <?php
      $good_customer = 0;

      $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

      while ($r = mysql_fetch_assoc($q)) {
          $money_spent = 0;
          $user = $r['user'];

          // Do queries on another 20 tables
          for ($i = 1; $i <= 20; $i++) {
              $tbl_name = 'data' . $i;
              $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
              while ($r2 = mysql_fetch_assoc($q2)) {
                  $money_spent += $r2['money_spent'];
              }
              if ($money_spent > 1000000) {
                  $good_customer += 1;
              }
          }
      }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 users it takes forever, never mind 40k. Is there any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.
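
    A hedged alternative (table and column names follow the question; adjust to your schema): push the summing into MySQL so PHP issues one query instead of 40k x 20, then count the qualifying users from the result.

      -- total spend per activated user, across all 20 data tables
      SELECT u.user, SUM(d.money_spent) AS total_spent
      FROM users u
      JOIN (
          SELECT user, money_spent FROM data1
          UNION ALL
          SELECT user, money_spent FROM data2
          -- ... repeat for data3 .. data20
      ) d ON d.user = u.user
      WHERE u.activated = '1'
      GROUP BY u.user
      HAVING total_spent > 1000000;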

    Read the article

  • Anyone Know a Great Sparse One Dimensional Array Library in Python?

    - by TheJacobTaylor
    I am working on an algorithm in Python that uses arrays heavily. The arrays are typically sparse and are read from and written to constantly. I am currently using relatively large native arrays; the performance is good, but the memory usage is high (as expected). I would like an array implementation that does not waste space on values that are not used and that allows an index offset other than zero. For example, if my indices start at 1,000,000 I would like to index my array starting at 1,000,000 without being required to waste memory on a million unused values. Array reads and writes need to be fast. Expanding into new territory can incur a small delay, but reads and writes should be O(1) if possible. Does anybody know of a library that can do this? Thanks!
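
    If no library fits, a hedged baseline worth measuring against: a dict-backed sparse array (class name made up). Reads and writes are average O(1), missing slots cost no memory, and any base index works for free.

      class SparseArray:
          def __init__(self, default=0):
              self._data = {}
              self._default = default

          def __getitem__(self, index):
              return self._data.get(index, self._default)

          def __setitem__(self, index, value):
              if value == self._default:
                  self._data.pop(index, None)   # don't store default values
              else:
                  self._data[index] = value

      a = SparseArray()
      a[1000000] = 42           # no memory spent on indices 0 .. 999999
      print(a[1000000], a[5])   # -> 42 0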

    Read the article

  • gcc options for fastest code

    - by rwallace
    I'm distributing a C++ program with a makefile for the Unix version, and I'm wondering what compiler options I should use to get the fastest possible code (it falls into the category of programs that can use all the computing power they can get and still come back for more), given that I don't know in advance what hardware, operating system or gcc version the user will have, and that above all else I want to make sure it at least works correctly on every major Unix-like operating system. So far I have g++ -O3 -Wno-write-strings; are there any other options I should add? On Windows, the Microsoft compiler has options for things like a fast calling convention and link-time code generation that are worth using; are there any equivalents in gcc? (I'm assuming it will default to 64-bit on a 64-bit platform; please correct me if that's not the case.)
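
    A hedged makefile sketch of flags commonly benchmarked for this: -flto is gcc's analogue of MSVC's link-time code generation, and -march=native tunes for whatever machine runs the build. Availability depends on the user's gcc version, so a fallback or configure-time check is advisable rather than hard-coding these.

      CXX      ?= g++
      CXXFLAGS  = -O3 -Wno-write-strings
      CXXFLAGS += -march=native     # tune for the build machine (newer gcc)
      CXXFLAGS += -funroll-loops
      CXXFLAGS += -flto             # link-time optimization (gcc 4.5+)
      LDFLAGS  += -flto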

    Read the article

  • Retrieving data from database. Retrieve only when needed or get everything?

    - by RHaguiuda
    I have a simple application for storing contacts. It uses a simple relational database to store contact information such as name, address and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all database records and store them in objects in my program so that I get very fast performance, or should I always fetch data only when required? Of course, retrieving all the data is only feasible when there isn't too much of it, but do you use this approach when you are sure the database will stay small (fewer than 300 records, for example)? I once designed a similar application that fetched data only when needed, but it was slow (using an Access database). Thanks for any help.

    Read the article

  • What good approaches do C++ programmers use for storing error messages?

    - by Narek
    Say I have a huge codebase with different kinds of error messages. I want a separate place where I store the error codes and error messages. For example, for an error that occurred because the program could not open a file, I store:

      F001  "Can not open a file."
            "The same error message in another language"
            "The same error message in a third language"

    What is the best way for a C++ programmer to store different kinds of error messages and codes in a file, so that they can be used in a program quickly and easily? FYI, I am working with the Qt library.
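
    One hedged arrangement (names are made up): keep a single central table mapping error codes to an ID and the source-language text, and let Qt's translation machinery (tr() plus .ts files) supply the other languages rather than storing them by hand.

      #include <map>
      #include <string>

      enum class ErrorCode { FileOpenFailed = 1 /*, ... */ };

      struct ErrorInfo {
          const char* id;        // e.g. "F001"
          const char* message;   // source-language text; translations would live in Qt .ts files
      };

      const std::map<ErrorCode, ErrorInfo>& errorTable()
      {
          static const std::map<ErrorCode, ErrorInfo> table = {
              { ErrorCode::FileOpenFailed, { "F001", "Can not open a file." } },
              // add further codes here, in one central place
          };
          return table;
      }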

    Read the article

  • How to optimize an Oracle query that uses TO_CHAR on a date in the WHERE clause

    - by panorama12
    I have a table that contains about 49,403,459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table as timestamps in the format 10-APR-10 10.15.06.000000 AM. As a result, when I do:

      SELECT bunch, of, stuff, create_date
      FROM myTable
      WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
      AND   TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! I guess that is because TO_CHAR runs on EACH record. However, when I do

      SELECT bunch, of, stuff, create_date
      FROM myTable
      WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
      AND   create_date <= TO_DATE('04/10/2010', 'MM/DD/YYYY')

    I get 0 results in 0.14 seconds. How can I make this query fast and still get the valid (529) results? At this point I cannot change the indexes. Right now I think the index is created on the create_date column.
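
    The TO_DATE version returns 0 rows because both bounds evaluate to midnight of 04/10/2010, so only that single instant qualifies. A hedged sketch that keeps the raw column (so an index on create_date stays usable) while covering the whole day with an exclusive upper bound:

      SELECT bunch, of, stuff, create_date
      FROM   myTable
      WHERE  create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
      AND    create_date <  TO_DATE('04/10/2010', 'MM/DD/YYYY') + 1;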

    Read the article

  • Is it possible to upload directly to a remote server using SFTP in ASP.NET MVC?

    - by DucDigital
    Hi! I am currently developing something with ASP.NET MVC. I'm still not very experienced with it, so please help me out. I have a form for users to upload video. The current concept for getting it onto a remote server is to upload it to the current server, then use FTP to push it to the remote server. To me this is not very fast, since you upload to the current server (time x1) and then the current server pushes it to the new server (time x2), so it doubles the time. My idea is to have the user upload to the current server and, WHILE the user is uploading, have the current server add the file to the DB and also send the file to the remote server at the same time using SFTP. Is this possible, and are there any security holes in this concept? Thank you very much.

    Read the article

  • Trying to find a good javascript function for hmac-sha1

    - by Darxval
    So I have been searching the web for a JavaScript source for an HMAC-SHA1 algorithm. I saw Crypto-JS but I can't seem to get it to work, mainly because my page has no idea what Crypto means (I copied the .js functions into my script file). http://code.google.com/p/crypto-js/ I already have my Base64 encoding function, which I got from here: http://nerds-central.blogspot.com/2007/01/fast-scalable-javascript-and-vbscript.html By the way, this is for a Twitter application using the new OAuth system. Any help or links to where I can find anything on this would be helpful. If you need me to elaborate, let me know. Thank you!
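
    A hedged sketch, assuming the CryptoJS 3.x rollup builds (hmac-sha1.js plus enc-base64.js) are actually loaded on the page; the older 2.x builds from that project expose a different Crypto.HMAC API, which may be why the Crypto symbol is undefined here.

      // OAuth 1.0a wants the base64-encoded HMAC-SHA1 of the signature base string
      function buildOAuthSignature(baseString, signingKey) {
          var hash = CryptoJS.HmacSHA1(baseString, signingKey);
          return hash.toString(CryptoJS.enc.Base64);
      }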

    Read the article

  • What are some good ways to do intermachine locking?

    - by mike
    Our server cluster consists of 20 machines, each running 10 processes with 5 threads apiece. We'd like some way to prevent any two threads, in any process, on any machine, from modifying the same object at the same time. Our code is written in Python and runs on Linux, if that helps narrow things down. Also, it's a pretty rare case that two such threads want to do this, so we'd prefer something that optimizes the "only one thread needs this object" case to be really fast, even if it means the "one thread has locked this object and another one needs it" case isn't great. What are some of the best practices?
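
    One common pattern, offered only as a sketch and assuming a shared memcached instance and the python-memcached client (neither of which is mentioned in the question): add() is atomic and succeeds only if the key does not exist, so the uncontended case is a single fast round trip. It is not a bulletproof lock (no fencing tokens, expiry races), so treat it as a starting point.

      import time
      import memcache

      mc = memcache.Client(["lockhost:11211"])   # hypothetical shared lock host

      def with_object_lock(obj_id, work, timeout=10):
          key = "lock:%s" % obj_id
          deadline = time.time() + timeout
          # add() fails if the key already exists, i.e. someone else holds the lock
          while not mc.add(key, "1", time=timeout):
              if time.time() > deadline:
                  raise RuntimeError("could not lock %s" % obj_id)
              time.sleep(0.01)
          try:
              return work()
          finally:
              mc.delete(key)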

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query:

      SELECT *
      FROM SAMPLE
      INNER JOIN TEST   ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
      INNER JOIN RESULT ON TEST.TEST_NUMBER     = RESULT.TEST_NUMBER
      WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables have about 1M each. The query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300 ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by an index), then everything becomes fast in the first case too. This points to disk I/O issues, but it doesn't explain the change of plan. So, any ideas?
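
    The tipping point is usually row-count estimation: for the wider range the optimizer guesses that enough rows qualify to make scanning RESULT's clustered index cheaper than many lookups. Two hedged things to try (the index name is assumed, and SAMPLED_DATE is assumed to live on SAMPLE): make the date filter cheap to resolve from a narrow index, and avoid SELECT * so the access to RESULT can stay a seek.

      -- resolve the date filter from a narrow index first
      CREATE NONCLUSTERED INDEX IX_SAMPLE_SAMPLED_DATE
          ON SAMPLE (SAMPLED_DATE)
          INCLUDE (SAMPLE_NUMBER);

      -- select only the columns you actually need instead of *
      SELECT RESULT.TEST_NUMBER, SAMPLE.SAMPLED_DATE
      FROM SAMPLE
      INNER JOIN TEST   ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
      INNER JOIN RESULT ON TEST.TEST_NUMBER     = RESULT.TEST_NUMBER
      WHERE SAMPLE.SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00';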

    Read the article
