Search Results

Search found 6670 results on 267 pages for 'speed dial'.

  • How can I speed up the "finally get it" process?

    - by Earlz
    Hello, I am a hobby programmer and began when I was about 13. I'm currently in college (freshman) for my computer science degree (which means I'm still covering material I already know, such as for loops). I've also been programming professionally for a startup for about 9 months now. I have a serious problem, though: I think that almost all of the code I write is perfect. I remember reading an article somewhere describing three stages of learning programming: 1) you don't know anything and you know you don't know anything; 2) you don't know anything but you think you do; 3) you finally get it and accept that you don't know anything. (If someone finds that article, tell me and I'll give a link.) So right now, I'm at stage 2. How can I get to stage 3 more quickly? The more of other people's code I read, the more I think "this is complete rubbish, I would've done it like...", and I really dislike that I think that way (this only began happening fairly recently, over the past year).

    Read the article

  • I need to speed this code up by at least 2x!

    - by Dominating
        #include <iostream>
        #include <map>
        #include <string>
        #include <utility>
        using namespace std;

        inline void PrintMapName(multimap<string, string> pN, string s) {
            pair<multimap<string, string>::iterator, multimap<string, string>::iterator> ii;
            multimap<string, string>::iterator it;
            ii = pN.equal_range(s);
            multimap<string, int> tmp;
            for (it = ii.first; it != ii.second; ++it) {
                tmp.insert(pair<string, int>(it->second, 1));
            }
            multimap<string, int>::iterator i;
            bool flag = false;
            for (i = tmp.begin(); i != tmp.end(); i++) {
                if (flag) { cout << " "; }
                cout << i->first;
                flag = true;
            }
            cout << endl;
        }

        int main() {
            multimap<string, string> phoneNums;
            multimap<string, string> numPhones;
            int N;
            cin >> N;
            int tests;
            string tmp, tmp1, tmp2;
            while (N > 0) {
                cin >> tests;
                while (tests > 0) {
                    cin >> tmp;
                    if (tmp == "add") {
                        cin >> tmp1 >> tmp2;
                        phoneNums.insert(pair<string, string>(tmp1, tmp2));
                        numPhones.insert(pair<string, string>(tmp2, tmp1));
                    } else if (tmp == "delnum") {
                        cin >> tmp1;
                        multimap<string, string>::iterator it;
                        for (it = phoneNums.begin(); it != phoneNums.end(); ) {
                            if (it->second == tmp1) { phoneNums.erase(it++); }
                            else { ++it; }
                        }
                        numPhones.erase(tmp1);
                    } else if (tmp == "delname") {
                        cin >> tmp1;
                        phoneNums.erase(tmp1);
                        multimap<string, string>::iterator it;
                        for (it = numPhones.begin(); it != numPhones.end(); ) {
                            if (it->second == tmp1) { numPhones.erase(it++); }
                            else { ++it; }
                        }
                    } else if (tmp == "queryname") {
                        cin >> tmp1;
                        PrintMapName(phoneNums, tmp1);
                    } else { // querynum
                        cin >> tmp1;
                        PrintMapName(numPhones, tmp1);
                    }
                    tests--;
                }
                N--;
            }
            return 0;
        }
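    One change that usually helps with code like this, offered only as a sketch rather than a verified 2x fix: PrintMapName above takes the multimap (and the query string) by value, so every query copies the entire phone book. The same function taking const references avoids that copy:

        // Sketch: same output as PrintMapName above, but nothing is copied per query.
        inline void PrintMapName(const multimap<string, string> &pN, const string &s) {
            typedef multimap<string, string>::const_iterator cit;
            pair<cit, cit> range = pN.equal_range(s);
            multimap<string, int> tmp;   // kept only to sort the matched values for output
            for (cit it = range.first; it != range.second; ++it) {
                tmp.insert(make_pair(it->second, 1));
            }
            bool flag = false;
            for (multimap<string, int>::iterator i = tmp.begin(); i != tmp.end(); ++i) {
                if (flag) { cout << " "; }
                cout << i->first;
                flag = true;
            }
            cout << endl;
        }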

    Read the article

  • How do I get the CPU speed and total physical RAM in C#?

    - by edude05
    I need a simple way of checking how much RAM the host PC has and how fast its CPU is. I tried WMI, but the code I'm using,

        private long getCPU() {
            ManagementClass mObject = new ManagementClass("Win32_Processor");
            mObject.Get();
            return (long)mObject.Properties["MaxClockSpeed"].Value;
        }

    throws a null reference exception. Furthermore, WMI queries are a bit slow and I need to make several of them to get all the specs. Is there a better way?

    Read the article

  • Why not speed up testing by using a function dependency graph?

    - by Maltrap
    It seems logical to me that if you have a dependency graph of your source code (a graph showing the call relationships between all functions in your code base), you should be able to save a tremendous amount of time doing functional and integration tests after each release. Essentially, you would be able to tell the testers exactly what functionality to test, since the rest of the features remain unchanged from a source code point of view. If, for instance, you fix a spelling mistake in one piece of the code, there is no reason to run through your whole test script again "just in case" you introduced a critical bug. My question: why are dependency graphs not used in software engineering, and if you do use them, how do you maintain them? What tools are available that generate these graphs for C# .NET, C++ and C source code?

    Read the article

  • What is a good CPU/PC setup to speed up intensive C++/templates compilation?

    - by ApplePieIsGood
    I currently have a machine with an Opteron 275 (2.2 GHz), which is a dual-core CPU, and 4 GB of RAM, along with a very fast hard drive. I find that when compiling even somewhat simple projects that use C++ templates (think Boost, etc.), my compile times can be quite long (minutes for small things, much longer for bigger projects). Unfortunately only one of the cores is pegged at 100%, so I know the bottleneck isn't I/O. Is there really no way to take advantage of the other core for C++ compilation?

    Read the article

  • [C++] Is it possible to use threads to speed up file reading?

    - by Mister Mystère
    Hi there, I want to read a file as fast as possible (40k lines) [Edit: the rest is obsolete]. Edit: Andres Jaan Tack suggested a solution based on one thread per file, and I want to be sure I understood it (and that this is therefore the fastest way): one thread per input file reads the whole file and stores its content in an associated container (so, as many containers as there are input files); one thread calculates the linear combination of every cell read by the input threads and stores the results in the output container (associated with the output file); and one thread writes the content of the output container in blocks (every 4 kB of data, so about 10 lines). Should I deduce that I must not use memory-mapped files (because the program would be on standby waiting for the data)? Thanks in advance. Sincerely, Mister Mystère.
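    As an illustration of the reader-thread stage described above, here is a minimal sketch assuming C++11 std::thread; the file names and the choice of a vector of lines per file are hypothetical, not taken from the original discussion, and whether this actually beats a single sequential read depends heavily on the storage device:

        #include <fstream>
        #include <string>
        #include <thread>
        #include <vector>

        // One reader thread per input file; each thread writes only to its own
        // container, so no locking is needed for this stage.
        static void readFile(const std::string &path, std::vector<std::string> &out) {
            std::ifstream in(path.c_str());
            std::string line;
            while (std::getline(in, line))
                out.push_back(line);
        }

        int main() {
            std::vector<std::string> paths;          // hypothetical input files
            paths.push_back("input_a.txt");
            paths.push_back("input_b.txt");

            std::vector<std::vector<std::string> > contents(paths.size());
            std::vector<std::thread> readers;
            for (std::size_t i = 0; i < paths.size(); ++i)
                readers.push_back(std::thread(readFile, paths[i], std::ref(contents[i])));
            for (std::size_t i = 0; i < readers.size(); ++i)
                readers[i].join();

            // ...next stages: compute the linear combination of the cells in
            // contents[], then write the output container to disk in 4 kB blocks.
            return 0;
        }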

    Read the article

  • Speed up a web service for autocomplete and avoid too many method calls.

    - by jphenow
    So I've got my jQuery autocomplete "working," but it's a little fidgety: I call the web service method each time keydown() fires, so I get lots of calls in flight, and sometimes to get the autocomplete to appear I have to type the word out and backspace a bit, presumably because the return value arrives a little late. I've limited the query results to 8 to minimize time. Is there anything I can do to make this a little snappier? This thing seems near useless if I don't get it a little more responsive.

    JavaScript:

        $("#clientAutoNames").keydown(function () {
            $.ajax({
                type: "POST",
                url: "WebService.asmx/LoadData",
                data: "{'input':" + JSON.stringify($("#clientAutoNames").val()) + "}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (data) {
                    if (data.d != null) {
                        var serviceScript = data.d;
                    }
                    $("#autoNames").html(serviceScript);
                    $('#clientAutoNames').autocomplete({
                        minLength: 2,
                        source: autoNames,
                        delay: 100,
                        focus: function (event, ui) {
                            $('#project').val(ui.item.label);
                            return false;
                        },
                        select: function (event, ui) {
                            $('#clientAutoNames').val(ui.item.label);
                            $('#projectid').val(ui.item.value);
                            $('#project-description').html(ui.item.desc);
                            pkey = $('#project-id').val;
                            return false;
                        }
                    })
                    .data("autocomplete")._renderItem = function (ul, item) {
                        return $("<li></li>")
                            .data("item.autocomplete", item)
                            .append("<a>" + item.label + "<br>" + item.desc + "</a>")
                            .appendTo(ul);
                    };
                }
            });
        });

    WebService.asmx:

        <WebMethod()> _
        Public Function LoadData(ByVal input As String) As String
            Dim result As String = "<script>var autoNames = ["
            Dim sqlOut As Data.SqlClient.SqlDataReader
            Dim connstring As String = *Datasource*
            Dim strSql As String = "SELECT TOP 2 * FROM v_Clients WHERE (SearchName Like '" + input + "%') ORDER BY SearchName"
            Dim cnn As Data.SqlClient.SqlConnection = New Data.SqlClient.SqlConnection(connstring)
            Dim cmd As Data.SqlClient.SqlCommand = New Data.SqlClient.SqlCommand(strSql, cnn)
            cnn.Open()
            sqlOut = cmd.ExecuteReader()
            Dim c As Integer = 0
            While sqlOut.Read()
                result = result + "{"
                result = result + "value: '" + sqlOut("ContactID").ToString() + "',"
                result = result + "label: '" + sqlOut("SearchName").ToString() + "',"
                'result = result + "desc: '" + title + " from " + company + "',"
                result = result + "},"
            End While
            result = result + "];</script>"
            sqlOut.Close()
            cnn.Close()
            Return result
        End Function

    I'm sure I'm just going about this slightly wrong or not balancing the calls well. Any help is greatly appreciated!

    Read the article

  • What Visual C++ references are worth a look for a Java programmer looking to get up to speed?

    - by Terry V.
    I have a lot of experience with Java/OO. There are tons of C++ tutorials/references out there, but I was wondering if there are a few key ones that a Java programmer might find useful when making the transition. I will be moving from server-side J2EE to Windows Visual C++ desktop programming. I have googled and found tons of resources, but am overwhelmed and don't know where to best spend my time. I have only a few days to get a good start. Is Visual Studio Express / Microsoft Visual C++ the best IDE for me to start with? Also, any words of wisdom from others who know and work with both languages?

    Read the article

  • Are there any tools to speed up Cocoa development?

    - by user262325
    I noticed that there is a lot of repeated work when writing Cocoa source code. For example, if I declare an instance variable for an object: NSMutableArray *infoArray; I need to add: @property (retain, nonatomic) NSMutableArray *infoArray; and @synthesize infoArray; and in - (void)dealloc I also need to add: [infoArray release]; Is there any tool that can automate this, perhaps by automatically copying or pasting the source code and adding the repeated code in the right place?

    Read the article

  • jQuery optimization: Is there any way to speed up the rendering of the FlexSelect control?

    - by Sephrial
    Greetings, I am new to jQuery, and I have a performance problem with the FlexSelect control: it takes about 5 seconds to render the dropdown control (in the renderDropdown() function). The dropdown list contains about 5,000 elements. I believe all the runtime is attributable to the following block of code: var list = this.dropdownList.html(""); $.each(this.results, function() { list.append($("<li/>").html(this.name)); }); Question: Are there any alternatives that would build this list of elements in a more efficient manner?

    Read the article

  • speed of map() vs. list comprehension vs. numpy vectorized function in python

    - by mcstrother
    I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing 'a': a = [foo(i) for i in xrange(100)] , a = map(foo, range(100)) , and vfoo = numpy.vectorize(foo) vfoo(range(100)) ? (I don't care whether the output is a list or a numpy array.) Is there some other, better way of doing this? Thanks.

    Read the article

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring:

        //-----------------------------------------------------------------------------
        // Construct dictionary hash set from dictionary file
        //-----------------------------------------------------------------------------
        void constructDictionary(unordered_set<string> &dict) {
            ifstream wordListFile;
            wordListFile.open("dictionary.txt");
            string word;
            while( wordListFile >> word ) {
                if( !word.empty() ) {
                    dict.insert(word);
                }
            }
            wordListFile.close();
        }

    I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.
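    For comparison, a minimal sketch of one common alternative (not necessarily faster on every platform): read the whole file in one go and split it in memory, so the stream extraction operator is not invoked once per word. The reserve() figure is just the ~200,000 word count from the question:

        #include <fstream>
        #include <iterator>
        #include <string>
        #include <unordered_set>

        void constructDictionaryBulk(std::unordered_set<std::string> &dict) {
            std::ifstream file("dictionary.txt", std::ios::binary);
            // Slurp the entire file into one string.
            std::string contents((std::istreambuf_iterator<char>(file)),
                                 std::istreambuf_iterator<char>());
            dict.reserve(200000);   // ~200k words, per the question
            std::string::size_type start = 0;
            while (start < contents.size()) {
                std::string::size_type end = contents.find('\n', start);
                if (end == std::string::npos)
                    end = contents.size();
                if (end > start)    // skip empty lines
                    dict.insert(contents.substr(start, end - start));
                start = end + 1;
            }
        }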

    Read the article

  • How can I store a large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1, Turning ResultSets into XML), and the XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads data from the database and writes it to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the data, namely to the dates and the numbers.

        // Iterate over the set
        while (rs.next()) {
            w.startElement("row");
            for (int i = 0; i < count; i++) {
                Object ob = rs.getObject(i + 1);
                if (rs.wasNull()) {
                    ob = null;
                }
                String colName = meta.getColumnLabel(i + 1);
                if (ob != null ) {
                    if (ob instanceof Timestamp) {
                        w.dataElement(colName, Util.formatDate((Timestamp)ob, dateFormat));
                    } else if (ob instanceof BigDecimal){
                        w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal)ob).intValue())));
                    } else {
                        w.dataElement(colName, ob.toString());
                    }
                } else {
                    w.emptyElement(colName);
                }
            }
            w.endElement("row");
        }

    The SQL that gets the results uses the to_number function (e.g. to_number(sif.ID) ID) and the to_date function (e.g. TO_DATE(sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returned date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000, so I have to format it to the dd.mm.yyyy format. The second problem is the numbers; for some reason they are stored in the database as varchar2 and can have leading zeroes that need to be stripped. I'm guessing I could do that in my SQL with the trim function, so Util.transformToHTML would be unnecessary (for clarification, here's the method):

        public static String transformToHTML(Integer number) {
            String result = "";
            try {
                result = number.toString();
            } catch (Exception e) {}
            return result;
        }

    What I'd like to know is: a) can I get the date in the format I want and skip the additional processing, thus shortening the processing time, and b) is there a better way to do this? We're talking about XML files in the 50 MB - 250 MB size range.

    Read the article

  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app will be aware of other users of the app through this web service, and users can browse other users' lists of books in their libraries. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result or when viewing another user's book library, the individual list row cells need to visually indicate whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service). My question is: to determine whether the book shown in the list row is in the user's library, would it be better to ask SQLite directly (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array of some sort containing the unique bookIds? So, on each row display, do I query SQLite or check whether the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books and each bookId occupies 4 bytes, at most this would use ~20 kB. In thinking about this, and in typing this out, it seems obvious to me that it would be far better for performance to maintain a list or int[] array of in-library bookIds than to query SQLite (the only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives). Opinions, thoughts, suggestions?

    Read the article

  • Does allocation speed depend on the garbage collector being used?

    - by jkff
    My app is allocating a ton of objects (1 million per second; most objects are byte arrays of size ~80-100 and strings of the same size) and I think this might be the source of its poor performance. The app's working set is only tens of megabytes. Profiling the app shows that GC time is negligibly small. However, I suspect that the allocation procedure may depend on which GC is being used, and that some settings might make allocation faster or perhaps have a positive influence on cache hit rate, etc. Is that so? Or is allocation performance independent of GC settings, under the assumption that garbage collection itself takes little time?

    Read the article

  • Is learning ed worth it to boost my speed in VIM?

    - by Ksiresh
    I've learned the basic/intermediate levels of VIM (it's too vast to list). I often find that I slip back into my old ways and start using the mouse, holding down keys to get somewhere, and doing other stupid things that could be sped up. Would it be worth spending time learning ed to break the habits learned from years in Windoze? Does using ed cultivate the right type of thinking that will transfer to VIM?

    Read the article

  • How to speed up saving a UIImagePickerController image from the camera to the filesystem via UIImagePNGRepresentation()?

    - by kazuhito0000
    I'm making an application that lets users take a photo and shows it both as a thumbnail and in a photo viewer. I have an NSManagedObject class called Photo, and Photo has a method that takes a UIImage, converts it to PNG using UIImagePNGRepresentation(), and saves it to the filesystem. After this operation, it resizes the image to thumbnail size and saves that as well. The problem is that UIImagePNGRepresentation() and the image resizing seem to be really slow, and I don't know if this is the right way to do it. If anyone knows a better way to accomplish this, please tell me. Thank you in advance.

    Read the article

  • How do I test the speed of a MySQL query?

    - by Chris
    I have a select query like the one below... $sql = "SELECT * FROM notifications WHERE to_id='".$userid."' AND (alert_read != '1' OR user_read != '1') ORDER BY alert_time DESC"; $result = mysql_query($sql); How do I test how long the query took to run?

    Read the article

  • How do I copy bytes from a buffer into a managed struct?

    - by Chupo_cro
    I have a problem getting the code to work in a managed environment (VS2008 C++/CLI Win Forms app). The problem is that I cannot declare the unmanaged struct inside the managed code (is that even possible?), so I've declared a managed struct, but now I have a problem: how do I copy bytes from the buffer into that struct? Here is the pure C++ code that obviously works as expected:

        typedef struct GPS_point {
            float point_unknown_1;
            float latitude;
            float longitude;
            float altitude;             // x10000
            float time;
            int point_unknown_2;
            int speed;                  // x100
            int manually_logged_point;  // flag (1 --> point logged manually)
        } track_point;

        int offset = 0;
        int filesize = 256;   // simulates filesize
        int point_num = 10;   // simulates number of records

        int main () {
            char *buffer_dyn = new char[filesize];  // allocate RAM
            // here, the file would have been read into the buffer
            buffer_dyn[0xa8] = 0x1e;  // simulates the speed data (1e 00 00 00)
            buffer_dyn[0xa9] = 0x00;
            buffer_dyn[0xaa] = 0x00;
            buffer_dyn[0xab] = 0x00;
            offset = 0x90;  // if the data at this offset is transferred from buffer
                            // to struct, int speed is aligned with the buffer at the
                            // offset of 0xa8
            track_point *points = new track_point[point_num];
            points[0].speed = 0xff;  // (debug) it should change into 0x1e
            memcpy(&points[0], buffer_dyn + offset, 32);
            cout << "offset: " << offset << "\r\n";
            //cout << "speed: " << points[0].speed << "\r\n";
            printf("speed : 0x%x\r\n", points[0].speed);
            printf("byte at offset 0xa8: 0x%x\r\n", (unsigned char)buffer_dyn[0xa8]);  // should be 0x1e
            delete[] buffer_dyn;  // release RAM
            delete[] points;

            /* What I need is to rewrite the lines 29 and 31 to work in the managed code
               (VS2008 Win Forms C++/CLI). What should I have after:

               array<track_point^>^ points = gcnew array<track_point^>(point_num);

               so I can copy 32 bytes from buffer_dyn to the managed struct declared as

               typedef ref struct GPS_point {
                   float point_unknown_1;
                   float latitude;
                   float longitude;
                   float altitude;             // x10000
                   float time;
                   int point_unknown_2;
                   int speed;                  // x100
                   int manually_logged_point;  // flag (1 --> point logged manually)
               } track_point;
            */
            return 0;
        }

    Here is the paste on codepad.org showing that the code is OK. What I need is to rewrite these two lines:

        track_point *points = new track_point[point_num];
        memcpy(&points[0], buffer_dyn + offset, 32);

    to something that will work in a managed application. I wrote:

        array<track_point^>^ points = gcnew array<track_point^>(point_num);

    and am now trying to reproduce the described copying of the data from the buffer into the struct, but I have no idea how it should be done. Alternatively, if there is a way to use an unmanaged struct in the same way as shown in my code, I would rather avoid working with a managed struct.
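    For what it's worth, a minimal sketch of one way this kind of copy is often done in C++/CLI; this is an assumption on my part rather than something taken from the question: declare track_point as a value struct with the same plain float/int fields, use an array of value types rather than handles, and pin the target element so memcpy can write into it.

        // Sketch only: assumes track_point is redeclared as a C++/CLI value struct
        // with exactly the same float/int fields (32 bytes) as the native version.
        value struct track_point {
            float point_unknown_1;
            float latitude;
            float longitude;
            float altitude;              // x10000
            float time;
            int   point_unknown_2;
            int   speed;                 // x100
            int   manually_logged_point; // flag (1 --> point logged manually)
        };

        // ...inside a managed function, with buffer_dyn and offset as in the question:
        array<track_point>^ points = gcnew array<track_point>(point_num);  // value-type array: no '^' on the element type
        pin_ptr<track_point> pinned = &points[0];  // pin the element so the GC cannot move it during the copy
        memcpy(pinned, buffer_dyn + offset, sizeof(track_point));  // copy 32 bytes straight into the managed struct

    The field layout of a value struct containing only primitive members should match the native struct here, but if in doubt, an explicit sequential StructLayout attribute can be applied to make that guarantee explicit.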

    Read the article
