Search Results

Search found 8250 results on 330 pages for 'dunn less'.

  • Is there a practical benefit to casting a NULL pointer to an object and calling one of its member functions?

    - by zdawg
    Ok, so I know that technically this is undefined behavior, but nonetheless, I've seen this more than once in production code. And please correct me if I'm wrong, but I've also heard that some people use this "feature" as a somewhat legitimate substitute for a lacking aspect of the current C++ standard, namely, the inability to obtain the address (well, offset really) of a member function. For example, this is out of a popular implementation of a PCRE (Perl-compatible Regular Expression) library:

        #ifndef offsetof
        #define offsetof(p_type,field) ((size_t)&(((p_type *)0)->field))
        #endif

    One can debate whether the exploitation of such a language subtlety in a case like this is valid or not, or even necessary, but I've also seen it used like this:

        struct Result {
            void stat() {
                if (this)
                    // do something...
                else
                    // do something else...
            }
        };

        // ...somewhere else in the code...
        ((Result*)0)->stat();

    This works just fine! It avoids a null pointer dereference by testing for the existence of this, and it does not try to access class members in the else block. So long as these guards are in place, it's legitimate code, right? So the question remains: is there a practical use case where one would benefit from using such a construct? I'm especially concerned about the second case, since the first case is more of a workaround for a language limitation. Or is it?

    PS. Sorry about the C-style casts; unfortunately, people still prefer to type less if they can.
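
    (Aside: where a language exposes field offsets directly, the null-pointer trick is unnecessary. A minimal sketch in Python's ctypes, with an invented struct purely for contrast -- the layout and names are not from the question:)

        import ctypes

        # a hypothetical record, just to illustrate that offsets are queryable
        class Packet(ctypes.Structure):
            _fields_ = [("kind", ctypes.c_int),
                        ("payload", ctypes.c_char * 32)]

        # ctypes computes the field offset itself -- no null pointer involved
        print(Packet.payload.offset)  # typically 4, depending on alignment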

  • capturing CMD batch file parameter list; write to file for later processing

    - by BobB
    I have written a batch file that is launched as a post-processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them into variables, and then writes them to various text files. Since the highest positional parameter in CMD is %9, it's necessary to use the 'shift' command to repeatedly read and store these individually into named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long.

    It occurs to me that I could free up the calling program much faster if maybe there's a way to write a very simple batch file that can write all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it, and done.

    Q: Is there some way to treat an entire series of parameter data as one big text string, write it to one big variable... and then echo the whole big thing to one text file? Then later read the string into %n variables when there's no program waiting to resume? The parameter list is something like 25-30 words, less than 200 characters. Sample parameter list:

        "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95

    Items in quotes are processed as string variables. The list is space-delimited. Any suggestions?
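
    (For what it's worth, CMD expands %* to the entire parameter list from %1 onward, so -- quoting edge cases aside -- the capture script can be a one-liner like `echo %* >> params.txt`. The later, unhurried pass could then be a few lines of Python; a sketch, with the file name invented, using shlex so the double-quoted fields survive as single items:)

        import shlex

        with open("params.txt") as f:
            for line in f:
                # shlex.split keeps "First Name" as one token and drops the quotes
                fields = shlex.split(line)
                # fields[0] == 'First Name', fields[4] == 'ST', fields[11] == '20.67', ...
                print(fields)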

  • SQL Queries SELECT IN and SELECT NOT IN

    - by Sequenzia
    Does anyone know why the results of the following two queries do not add up to the results of the third one?

        SELECT COUNT(leadID) FROM leads
        WHERE makeID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                             WHERE uploadID = 3 AND uploadRowID = 1)
          AND modelID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                              WHERE uploadID = 3 AND uploadRowID = 2)

        SELECT COUNT(leadID) FROM Leads
        WHERE makeID IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                         WHERE uploadID = 3 AND uploadRowID = 1)
           OR modelID IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                          WHERE uploadID = 3 AND uploadRowID = 2)

        SELECT COUNT(leadID) FROM Leads

    The first query is the count I need. The second one is to tell the user how many records were suppressed based on the contents of the DG_App.dbo.uploadData table. The third query is just a straight count of all the records. When I run these, the results of query 1 plus the results of query 2 come up about 46K records less than the count of the entire table. I have played with grouping the WHERE conditions with parentheses, but that did not change the counts at all. This is MS SQL Server 2012. Any input on this would be great. Thanks
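
    (By De Morgan's laws the two WHERE clauses are exact complements, so the usual cause of counts like these not adding up is NULL: NOT IN uses three-valued logic, and a single NULL -- either in the subquery's result or in the row's own makeID/modelID -- makes the predicate UNKNOWN, so the row drops out of both counts. A minimal sqlite3 sketch of the effect, with an invented table:)

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE t (x INTEGER)")
        con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

        # NOT IN against a list containing NULL matches no rows at all:
        n = con.execute(
            "SELECT COUNT(*) FROM t WHERE x NOT IN (1, NULL)").fetchone()[0]
        print(n)  # 0 -- even though 2 and 3 are plainly 'not in' the list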

  • How should I launch a Portable Python Tkinter application on Windows without ugliness?

    - by Andrew
    I've written a simple GUI program in Python using Tkinter. Let's call this program 'gui.py'. My users run 'gui.py' on Windows machines from a USB key using Portable Python; installing anything on the host machine is undesirable. I'd like my users to run 'gui.py' by double-clicking an icon at the root of the USB key. My users don't care what Python is, and they don't want to use a command prompt if they don't have to. I don't want them to have to care what drive letter the USB key is assigned. I'd like this to work on XP, Vista, and 7.

    My first ugly solution was to create a shortcut in the root directory of the USB key, and set the "Target" property of the shortcut to something like "(root)\App\pythonw.exe (root)\App\gui.py", but I couldn't figure out how to do a relative path in a Windows shortcut, and using an absolute path like "E:" seems fragile.

    My next solution was to create a .bat script in the root directory of the USB key, something like this:

        @echo off
        set basepath=%~dp0
        "%basepath%App\pythonw.exe" "%basepath%\App\gui.py"

    This doesn't seem to care what drive letter the USB key is assigned, but it does leave a DOS window open while my program runs. Functional, but ugly. Next I tried a .bat script like this:

        @echo off
        set basepath=%~dp0
        start "" "%basepath%App\pythonw.exe" "%basepath%\App\gui.py"

    (See here for an explanation of the funny quoting.) Now the DOS window briefly flashes on screen before my GUI opens. Less ugly! Still ugly. How do real men deal with this problem? What's the least ugly way to start a Python Tkinter GUI on a Windows machine from a USB stick?
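
    (One more option to weigh, sketched below under a real assumption: if the host machine associates .pyw with some pythonw.exe -- which stock Windows only does when a Python is installed -- then a launcher script at the key's root opens with no console at all, and can compute its paths relative to itself so the drive letter never matters:)

        # launch.pyw -- shows no console window *if* the host opens .pyw via pythonw.exe
        import os
        import subprocess

        base = os.path.dirname(os.path.abspath(__file__))   # wherever the key mounted
        pythonw = os.path.join(base, "App", "pythonw.exe")  # the copy on the stick
        gui = os.path.join(base, "App", "gui.py")
        subprocess.Popen([pythonw, gui], cwd=base)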

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000                        # number of dipoles
        r_i = np.random.random((3,N))    # positions of dipoles
        mom_i = np.random.random((3,N))  # moments of dipoles
        a = np.random.random((3,3))      # three basis vectors for this crystal
        n = [10,10,10]                   # points at which to evaluate sum
        gamma_mu = 135.5                 # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
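
    (One possible rewrite, sketched under two assumptions -- a NumPy new enough to have meshgrid's indexing='ij' flag, and that the full grid of points fits in memory: flatten the triple loop into an array of evaluation points, then broadcast calculate_dipole's arithmetic over a chunk of points at a time. Chunking bounds the memory cost -- a full 1000-point by 20000-dipole intermediate is on the order of half a GB -- and each chunk boundary is a natural place to print progress and flush results to the file, which addresses all three worries in the question:)

        import numpy as np

        def all_grid_points(a, n):
            # fractional coordinates along each of the three basis vectors
            fr = [np.arange(m, dtype=float) / m for m in n]
            fx, fy, fz = np.meshgrid(fr[0], fr[1], fr[2], indexing='ij')
            pts = fx[..., None]*a[0] + fy[..., None]*a[1] + fz[..., None]*a[2]
            return pts.reshape(-1, 3).T                   # shape (3, P)

        def calculate_dipole_chunked(points, r_i, mom_i, chunk=200):
            A = 1e-7
            B = np.empty((3, points.shape[1]))
            for s in range(0, points.shape[1], chunk):
                mu = points[:, s:s+chunk]                 # (3, p)
                rel = mu[:, :, None] - r_i[:, None, :]    # (3, p, N)
                dist = np.sqrt((rel * rel).sum(axis=0))   # (p, N)
                r_unit = rel / dist                       # broadcasts over axis 0
                proj = np.sum(mom_i[:, None, :] * r_unit, axis=0)
                num = A * (3 * proj * r_unit - mom_i[:, None, :])
                B[:, s:s+chunk] = np.sum(num / dist**3, axis=2)
                # chunk boundary: report progress, write this chunk's rows out
            return B

    Calling all_grid_points(a, n) and then calculate_dipole_chunked(points, r_i, mom_i) should reproduce the loop's B values column by column; the per-point omega is then gamma_mu * np.sqrt((B*B).sum(axis=0)). The chunk size is a tunable trade-off between speed and peak memory.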

  • Scope of "library" methods

    - by JS
    Hello, I'm apparently laboring under a poor understanding of Python scoping. Perhaps you can help.

    Background: I'm using the 'if __name__ == "__main__"' construct to perform "self-tests" in my module(s). Each self-test makes calls to the various public methods and prints their results for visual checking as I develop the modules. To keep things "purdy" and manageable, I've created a small method to simplify the testing of method calls:

        def pprint_vars(var_in):
            print("%s = '%s'" % (var_in, eval(var_in)))

    Calling pprint_vars with:

        pprint_vars('some_variable_name')

    prints:

        some_variable_name = 'foo'

    All fine and good.

    Problem statement: Not happy to just KISS, I had the brain-drizzle to move my handy-dandy 'pprint_vars' method into a separate file named 'debug_tools.py' and simply import 'debug_tools' whenever I wanted access to 'pprint_vars'. Here's where things fall apart. I would expect

        import debug_tools

        foo = 'bar'
        debug_tools.pprint_vars('foo')

    to continue working its magic and print:

        foo = 'bar'

    Instead, it greets me with:

        NameError: name 'foo' is not defined

    Irrational belief: I believed (apparently mistakenly) that import puts imported methods (more or less) "inline" with the code, and thus the variable scoping rules would remain similar to if the method were defined inline.

    Plea for help: Can someone please correct my (mis)understanding of scoping as it regards imports?

    Thanks, JS
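
    (The short explanation: import does not inline anything -- each module keeps its own global namespace, so eval() inside debug_tools resolves names against debug_tools' globals, not the caller's. One way to make the helper see the caller's names is to walk one frame up the stack, sketched here; frame introspection is a CPython-friendly trick rather than a guarantee of every Python implementation:)

        # debug_tools.py
        import inspect

        def pprint_vars(var_in):
            caller = inspect.currentframe().f_back   # the frame that called us
            value = eval(var_in, caller.f_globals, caller.f_locals)
            print("%s = '%s'" % (var_in, value))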

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties; for example, select all entries between times T1 and T2 where the noise level is smaller than X. On the other hand, I would like to use a NoSQL/key-value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across.

    I know that you cannot use multiple inequality filters for the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case.

    Now, this makes me wonder: is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, MongoDB? I've looked in the documentation, but the only thing I've found was the following snippet:

        Note that any of the operators on this page can be combined in the
        same query document. For example, to find all documents where j is
        not equal to 3 and k is greater than 10, you'd query like so:

        db.things.find({j: {$ne: 3}, k: {$gt: 10}});

    So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-)

    Any input and enlightenment is welcome :-)
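
    (For MongoDB specifically, two inequality filters on two different properties in one query are allowed -- a compound index spanning both fields only affects speed, not legality. A pymongo sketch of the noise-level example, with database, collection, and field names invented:)

        from pymongo import MongoClient

        t1, t2, x = 1275000000, 1275600000, 40      # placeholder bounds
        db = MongoClient().sensors                  # names here are made up
        for doc in db.readings.find({
            "timestamp": {"$gte": t1, "$lte": t2},  # a range on one property...
            "noiselevel": {"$lt": x},               # ...an inequality on another
        }):
            print(doc)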

  • Search engine recommendation for 100 sites of about 4000 pages

    - by fwkb
    I am looking for a search engine that can regularly (daily-ish) scan about 100 pages for changes and index an associated site if changes since the last scan are found. It should be able to handle about 100 sites, each averaging 4000 pages of about 5k average size, each on a different server (but only the one centralized search engine). Each of these sites will have a search form that gets submitted to this search engine. The results that are returned must be specific to the site that submitted them. I create the templates for the external sites, so I can give the search form a hidden field that specifies which site the form is submitted from.

    What would you recommend I look into? I would love to use a Python-based system for this, if feasible. I am currently using something called iSearch2. It doesn't seem very stable at this scale; the description of the product states it is not really intended to do multiple sites; it is in PHP (which is less comfortable to me than Python); and it has a few other shortcomings for my specific situation.
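
    (Since a Python-based system is preferred, one candidate worth evaluating is Whoosh, a pure-Python indexing library. A single index with a site field can serve all 100 sites, and each query gets filtered by the hidden form field. A rough sketch -- the schema and all names are illustrative, not a drop-in design, and whether Whoosh holds up at ~400k documents would need testing:)

        import os
        from whoosh import index
        from whoosh.fields import ID, TEXT, Schema
        from whoosh.qparser import QueryParser
        from whoosh.query import Term

        schema = Schema(url=ID(stored=True, unique=True),
                        site=ID(stored=True),
                        body=TEXT)
        if not os.path.exists("indexdir"):
            os.makedirs("indexdir")
        ix = index.create_in("indexdir", schema)

        writer = ix.writer()
        writer.add_document(url=u"http://example.com/a", site=u"site17",
                            body=u"page text extracted by the crawler")
        writer.commit()

        with ix.searcher() as searcher:
            q = QueryParser("body", ix.schema).parse(u"user's search terms")
            # restrict results to the site named in the hidden form field
            hits = searcher.search(q, filter=Term("site", u"site17"))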

  • Parsing HTTP - Bytes.length != String.length

    - by hotzen
    Hello, I consume HTTP via nio.SocketChannel, so I get chunks of data as Array[Byte]. I want to put these chunks into a parser and continue parsing after each chunk has been put. HTTP itself seems to use an ISO-8859 charset, but the payload/body may be arbitrarily encoded: if the HTTP Content-Length specifies X bytes, the UTF-8-decoded body may have far fewer characters (one character may be represented in UTF-8 by two bytes, etc.).

    So what is a good parsing strategy to honor an explicitly specified Content-Length and/or a Transfer-Encoding: chunked, which specifies a chunk length to be honored?

    - Append each data chunk to a mutable.ArrayBuffer[Byte], search for CRLF in the bytes, decode everything from 0 until CRLF to String, and match with regular expressions like StatusRegex, HeaderRegex, etc.?
    - Decode each data chunk with the proper charset (e.g. ISO-8859, UTF-8, etc.) and add it to a StringBuilder. With this solution I am not able to honor any Content-Length or chunk size, but... do I have to care about it?
    - Any other solution...?
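
    (Whichever strategy wins, two facts anchor it: Content-Length and chunk sizes always count encoded bytes, never decoded characters, so byte counting has to happen before decoding; and an incremental decoder copes with a multi-byte character that straddles a chunk boundary. A quick Python illustration of both points:)

        import codecs

        body = u"h\u00e9llo w\u00f6rld"
        print(len(body), len(body.encode("utf-8")))   # 11 characters, 13 bytes

        # an incremental decoder buffers a split multi-byte sequence between feeds
        dec = codecs.getincrementaldecoder("utf-8")()
        chunks = [b"h\xc3", b"\xa9llo"]               # e-acute split across chunks
        text = u"".join(dec.decode(c) for c in chunks)
        print(text)                                   # hello with an e-acute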

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax, and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js): I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children.

    So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a pure web developer (PHP) I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk nor in MySQL, but in RAM as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user.

    I could also spawn a PHP process per chat room (many fewer users), but AFAIK there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
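
    (On the "how do single-threaded servers juggle thousands of sockets" point: they don't process events in parallel, they multiplex -- an OS call like select/epoll reports which sockets are ready, and one loop services them in turn, which works because each piece of work is tiny. That is node.js's model, and the same shape fits in a few lines of Python; a bare sketch with no partial-write or error handling, port number invented:)

        import select
        import socket

        server = socket.socket()
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", 9000))
        server.listen(128)

        clients = []
        while True:
            readable, _, _ = select.select([server] + clients, [], [])
            for s in readable:
                if s is server:
                    conn, _ = server.accept()   # a new chat user connected
                    clients.append(conn)
                else:
                    msg = s.recv(4096)
                    if not msg:                 # user disconnected
                        clients.remove(s)
                        s.close()
                        continue
                    for c in clients:           # broadcast to everyone else
                        if c is not s:
                            c.sendall(msg)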

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited).

    So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive.

    I've studied the GNU tar, Pax, and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
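
    (The asker doubts a scripting language can keep up, and that may well hold at LTO-4 speeds -- but the single-read structure itself is easy to express: hand the tar writer a wrapped file object whose read() also feeds a digest, so each file is consumed exactly once. A sketch with Python's standard tarfile module, output file name invented:)

        import hashlib
        import sys
        import tarfile

        class HashingReader(object):
            """File-like wrapper: every read also updates an md5 digest."""
            def __init__(self, path):
                self.f = open(path, "rb")
                self.md5 = hashlib.md5()
            def read(self, size=-1):
                data = self.f.read(size)
                self.md5.update(data)
                return data
            def close(self):
                self.f.close()

        out = tarfile.open("archive.tar", "w")   # or a tape device / a pipe
        for path in sys.argv[1:]:
            reader = HashingReader(path)
            # addfile() pulls the file's bytes through our wrapper -- one pass
            out.addfile(out.gettarinfo(path), fileobj=reader)
            reader.close()
            print("%s  %s" % (reader.md5.hexdigest(), path))
        out.close()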

  • Why are symbols not frozen strings?

    - by Alex Chaffee
    I understand the theoretical difference between Strings and Symbols. I understand that Symbols are meant to represent a concept or a name or an identifier or a label or a key, and Strings are a bag of characters. I understand that Strings are mutable and transient, while Symbols are immutable and permanent. I even like how Symbols look different from Strings in my text editor.

    What bothers me is that, practically speaking, Symbols are so similar to Strings that the fact that they're not implemented as Strings causes a lot of headaches. They don't even support duck typing or implicit coercion, unlike the other famous "the same but different" couple, Float and Fixnum. The mere existence of HashWithIndifferentAccess, and its rampant use in Rails and other frameworks, demonstrates that there's a problem here, an itch that needs to be scratched.

    Can anyone tell me a practical reason why Symbols should not be frozen Strings? Other than "because that's how it's always been done" (historical) or "because symbols are not strings" (begging the question).

    Consider the following astonishing behavior:

        :apple == "apple"                     #=> false, should be true
        :apple.hash == "apple".hash           #=> false, should be true
        {apples: 10}["apples"]                #=> nil, should be 10
        {"apples" => 10}[:apples]             #=> nil, should be 10
        :apple.object_id == "apple".object_id #=> false, but that's actually fine

    All it would take to make the next generation of Rubyists less confused is this:

        class Symbol < String
          def initialize *args
            super
            self.freeze
          end
        end

    (and a lot of other library-level hacking, but still, not too complicated.)

    See also:

    - http://onestepback.org/index.cgi/Tech/Ruby/SymbolsAreNotImmutableStrings.red
    - http://www.randomhacks.net/articles/2007/01/20/13-ways-of-looking-at-a-ruby-symbol
    - Why does my code break when using a hash symbol, instead of a hash string?
    - Why use symbols as hash keys in Ruby?
    - What are symbols and how do we use them?
    - Ruby Symbols vs Strings in Hashes
    - Can't get the hang of symbols in Ruby
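
    (A cross-language data point for the "symbols could just be frozen strings" position: Python gets by with one immutable string type plus an optional interning step for identity-fast keys, so none of the astonishing comparisons above can arise there. A small sketch:)

        import sys

        a = sys.intern("apple")
        b = sys.intern("apple")
        print(a is b)                    # True -- interned strings share one object
        print(a == "apple")              # True -- yet they compare equal to plain strings
        print(hash(a) == hash("apple"))  # True -- so dict lookups agree either way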

  • Fading out everything but (this) - while honoring a click()

    - by Kasper Lewau
    I'm trying to achieve a fading navigation system, where everything in the nav but the element being hovered will fade out to, say, 0.3 opacity. At the same time, I want clicks to have a greater "value", so as to not fade out a clicked element (or in this case, the active subpage). That didn't make much sense to me either; I'll just post the code I have.

        <nav id="main">
            <ul>
                <li><a>work</a></li>
                <li><a>about me</a></li>
                <li><a>contact</a></li>
            </ul>
        </nav>

    And the script that makes it sparkle:

        var nava = "nav#main ul li a";

        $(function(){
            $(nava).hover(function(){
                $(nava).not(this).removeClass().addClass("inactive");
                $(this).addClass("active");
            });

            $(nava).click(function(){
                $(this).removeClass().addClass("active");
            });
        });

    And the classes / CSS (LESS):

        .inactive { color: @color2; border-bottom: 0 solid #000; }
        .active   { color: @color1; border-bottom: 1px solid #000; }
        nav#main ul li a { color: @color1; }

    Basically, the hover states take priority over the click, which I do not want to happen. Ideally I'd like for all of the anchor elements to revert to their original state whenever you hover out from the unordered list holding it all. If anyone has some pointers on this, it'd be greatly appreciated. Cheers!

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired workflow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses each line into an intermediary (shared) representation.
    4. This parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    (*'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file.)

    How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory.

    As a side note, I have a working implementation of this workflow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.
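
    (For reference, the reader contract in step 3 is small however it gets wired up: yield successive blocks of up to N lines without ever holding the whole file. Expressed compactly in Python as a specification sketch, not Spring code:)

        from itertools import islice

        def blocks_of_lines(path, n):
            """Yield lists of up to n lines; never reads the whole file in."""
            with open(path) as f:
                while True:
                    block = list(islice(f, n))
                    if not block:
                        return
                    yield block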

  • VS2010 development web server does not use integrated-mode HTTP handlers/modules

    - by Domenic
    I am developing an ASP.NET MVC 2 web site, targeted at .NET Framework 4.0, using Visual Studio 2010. My web.config contains the following code:

        <system.webServer>
            <modules runAllManagedModulesForAllRequests="true">
                <add name="XhtmlModule" type="DomenicDenicola.Website.XhtmlModule" />
            </modules>
            <handlers>
                <add name="DotLess" type="dotless.Core.LessCssHttpHandler,dotless.Core"
                     path="*.less" verb="*" />
            </handlers>
        </system.webServer>

    When I use Build > Publish to put the web site on my local IIS7 instance, it works great. However, when I use Debug > Start Debugging, neither the HTTP handler nor the module is executed on any requests. Strangely enough, when I put the handler and module <add /> tags back into <system.web /> under <httpHandlers /> and <httpModules />, they work. This seems to imply that the development web server is running in classic mode. How do I fix this?

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc.

    I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using LINQ through the entities/objects that are created by the ADO.NET EDM wizard.

    However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question.

    The real question I have is: if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve, and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

  • IF-block brackets: best practice

    - by MasterPeter
    I am preparing a short tutorial for level 1 uni students learning JavaScript basics. The task is to validate a phone number. The number must not contain non-digits and must be 14 digits long or less. The following code excerpt is what I came up with, and I would like to make it as readable as possible.

        if ( //set of rules for invalid phone number
            phoneNumber.length == 0     //empty
            || phoneNumber.length > 14  //too long
            || /\D/.test(phoneNumber)   //contains non-digits
        ) {
            setMessageText(invalid);
        } else {
            setMessageText(valid);
        }

    A simple question I cannot quite answer myself and would like to hear your opinions on: how should the surrounding (outermost) brackets be positioned? It's hard to see the difference between a normal and a curly bracket.

    - Do you usually put the last ) on the same line as the last condition?
    - Do you keep the first opening ( on a line by itself?
    - Do you wrap each individual sub-condition in brackets too?
    - Do you align horizontally the first ( with the last ), or do you place the last ) in the same column as the if?
    - Do you keep ) { on a separate line, or do you place the last ) on the same line with the last sub-condition and then place the opening { on a new line? Or do you just put the ) { on the same line as the last sub-condition?

    Community wiki.

  • Converting ntext to nvarchar(max) - Getting around the size limitation

    - by Overflew
    Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max), and encountering an error on the size limit. There's a large amount of existing data, some of which is more than the 8k limit, I believe. We're looking to convert this so that the field is searchable in LINQ.

    The two SQL statements I've tried are:

        update Table set dataNVarChar = convert(nvarchar(max), dataNtext)
        where dataNtext is not null

        update Table set dataNVarChar = cast(dataNtext as nvarchar(max))
        where dataNtext is not null

    And the error I get is:

        Cannot create a row of size 8086 which is greater than the allowable
        maximum row size of 8060.

    This is using SQL Server 2008. Any help appreciated. Thanks.

    Update / solution: The marked answer below is correct, and SQL 2008 can change the column to the correct data type in my situation, and there are no dramas with the LINQ-utilising application we use on top of it:

        alter table [TBL] alter column [COL] nvarchar(max)

    I've also been advised to follow it up with:

        update [TBL] set [COL] = [COL]

    which completes the conversion by moving the data from the LOB structure to the table (if the length is less than 8k), which improves performance / keeps things proper.

  • How to read time from recorded surveillance camera video?

    - by stressed_geek
    I have a problem where I have to read the time of recording from video recorded by a surveillance camera. The time shows up in the top-left area of the video. Below is a link to a screen grab of the area which shows the time. Also, the digit color (white/black) keeps changing during the duration of the video.

        http://i55.tinypic.com/2j5gca8.png

    Please guide me in the direction to approach this problem. I am a Java programmer, so I would prefer an approach through Java.

    EDIT: Thanks unhillbilly for the comment. I had looked at the Ron Cemer OCR library and its performance is much below our requirement. Since the OCR performance is less than desired, I was planning to build a character set using the screen grabs for all the digits, and use some image/pixel comparison library to compare the frame time with the character set, which will show a probabilistic result after comparison. So I was looking for a good image comparison library (I would be OK with a non-Java library which I can run using the command line). Also, any advice on the above approach would be really helpful.
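
    (The comparison step of that plan doesn't strictly need a library: crop each digit cell from the frame, then score it against the saved templates with a normalized cross-correlation, whose absolute value also shrugs off the white/black inversion. A NumPy sketch -- the array shapes and names are assumptions, and the crop coordinates would come from the fixed timestamp position:)

        import numpy as np

        def best_digit(glyph, templates):
            """glyph: 2-D float array of one cropped digit cell.
            templates: dict mapping '0'..'9' to arrays of the same shape."""
            g = (glyph - glyph.mean()) / (glyph.std() + 1e-9)
            scores = {}
            for digit, tmpl in templates.items():
                t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
                # abs() lets white-on-black glyphs match black-on-white templates
                scores[digit] = abs((g * t).mean())
            return max(scores, key=scores.get)   # the probabilistic 'best match'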

  • When and why can sprintf fail?

    - by Srekel
    I'm using swprintf to build a string into a buffer (using a loop among other things).

        const int MaxStringLengthPerCharacter = 10 + 1;

        wchar_t* pTmp = pBuffer;
        for ( size_t i = 0; i < nNumPlayers ; ++i) {
            const int nPlayerId = GetPlayer(i);
            const int nWritten = swprintf(pTmp, MaxStringLengthPerCharacter, TEXT("%d,"), nPlayerId);
            assert(nWritten >= 0 );
            pTmp += nWritten;
        }
        *pTaskPlayers = '\0';

    If during testing the assert never hits, can I be sure that it will never hit in live code? That is, do I need to check if nWritten < 0 and handle that, or can I safely assume that there won't be a problem? Under which circumstances can it return -1? The documentation more or less just states "if the function fails". In one place I've read that it will fail if it can't match the arguments (i.e. the formatting string to the varargs), but that doesn't worry me. I'm also not worried about buffer overrun in this case - I know the buffer is big enough.

  • Is there any appreciable difference between if and if-else?

    - by Drew
    Given the following code snippets, is there any appreciable difference?

        public boolean foo(int input) {
            if(input > 10) {
                doStuff();
                return true;
            }
            if(input == 0) {
                doOtherStuff();
                return true;
            }
            return false;
        }

    vs.

        public boolean foo(int input) {
            if(input > 10) {
                doStuff();
                return true;
            } else if(input == 0) {
                doOtherStuff();
                return true;
            } else {
                return false;
            }
        }

    Or would the single exit principle be better here with this piece of code...

        public boolean foo(int input) {
            boolean toBeReturned = false;
            if(input > 10) {
                doStuff();
                toBeReturned = true;
            } else if(input == 0) {
                doOtherStuff();
                toBeReturned = true;
            }
            return toBeReturned;
        }

    Is there any perceptible performance difference? Do you feel one is more or less maintainable/readable than the others?

  • How can I split my conkeror-rc config over multiple files?

    - by Ryan Thompson
    Short version: can you help me fill in this code?

        var conkeror_settings_dir = ".conkeror.mozdev.org/settings";

        function load_all_js_files_in_dir (dir) {
            var full_path = get_home_directory().appendRelativePath(dir);
            // YOUR CODE HERE
        }

        load_all_js_files_in_dir(conkeror_settings_dir);

    Background: I'm trying out Conkeror for web browsing. It's an emacs-like browser running on Mozilla's rendering engine, using JavaScript as its configuration language (filling the role that elisp plays for emacs).

    In my emacs config, I have split my customizations into a series of files, where each file is a single unit of related options (for example, all my perl-related settings might be in perl-settings.el). All these settings files are loaded automatically by a function in my .emacs that simply loads every elisp file under my "settings" directory.

    I am looking to structure my Conkeror config in the same way, with my main conkeror-rc file basically being a stub that loads all the js files under a certain directory relative to my home directory. Unfortunately, I am much less literate in JavaScript than I am in elisp, so I don't even know how to "source" a file.

  • Validating Time & Date To Be At Least A Certain Amount Of Time In The Future

    - by MJH
    I've built a reservation form for a taxi company which works fine, but I'm having an issue with users making reservations that are due too soon in the future. Since the entire form is kind of long, I first want to make sure the user is not trying to make a reservation for less than an hour ahead of time, without them having to fill out the whole form. This is what I have come up with so far, but it's just not working:

        <?php
        //Set local time zone.
        date_default_timezone_set('America/New_York');

        //Get current date and time.
        $current_time = date('Y-m-d H:i:s');

        //Set reservation time variable
        $res_datetime = $_POST['res_datetime'];

        //Set event time.
        $event_time = strtotime($res_datetime);
        ?>
        <!doctype html>
        <html>
        <head>
        <meta charset="utf-8">
        <title>Check Date and Time</title>
        </head>
        <?php
        //Check to be sure reservation time is at least one hour in the future.
        if (($current_time - $event_time) <= (3600)) {
            echo "You must make a reservation at least one hour ahead of time.";
        }
        ?>
        <form name="datetime" action="" method="post">
            <input name="res_datetime" type="datetime-local" id="res_datetime">
            <input type="submit">
        </form>
        <body>
        </body>
        </html>

    How can I create a validation check to make sure the date and time of the reservation is at least one hour ahead of time?
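
    (Two things jump out of that snippet: $current_time is a formatted string, so the subtraction never compares two timestamps, and the test runs backwards -- what matters is how far the event sits after now. In PHP terms that means taking time() for the current moment and testing $event_time - time() < 3600. The intended check, sketched in Python for clarity, with an example value in the format a datetime-local input submits:)

        from datetime import datetime, timedelta

        # a datetime-local input submits values like "2014-06-01T18:30"
        res = datetime.strptime("2014-06-01T18:30", "%Y-%m-%dT%H:%M")

        if res - datetime.now() < timedelta(hours=1):
            print("You must make a reservation at least one hour ahead of time.")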

  • If a nonblocking recv with MSG_PEEK succeeds, will a subsequent recv without MSG_PEEK also succeed?

    - by Michael Wolf
    Here's a simplified version of some code I'm working on:

        void stuff(int fd) {
            int ret1, ret2;
            char buffer[32];

            ret1 = recv(fd, buffer, 32, MSG_PEEK | MSG_DONTWAIT);

            /* Error handling -- and EAGAIN handling -- would go here.
               Bail if necessary. Otherwise, keep going. */

            /* Can this call to recv fail, setting errno to EAGAIN? */
            ret2 = recv(fd, buffer, ret1, 0);
        }

    If we assume that the first call to recv succeeds, returning a value between 1 and 32, is it safe to assume that the second call will also succeed? Can ret2 ever be less than ret1? In which cases?

    (For clarity's sake, assume that there are no other error conditions during the second call to recv: that no signal is delivered, that it won't set ENOMEM, etc. Also assume that no other threads will look at fd. I'm on Linux, but MSG_DONTWAIT is, I believe, the only Linux-specific thing here. Assume that the right fcntl was set previously on other platforms.)
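
    (The unexceptional path is easy to poke at interactively: MSG_PEEK returns data without consuming it, so the bytes it reported stay queued for the next read. A Python sketch of that baseline behavior -- it demonstrates only the normal case on a local socket pair, not the TCP edge cases the question is really about:)

        import socket

        a, b = socket.socketpair()
        a.send(b"hello")

        peeked = b.recv(32, socket.MSG_PEEK)  # look, but leave the data queued
        taken = b.recv(len(peeked))           # the same bytes are still there
        print(peeked, taken)                  # b'hello' b'hello'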

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding, and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like, implementation-wise?

    I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere?

    I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference, and thus Objective-C is in a somewhat unique situation, having to deal with copying what's on the stack.
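
    (It varies by language, which is part of the confusion. In Python, for instance, a closure captures variables rather than values: the inner function holds a reference to a shared cell, so no snapshot is taken and mutation after creation stays visible. A quick demonstration:)

        def make():
            x = 1
            def get():
                return x      # 'x' is looked up in a shared cell, not copied
            x = 2             # rebinding after the closure already exists...
            return get

        print(make()())       # ...prints 2, not 1: no snapshot was taken
        print(make().__closure__[0].cell_contents)  # the cell itself holds 2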
