Search Results

Search found 90546 results on 3622 pages for 'code optimization'.


  • Opening the Internet Settings Dialog and using Windows Default Network Settings via Code

    - by Rick Strahl
    Ran into a question from a client the other day about how to deal with Internet connection settings for running HTTP requests. In this case it was an old FoxPro app using WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring connection properties. Regardless of the platform or tools used to make HTTP connections, you can usually configure custom connection and proxy settings manually in your application. However, that is a repetitive process for each application and requires you to track system information in your application, which is undesirable. Often it's much easier to rely on the system-wide proxy settings that Windows provides via the Internet Settings dialog. The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog you see when you open Internet Explorer's Options dialog. This dialog controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how proxies are used if configured. Depending on how the HTTP client is configured, it can typically inherit and use these global settings.

    Loading the Settings Dialog Programmatically

    The settings dialog is a Control Panel applet named inetcpl.cpl, and you can use any shell execution mechanism (the Run dialog, the ShellExecute API, Process.Start() in .NET, etc.) to invoke it. Changes made there are immediately reflected in any application that uses the default connection settings. In .NET you can simply do this to bring up the Internet Settings dialog with the Connections tab selected:

        Process.Start("inetcpl.cpl", ",4");

    In FoxPro you can simply use the RUN command to execute inetcpl.cpl:

        lcCmd = "inetcpl.cpl ,4"
        RUN &lcCmd

    Using the Default Connection/Proxy Settings

    When using WinInet you specify the HTTP connect type in the call to InternetOpen() like this (FoxPro code here):

        hInetConnection=;
          InternetOpen(THIS.cUserAgent,0,;
            THIS.chttpproxyname,THIS.chttpproxybypass,0)

    The second parameter of 0 specifies that the default system proxy settings should be used, which means the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 (direct: no proxies, ignore system settings) and 3 (explicit proxy specification). In most situations a connection mode setting of 0 should work.

    In .NET, HTTP connections are direct by default, so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is at the application level in the config file:

        <configuration>
          <system.net>
            <defaultProxy>
              <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" />
            </defaultProxy>
          </system.net>
        </configuration>

    You can do the same sort of thing in code by specifying the proxy explicitly with System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to Web Services or using the HttpWebRequest class you can set the proxy with:

        StoreService.Proxy = WebProxy.GetDefaultProxy();

    All of this is pretty easy to deal with and, in my opinion, a much better choice than tracking connection settings in your own application.
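    A minimal C# sketch tying the pieces above together (the URL is a placeholder; note that WebProxy.GetDefaultProxy() is marked obsolete in later .NET versions, where WebRequest.DefaultWebProxy is the preferred way to reach the same system settings):

        using System;
        using System.Diagnostics;
        using System.Net;

        class SystemProxyExample
        {
            static void Main()
            {
                // Open the Internet Settings dialog on the Connections tab,
                // same as the inetcpl.cpl trick above.
                Process.Start("inetcpl.cpl", ",4");

                // Make a request that honors the system-wide settings.
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
                request.Proxy = WebRequest.DefaultWebProxy; // inherits Internet Settings
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine((int)response.StatusCode);
                }
            }
        }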
    Plus, if you use the default settings, it's highly likely that the connection settings are already properly configured, making further configuration rare. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Windows, HTTP, .NET, FoxPro

    Read the article

  • Part 15: Fail a build based on the exit code of a console application

    In the series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    When a console application or batch file fails, its exit code is set to a value other than 0. You would expect the build to see this and report an error. This is not true, however. First we set up the scenario. Add a ConsoleApplication project to the solution you are building. In the Main function, set the ExitCode to 1:

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("This is an error in the script.");
                Environment.ExitCode = 1;
            }
        }

    Check in the code. You can choose to include this console application in the build, or you can decide to add the exe to source control. Now modify the build process template CustomTemplate.xaml:

    Add an argument ErrornousScript.
    Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items".
    Add a Sequence activity to the template.
    In the Sequence, add a ConvertWorkspaceItem and an InvokeProcess activity (see Part 14: Execute a PowerShell script for more detailed steps).
    In the FileName property of the InvokeProcess, use ErrornousScript so the console application will be called.

    Modify the build definition and make sure that ErrornousScript points to the exe that sets the ExitCode to 1. You have now set up a build definition that will execute the erroneous console application. When you run it, you will see that the build succeeds. This is not what you want! To solve this, you can make use of the Result property on the InvokeProcess activity. So let's change our build process template:

    Add new variables (scoped to the sequence where you run the console application) called ExitCode (type = Int32) and ErrorMessage.
    Click on the InvokeProcess activity and set the Result property to ExitCode.
    In the Handle Standard Output section of the InvokeProcess, add a Sequence activity.
    In the Sequence activity, add an Assign primitive with the following properties: To = ErrorMessage, Value = If(Not String.IsNullOrEmpty(ErrorMessage), Environment.NewLine + ErrorMessage, "") + stdOutput
    Also add the default BuildMessage to the sequence to output stdOutput.
    Beneath the InvokeProcess activity, add an If activity with the condition ExitCode <> 0.
    In the Then section, add a Throw activity and set its Exception property to New Exception(ErrorMessage).

    When you now check in the build process template and run the build, the build fails as expected. And that is exactly what we want. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
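    For readers who find the XAML hard to follow, here is the same check sketched as plain C# (the executable path is a placeholder); the InvokeProcess, If, and Throw activities implement essentially this logic:

        using System;
        using System.Diagnostics;

        class FailOnExitCode
        {
            static void Main()
            {
                var psi = new ProcessStartInfo(@"C:\Tools\ErrornousScript.exe")
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (Process process = Process.Start(psi))
                {
                    // Mirrors the Handle Standard Output -> ErrorMessage assign.
                    string errorMessage = process.StandardOutput.ReadToEnd();
                    process.WaitForExit();

                    // Mirrors the If (ExitCode <> 0) -> Throw activities.
                    if (process.ExitCode != 0)
                        throw new Exception(errorMessage);
                }
            }
        }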

    Read the article

  • Using a parser to locate faulty code

    - by ryan.riverside
    Lately I've been working a lot in PHP and have run into an abnormally large number of parsing errors. I realize these are my own fault and a result of sloppy initial coding on my part, but it's getting to the point that I'm spending more time resolving tags than developing. In the interest of not slamming my productivity, are there any tricks for locating the problem in the code? What I'd really be looking for is a line to put in the code which would output the entire faulty tag in the parsing error, or something similar. Purely for reference's sake, my current error is:

        Parse error: syntax error, unexpected '}' in /home/content/80/9480880/html/cache/tpl_prosilver_viewtopic_body.html.php on line 50

    which refers to this:

        </dd><dd><?php if ($_poll_option_val['POLL_OPTION_RESULT'] == 0) { echo ((isset($this->_rootref['L_NO_VOTES'])) ? $this->_rootref['L_NO_VOTES'] : ((isset($user->lang['NO_VOTES'])) ? $user->lang['NO_VOTES'] : '{ NO_VOTES }')); } else { echo $_poll_option_val['POLL_OPTION_PERCENT']; } ?></dd>
        </dl>
        <?php }} if ($this->_rootref['S_DISPLAY_RESULTS']) { ?>
        <dl>
        <dt>&nbsp;</dt>
        <dd class="resultbar"><?php echo ((isset($this->_rootref['L_TOTAL_VOTES'])) ? $this->_rootref['L_TOTAL_VOTES'] : ((isset($user->lang['TOTAL_VOTES'])) ? $user->lang['TOTAL_VOTES'] : '{ TOTAL_VOTES }')); ?> : <?php echo (isset($this->_rootref['TOTAL_VOTES'])) ? $this->_rootref['TOTAL_VOTES'] : ''; ?></dd>
        </dl>
        <?php } if ($this->_rootref['S_CAN_VOTE']) { ?>
        <dl style="border-top: none;">
        <dt>&nbsp;</dt>
        <dd class="resultbar"><input type="submit" name="update" value="<?php echo ((isset($this->_rootref['L_SUBMIT_VOTE'])) ? $this->_rootref['L_SUBMIT_VOTE'] : ((isset($user->lang['SUBMIT_VOTE'])) ? $user->lang['SUBMIT_VOTE'] : '{ SUBMIT_VOTE }')); ?>" class="button1" /></dd>
        </dl>
        <?php } if (! $this->_rootref['S_DISPLAY_RESULTS']) { ?>
        <dl style="border-top: none;">
        <dt>&nbsp;</dt>
        <dd class="resultbar"><a href="<?php echo (isset($this->_rootref['U_VIEW_RESULTS'])) ? $this->_rootref['U_VIEW_RESULTS'] : ''; ?>"><?php echo ((isset($this->_rootref['L_VIEW_RESULTS'])) ? $this->_rootref['L_VIEW_RESULTS'] : ((isset($user->lang['VIEW_RESULTS'])) ? $user->lang['VIEW_RESULTS'] : '{ VIEW_RESULTS }')); ?></a></dd>
        </dl>
        <?php } ?>
        </fieldset></div>
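    As a hedged aside (not something the poster mentions): PHP's built-in linter reports the first syntax error in a file without executing it, which at least pins the failure down quickly when editing templates by hand:

        php -l cache/tpl_prosilver_viewtopic_body.html.php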

    Read the article

  • Opening a new Window from ASP.NET code behind

    - by TATWORTH
    At http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx there is an excellent post on how to open a new window from code behind. The purists may not like it, but it helped solve a problem for a client's client. Here is an update for VS2010 users:

        using System;
        using System.Web;
        using System.Web.UI;

        /// <summary>
        /// Response helper for opening a popup window from code behind.
        /// </summary>
        public static class ResponseHelper
        {
            /// <summary>
            /// Redirect to a popup window.
            /// </summary>
            /// <param name="response">The response.</param>
            /// <param name="url">URL to open to</param>
            /// <param name="target">Target of window: _self or _blank</param>
            /// <param name="windowFeatures">Features such as window bar</param>
            /// <remarks>
            ///     <list type="bullet">
            ///         <item>
            /// From http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx
            ///         </item>
            ///         <item>
            /// Note: If you use it outside the context of a Page request, you can't redirect to a new window. The reason is the need to call the ResolveClientUrl method on Page, which can't be done if there is no Page. So if you need to use this from an HttpHandler other than a Page, you are on your own.
            ///         </item>
            ///         <item>
            /// Beware of popup blockers.
            ///         </item>
            ///         <item>
            /// Note: When you are redirecting to a new window, the current window will still be hanging around. Normally redirects abort the current request and no further processing occurs. But for these redirects, processing continues, since we still have to serve the response for the current window (which also happens to contain the script to open the new window, so it is important that it completes).
            ///         </item>
            ///         <item>
            /// Sample call: Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");
            ///         </item>
            ///     </list>
            /// </remarks>
            public static void Redirect(this HttpResponse response, string url, string target, string windowFeatures)
            {
                if ((String.IsNullOrEmpty(target) || target.Equals("_self", StringComparison.OrdinalIgnoreCase)) && String.IsNullOrEmpty(windowFeatures))
                {
                    response.Redirect(url);
                }
                else
                {
                    // "as" yields null for a non-Page handler instead of
                    // throwing an InvalidCastException before the null check.
                    Page page = HttpContext.Current.Handler as Page;
                    if (page == null)
                    {
                        throw new InvalidOperationException("Cannot redirect to new window outside Page context.");
                    }
                    url = page.ResolveClientUrl(url);
                    string script;
                    if (!String.IsNullOrEmpty(windowFeatures))
                    {
                        script = @"window.open(""{0}"", ""{1}"", ""{2}"");";
                    }
                    else
                    {
                        script = @"window.open(""{0}"", ""{1}"");";
                    }
                    script = String.Format(script, url, target, windowFeatures);
                    ScriptManager.RegisterStartupScript(page, typeof(Page), "Redirect", script, true);
                }
            }
        }
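    A hedged usage sketch from a page's code-behind (the button handler and URL are hypothetical):

        protected void OpenReportButton_Click(object sender, EventArgs e)
        {
            // Opens popup.aspx in a new window via the extension method above.
            Response.Redirect("popup.aspx", "_blank", "menubar=0,width=400,height=300");
        }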

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending.

    Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way.

    As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface.

    The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit them to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system.

    This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now, I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or if an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary.

    Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
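    For concreteness, a minimal sketch of the reflective audit described above, assuming NUnit and matching the interface by name (the assembly and type names are hypothetical):

        using System.Linq;
        using System.Reflection;
        using NUnit.Framework;

        [TestFixture]
        public class SecurityImplementationAudit
        {
            [Test]
            public void OnlyTheExpectedImplementationExists()
            {
                // "SecurityLib" and the expected type name are placeholders.
                Assembly assembly = Assembly.Load("SecurityLib");
                var implementations = assembly.GetTypes()
                    .Where(t => t.IsClass &&
                                t.GetInterfaces().Any(i => i.Name == "IAuthenticatedUser"))
                    .Select(t => t.FullName)
                    .ToList();

                // Fails if an implementation appears anywhere it shouldn't.
                Assert.That(implementations,
                    Is.EquivalentTo(new[] { "SecurityLib.Internal.AuthenticatedUser" }));
            }
        }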

    Read the article

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications all based on the Zend framework and centered on a complex business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together by a common core). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months' time when my time is up anyone else coming in can have no problem extending what I have produced. Example structure:

        applicationStorage\ (contains all applications and associated data)
        applicationStorage\Applications\ (contains the applications themselves)
        applicationStorage\Applications\external\ (application grouping folder; contains all external customer access applications)
        applicationStorage\Applications\external\site\ (main external customer access application)
        applicationStorage\Applications\external\site\Modules\
        applicationStorage\Applications\external\site\Config\
        applicationStorage\Applications\external\site\Layouts\
        applicationStorage\Applications\external\site\ZendExtended\ (contains extended Zend classes specific to this application, e.g. ZendExtended_Controller_Action extends Zend_Controller_Action)
        applicationStorage\Applications\external\mobile\ (mobile external customer access application: different workflow, limited capabilities compared to the full site version)
        applicationStorage\Applications\internal\ (application grouping folder; contains all internal company applications)
        applicationStorage\Applications\internal\site\ (main internal application)
        applicationStorage\Applications\internal\mobile\ (mobile access has a different flow and limited abilities compared to the main site version)
        applicationStorage\Tests\ (contains PHPUnit tests)
        applicationStorage\Library\
        applicationStorage\Library\Service\ (contains all business logic, services and the service locator; these are completely decoupled from the Zend framework and rely on the models' interfaces)
        applicationStorage\Library\Zend\ (Zend framework)
        applicationStorage\Library\Models\ (doesn't know services but is linked to the Zend framework for DB operations; contains model interfaces and model data mappers for all business objects, e.g. Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)

    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.)

    The library contains all "universal" code. The Zend framework is at the heart of all applications, but I wanted my business logic to be Zend-framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it in private. So my hope is that in the future I will be able to rewrite the mappers and dbtables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend framework, if I want to move my business logic (services) to a non-Zend framework environment (maybe some other PHP framework).
    Using a service locator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. Example: all external applications will have a Service_Auth_External implementing service_auth_Interface registered; likewise, internal applications register Service_Auth_Internal implementing service_auth_Interface, retrieved via Service_Locator::getService('Auth') (see the sketch below for how the registration side might look). I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, with a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would be greatly appreciative. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and different available services:

        \applicationStorage\Applications\internal\webservice
        \applicationStorage\Applications\external\webservice
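    A hedged PHP sketch of the bootstrap registration described above (the class names Bootstrap and Service_Locator::setService are assumptions based on the description, not the poster's actual code):

        <?php
        // In the external application's bootstrap:
        class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
        {
            protected function _initServices()
            {
                // The internal application's bootstrap would register
                // Service_Auth_Internal here instead.
                Service_Locator::setService('Auth', new Service_Auth_External());
            }
        }

        // Anywhere in application code, regardless of which app is running:
        $auth = Service_Locator::getService('Auth'); // a service_auth_Interface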

    Read the article

  • how to make Facebook FBML code - Valid

    - by Athul
    Facebook FBML code shows as invalid. The code I need is a Facebook Fan Box widget: a sample Facebook fan widget having Stream and Fans. Please test the code with http://validator.w3.org/check and share it with me. You may find that every FBML code is invalid.

    Read the article

  • Compile and optimize for different target architectures

    - by Peter Smit
    Summary: I want to take advantage of compiler optimizations and processor instruction sets, but still have a portable application (running on different processors). Normally I could indeed compile 5 times and let the user choose the right version to run. My question is: how can I automate this, so that the processor is detected at runtime and the right executable is executed without the user having to choose it?

    I have an application with a lot of low-level math calculations. These calculations will typically run for a long time. I would like to take advantage of as much optimization as possible, preferably also of (not always supported) instruction sets. On the other hand I would like my application to be portable and easy to use (so I would not like to compile 5 different versions and let the user choose). Is there a possibility to compile 5 different versions of my code and dynamically run the most optimized version possible at execution time? With 5 different versions I mean versions with different instruction sets and different optimizations for processors. I don't care about the size of the application.

    At this moment I'm using gcc on Linux (my code is in C++), but I'm also interested in this for the Intel compiler and for the MinGW compiler for compilation to Windows. The executable doesn't have to be able to run on different OSes, but ideally there would also be a way to automatically select between 32-bit and 64-bit.

    Edit: Please give clear pointers on how to do it, preferably with small code examples or links to explanations. From my point of view I need a super-generic solution, which is applicable to any random C++ project I have later.

    Edit: I assigned the bounty to ShuggyCoUk; he had a great number of pointers to look out for. I would have liked to split it between multiple answers but that is not possible. I haven't implemented this yet, so the question is still 'open'! Please still add and/or improve answers, even though there is no bounty to be given anymore. Thanks everybody!
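    One hedged C++ sketch of the runtime-dispatch idea, using GCC's per-function target attribute and CPU detection built-in (both are GCC-specific and require a reasonably recent GCC; the kernel bodies are placeholders):

        #include <cstdio>

        // Three builds of the same hot routine; the target attribute lets
        // GCC use the named instruction set inside each function body.
        __attribute__((target("avx")))  static void kernel_avx()     { /* AVX-tuned math */ }
        __attribute__((target("sse2"))) static void kernel_sse2()    { /* SSE2-tuned math */ }
        static void kernel_generic() { /* portable fallback */ }

        int main()
        {
            // Detect the CPU once at startup and pick the best variant.
            if (__builtin_cpu_supports("avx"))
                kernel_avx();
            else if (__builtin_cpu_supports("sse2"))
                kernel_sse2();
            else
                kernel_generic();
            std::puts("dispatched");
            return 0;
        }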

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction> I am programming something I have never done before, and there might already be tools out there to do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch, even if there is a tool for this. So anyway, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also to keep track of different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of different versions of the text body as objects in Ruby and then serialize it, preferably in human-readable form (so I'll most likely use YAML for this), for editing if needed due to corruption by a software bug or a mistake made by an admin doing some version editing.

    So at first I just tried to dive in head first into this feature, only to find that generating a diff patch efficiently is more difficult than I thought. So I did some research and came across some ideas. Some I have implemented already and some I have not. However, it all pretty much revolves around the longest common subsequence problem, as you would already know if you have done anything with diff or diff-like features, and optimizing the function that solves it.

    Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix; but instead of incrementing the value stored in a cell when it finds a matching line, like in most longest-common-subsequence algorithms I have seen examples of, I increment when I have a non-matching line, so as to calculate edit distance instead of longest common subsequence. (As far as I can tell, the two approaches are essentially two sides of the same coin, so either could be used to derive an answer.) It then backtracks through the comparison matrix and notes when there was an incrementation and in which adjacent cell (west, northwest, or north) to determine that line's diff entry, and assumes all other lines to be unchanged.

    Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started getting worried about optimizing at least enough so that a spammer who somehow knew how I implemented the version control system, and knew my worst-case-scenario entry, still wouldn't be able to hit the server that badly. After some searching and reading of research papers and articles on the internet, I've come across several optimizations that seem decent, but all seem to have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So are the ones listed below worth it? I have listed them with known pros and cons.
    </introduction>

    <optimizations>

    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, and then truncate each section of unchanged lines at the beginning and end of each section. Then solve the edit distance of each subsequence.
    Pro: Changes the time increase as the changed area gets bigger from a quadratic increase to something more similar to a linear increase.
    Con: Figuring out where to split already seems to require solving edit distance, except now you don't care how it is changed. It would be fine if this were solvable by a process closer to solving Hamming distance, but a single insertion throws this off.

    2. Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness. Then solve the edit distance comparing the hash integers instead of the sequence elements themselves.
    Pro: Comparing two integers is faster than comparing two strings, so a slight performance gain is received after every comparison, which can be a lot overall.
    Con: Using a cryptographic hash function takes time to convert all the sequence elements and may end up costing more time than you gain back from the integer comparisons. You could use the built-in hash function for a string, but that will not guarantee uniqueness.

    3. Use lazy evaluation to only calculate the three center-most diagonals of the comparison matrix and then only calculate additional diagonals as needed. And then also use this approach to possibly remove the need on some comparisons to compare all three adjacent cells, as described here.
    Pro: Can turn an algorithm that always takes O(n * m) time into one where that is only the worst case: the best case becomes practically linear, and the average case is somewhere between the two.
    Con: It is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time comprehending how to convert it into Ruby based on how it is described at the site linked to above.

    4. Make a C module and do the hard work at the native level in C, with a Ruby wrapper for it so Ruby can make all the calls to it that it needs.
    Pro: I have to imagine that evaluating something like this in C could be a LOT faster.
    Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.

    5. This is an optimization for after the solving of edit distance, but the idea is to store additional combined diffs with the ones produced by each version, to make a delta-tree data structure with the most recently made diff as the root node of the tree, so getting to any version takes a worst-case time of O(log n) instead of O(n).
    Pro: Would make going back to an old version a lot faster.
    Con: It would mean that on every new commit the delta-tree would get a new root node, which will cost time to reorganize the delta-tree, for an operation that will be carried out a lot more often than going back a version, not to mention the unlikelihood it will be an old version.

    </optimizations>

    So are these things worth the effort?
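    A hedged Ruby sketch of optimization 2 above (hashing lines to integers before the edit-distance pass; Digest::SHA1 and the 64-bit truncation are arbitrary choices, and unpack1 needs Ruby 2.4+):

        require 'digest'

        # Map each line to a compact integer token so the edit-distance
        # inner loop compares integers instead of full strings.
        def tokenize(lines)
          lines.map { |line| Digest::SHA1.digest(line).unpack1('Q>') }
        end

        old_text = "line one\nline two\nline three\n"
        new_text = "line one\nline 2\nline three\n"

        old_tokens = tokenize(old_text.lines)
        new_tokens = tokenize(new_text.lines)
        # ... run the comparison matrix over old_tokens / new_tokens ...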

    Read the article

  • How can I strip Python logging calls without commenting them out?

    - by cdleary
    Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks). I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so:

        [Leave ERROR and above]
        my_module.SomeClass.method_with_lots_of_warn_calls

        [Leave WARN and above]
        my_module.SomeOtherClass.method_with_lots_of_info_calls

        [Leave INFO and above]
        my_module.SomeWeirdClass.method_with_lots_of_debug_calls

    Of course, you'd want to use it sparingly and probably with per-function granularity: only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this?

    Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)
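    Not the bytecode-manipulation approach asked about, but a related zero-overhead trick worth noting: CPython removes any block guarded by if __debug__: at compile time when run with python -O, which gives a blunt, whole-program version of this stripping:

        import logging

        log = logging.getLogger(__name__)

        def hot_loop(items):
            for item in items:
                if __debug__:  # the whole block disappears under `python -O`
                    log.debug("processing %r", item)
                # ... the actual inner-loop work ...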

    Read the article

  • Should not a tail-recursive function also be faster?

    - by Balint Erdi
    I have the following Clojure code to calculate a number with a certain "factorable" property (what exactly the code does is secondary):

        (defn factor-9
          ([]
            (let [digits (take 9 (iterate #(inc %) 1))
                  nums (map (fn [x] ,(Integer. (apply str x))) (permutations digits))]
              (some (fn [x] (and (factor-9 x) x)) nums)))
          ([n]
            (or (= 1 (count (str n)))
                (and (divisible-by-length n) (factor-9 (quot n 10))))))

    Now, I'm into TCO and realize that Clojure can only provide tail recursion if explicitly told so using the recur keyword. So I've rewritten the code to do that (replacing factor-9 with recur being the only difference):

        (defn factor-9
          ([]
            (let [digits (take 9 (iterate #(inc %) 1))
                  nums (map (fn [x] ,(Integer. (apply str x))) (permutations digits))]
              (some (fn [x] (and (factor-9 x) x)) nums)))
          ([n]
            (or (= 1 (count (str n)))
                (and (divisible-by-length n) (recur (quot n 10))))))

    To my knowledge, TCO has a double benefit. The first one is that it does not use the stack as heavily as a non-tail-recursive call and thus does not blow it on larger recursions. The second, I think, is that consequently it's faster, since the call can be converted to a loop. Now, I've made a very rough benchmark and have not seen any difference between the two implementations. Am I wrong in my second assumption, or does this have something to do with running on the JVM (which does not have automatic TCO) and recur using a trick to achieve it? Thank you.
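    A hedged micro-example of what recur buys on the JVM (the depth at which the plain version dies depends on the JVM stack size):

        ;; Plain self-call: each step pushes a stack frame.
        (defn count-down [n]
          (if (zero? n) :done (count-down (dec n))))

        ;; recur: compiled to a loop, constant stack space.
        (defn count-down-tco [n]
          (if (zero? n) :done (recur (dec n))))

        (count-down-tco 10000000)   ;; => :done
        ;; (count-down 10000000)    ;; => StackOverflowError

    Note that the speed question is separate from the stack question: recur mainly removes the frame growth, while the per-iteration cost is often similar to a direct call, which would be consistent with a benchmark showing no difference.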

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax,1000h
        jne 0064F5EC
        or edx,1000000h
        ...

    (repeated 13 times with minor differences in the constants). I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all bits at once? Maybe I can use multiplication (multiply by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other idea how to make it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
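    For comparison, a branch-free C sketch using the standard shift-and-mask bit-interleaving sequence (the constants spread 16 bits into 32, which covers the 13-bit case; bit i of the input lands at bit 2*i of the output):

        #include <stdint.h>
        #include <stdio.h>

        /* Spread the low 16 bits of x so that bit i moves to bit 2*i,
           with zeroes in the odd positions. */
        static uint32_t spread_bits(uint32_t x)
        {
            x &= 0x0000FFFF;
            x = (x | (x << 8)) & 0x00FF00FF;
            x = (x | (x << 4)) & 0x0F0F0F0F;
            x = (x | (x << 2)) & 0x33333333;
            x = (x | (x << 1)) & 0x55555555;
            return x;
        }

        int main(void)
        {
            uint32_t input = 0x1FFF;  /* all 13 bits set */
            printf("%08X\n", (unsigned)spread_bits(input)); /* 01555555 */
            return 0;
        }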

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely: what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line). This function is a central one to my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want to have the LEAST overhead possible, hence why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize other pros/cons (type independence and resulting errors from that) of macros... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!
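    A hedged sketch of the two forms under comparison, using names from the snippet (the function and macro names are placeholders). With optimization enabled, both typically compile to the same code; the differences are in type checking and argument re-evaluation, not speed:

        #include <math.h>

        /* Function form: with -O2/-O3 this is normally inlined. */
        static inline double lj_energy(double sigma_sq, double r_sq, double epsilon)
        {
            double attractive = pow(sigma_sq / r_sq, 3);
            double repulsive  = attractive * attractive;
            return 4.0 * epsilon * (repulsive - attractive);
        }

        /* Macro form: pure textual substitution. Every argument must be
           parenthesized, and arguments with side effects would be evaluated
           more than once; the do/while wrapper gives it statement semantics. */
        #define LJ_ENERGY(sigma_sq, r_sq, epsilon, out)                  \
            do {                                                         \
                double attractive_ = pow((sigma_sq) / (r_sq), 3);        \
                (out) += 4.0 * (epsilon)                                 \
                       * (attractive_ * attractive_ - attractive_);      \
            } while (0)

    Usage would then be either EnergyContribution += lj_energy(SigmaSquared, RadialDistanceSquared, Epsilon); or LJ_ENERGY(SigmaSquared, RadialDistanceSquared, Epsilon, EnergyContribution);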

    Read the article

  • Optimal (Time paradigm) solution to check variable within boundary

    - by kumar_m_kiran
    Hi All, sorry if the question is very naive. I will have to check the below condition in my code:

        0 < x < y

    i.e. code similar to:

        if (x > 0 && x < y)

    The basic problem at the system level is that, currently, for every call (telecom domain terminology) my existing code is hit (many times), so performance is very, very critical. Now I need to add a check for boundary conditions at many locations, with a different boundary comparison at each location. At a normal level of coding, the above comparison looks trivial, but when added to my statistics module (which is dipped into many times), performance will go down. So I would like to know the best possible way to handle the above scenario (a kind of optimal technique for limit checking). For example, does bit comparison work better than normal comparison, or can both comparisons be evaluated in a shorter time span?

    Other info: x is an unsigned integer (which must be checked to be greater than 0 and less than y). y is an unsigned integer; y is non-const and varies for every comparison. Here time is the constraint, compared to space. Language: C++. Now, if I later need to change the attribute of y to float/double, would there be another way to optimize the check (i.e. will the suggested optimal technique for integers become non-optimal when y is changed to float/double)? Thanks in advance for any input. PS: OS used is SUSE 10 64-bit x86_64, AIX 5.3 64-bit, HP-UX 11.1 A 64.
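    One classic candidate, sketched under the stated constraints (x and y unsigned): the two comparisons can fold into one subtract-and-compare because x == 0 wraps around. This only holds when y >= 1, it does not carry over to float/double (where the plain two-comparison form is usually as good as it gets), and modern compilers often perform this rewrite themselves, so measure before committing:

        // (0 < x && x < y)  ==  ((x - 1) < (y - 1))  for unsigned x, y with y >= 1:
        // x == 0 wraps to UINT_MAX and fails the test. If y can be 0, keep the
        // plain form, since y - 1 would wrap too and accept almost everything.
        inline bool in_open_range(unsigned int x, unsigned int y)
        {
            return (x - 1) < (y - 1);
        }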

    Read the article

  • [PHP] Does unsetting array values during iterating save on memory?

    - by saturn_rising
    Hello fellow code warriors, this is a simple programming question, coming from my lack of knowledge of how PHP handles array copying and unsetting during a foreach loop. It's like this: I have an array that comes to me from an outside source, formatted in a way I want to change. A simple example would be:

        $myData = array('Key1' => array('value1', 'value2'));

    But what I want would be something like:

        $myData = array([0] => array('MyKey' => array('Key1' => array('value1', 'value2'))));

    So I take the first $myData and format it like the second $myData. I'm totally fine with my formatting algorithm. My question lies in finding a way to conserve memory, since these arrays might get a little unwieldy. So, during my foreach loop I copy the current array value(s) into the new format, then I unset the value I'm working with from the original array. E.g.:

        $formattedData = array();
        foreach ($myData as $key => $val) {
            // do some formatting here, copy to $reformattedVal
            $formattedData[] = $reformattedVal;
            unset($myData[$key]);
        }

    Is the call to unset() a good idea here? I.e., does it conserve memory, since I have copied the data and no longer need the original value? Or does PHP automatically garbage-collect the data, since I don't reference it in any subsequent code? The code runs fine, and so far my datasets have been too negligible in size to test for performance differences. I just don't know if I'm setting myself up for some weird bugs or CPU hits later on. Thanks for any insights. -sR
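    A hedged way to answer this empirically: probe memory_get_usage() before and after the loop to see whether the unset() calls release anything. (Worth knowing: PHP arrays are reference-counted with copy-on-write, so $val shares storage with the original until one side is modified.)

        <?php
        // Build a throwaway dataset large enough to measure.
        $myData = array();
        for ($i = 0; $i < 100000; $i++) {
            $myData['Key' . $i] = array('value' . $i, 'value' . $i);
        }

        $before = memory_get_usage();
        $formattedData = array();
        foreach ($myData as $key => $val) {
            $formattedData[] = array('MyKey' => array($key => $val)); // reformat step
            unset($myData[$key]); // drop the original entry's refcount
        }
        echo 'Delta: ', memory_get_usage() - $before, " bytes\n";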

    Read the article

  • Can google “see” this custom javascript code which displays links from an external site to mine

    - by webmasters
    I have a JavaScript code on my site which displays links from another site. This is what I have in my source before the page loads:

        <script language="JavaScript" type="text/javascript">showLink(1);</script>

    This is what I copied from my source after the page has loaded:

        <script language="JavaScript" type="text/javascript">showLink(1);</script><a rel="nofollow" target="_blank" class="anc" href="http://x5.external_site.net/sc/out.php?s=5483&amp;o=http%3A%2F%2Fwww.bluetooth.com">Bluetooth Devices</a>

    Can Google see this link?

    Read the article

  • Upgrade Kubuntu 11.10 to 12.10 I get error code that 11.10 is outdated

    - by Bobby Dedman
    I installed Kubuntu 11.10 on top of Windows 7, and everything works fine. When I try to update 11.10 to 12.10, I get an error saying that 11.10 is outdated and not supported, and the update stops. I downloaded the 12.10 ISO, but there is no upgrade option, just install. How can I remove the 11.10 installation and install 12.10, without breaking my Windows 7 install? I need the dual boot option. Thanks for the support. Bobby D.

    Read the article

  • How is the Linux repository administrated?

    - by David
    I am amazed by the Linux project and I would like to learn how they administer the code, given the huge number of developers. I found the Linux repository on GitHub, but I do not understand how it is administered. For example, take the following commit: https://github.com/torvalds/linux/commit/31fd84b95eb211d5db460a1dda85e004800a7b52 — one person authored it and Torvalds committed it. How is this possible? I thought it was only possible to have either pull or push rights, but here there seems to be an approval stage. I should mention that the specific problem I am trying to solve is that we use pull requests to our repo. The problem we are facing is that while a pull request is waiting to get merged, it is often broken by a commit. This leads to seemingly never-ending work to adapt the fork in order to make the pull request merge smoothly. Does Linux solve this by giving lots of people push rights? (At least, there are currently just three pull requests but hundreds of commits per day.)
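    As a hedged aside on the mechanism itself: git stores an author and a committer separately on every commit, which is exactly the split visible on that GitHub page. A maintainer applying someone else's patch preserves the original authorship (the mailbox filename below is a placeholder):

        git am patch-from-contributor.mbox
        git log --format='%an (author) / %cn (committer)'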

    Read the article

  • Umbraco Code Garden 2010 - Ticket Auction for Charity

    - by Vizioz Limited
    Hi All,When Code Garden 2010 was first announced I bought two early bird tickets for the conference as at the time I had hoped to offer the ticket to one of my developers, but unfortunately both of them are unable to make the conference so I am left with a spare ticket.Some people would try to sell the ticket to get the money back, but I thought I'd prefer to put the ticket up for auction and donate all the money to a charity called Able Kidz who help children with disabilities by providing them special computers and software.If you would like to bid for the ticket please look at the auction here:Umbraco Codegarden 2010 TicketHappy bidding and hopefully see the winner at Codegarden!

    Read the article

  • Coding standards

    - by Piotr Rodak
    This post will be about coding standards. There are countless articles and blog posts related to this topic, so I know this post will not be too revealing. Yet I would like to mention a few things I came across during my work with the T-SQL code. Naming convention - there are many of them obviously. Too bad if all of them are used in the same database, and sometimes even in the same stored procedure. It is not uncommon to see something like create procedure dbo . Proc1 ( @ParamId int ) as begin declare...(read more)

    Read the article

  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an ID property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side ID value into a client-side id attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side ID property and the generated client-side id, but in reality things aren't so simple. By default, the rendered client-side id is formed by taking the Web control's ID property and prefixing it with the ID
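    The feature this article covers is ASP.NET 4.0's ClientIDMode property; a minimal markup sketch (the control ID is a placeholder):

        <%-- Renders id="SearchButton" verbatim instead of an auto-generated
             ctl00$...-style naming-container chain: --%>
        <asp:Button ID="SearchButton" runat="server" ClientIDMode="Static" Text="Search" />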

    Read the article

  • Open Source Survey: Oracle Products on Top

    - by trond-arne.undheim
    Oracle continues to work with the open source community to bring the most innovative and productive software to market (more). Oracle products received the most votes in several key categories of the 2010 Linux Journal Reader's Choice Awards. With over 12,000 technologists reporting, these products earned top spots:

    Best Office Suite: OpenOffice.org
    Best Single Office Program: OpenOffice.org Writer
    Best Database: MySQL
    Best Virtualization Solution: VirtualBox

    "As the leading open source technology and service provider, Oracle continues to work with the community stakeholders to rapidly innovate many open source products for use in fully tested production environments," says Edward Screven, Oracle's chief corporate architect. "Supporting open source is important to Oracle and our customers, and we continue to invest in it." According to a recent report by the Linux Foundation, Oracle is one of the top ten contributors to the Linux kernel. Oracle also contributes millions of lines of code to these important projects:

    OpenJDK: 7,002,579
    Eclipse: 1,800,000 (#3 in active committers)
    MySQL: 5,073,113
    NetBeans: 7,870,446
    JSF: 701,980
    Apache MyFaces Trinidad: 1,316,840
    Hudson: 1,209,779
    OpenOffice.org: 7,500,000

    Read the article

  • Quick example: why coding standards must be in place

    - by DigiMortal
    One quick example why coding standards must be in place. Take a look at the following code; the property names are changed but not anything else.

        public string Property1 { get; set; }

        public string Property2 {
            get;
            set;
        }

        public string Property3 {
            get; set;
        }

    And yes, it is a real-world example.

    Read the article

  • .NETTER Code Starter Pack v1.0.beta Released

    - by Mohammad Ashraful Alam
    .NETTER Code Starter Pack contains a gallery of Visual Studio 2010 solutions leveraging the latest and newest technologies released by Microsoft. Each Visual Studio solution included here is focused on providing a very simple starting point for cutting-edge development technologies and frameworks, using the well-known Northwind database. The current release of this project includes starter samples for the following technologies:

    ASP.NET Dynamic Data QuickStart (TBD)
    Azure Service Platform: Windows Azure Hello World; Windows Azure Storage Simple CRUD
    Database Scripts
    Entity Framework 4.0 (TBD)
    SharePoint 2010 Visual Web Part
    Linq QuickStart
    Silverlight: Business App Hello World; WCF RIA Services QuickStart
    Utility Framework: MEF; Moq QuickStart; T-4 QuickStart; Unity QuickStart
    WCF: WCF Data Services QuickStart; WCF Hello World
    WorkFlow Foundation: Web API
    Facebook Toolkit QuickStart

    Download link: http://codebox.codeplex.com/releases/view/57382

    Technorati Tags: release, new release, asp.net, mef, unity, sharepoint, wcf

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit, a pure unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine": when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be: your custom control running inside Silverlight...

    Read the article
