Search Results

Search found 28459 results on 1139 pages for 'task base programming'.

Page 305/1139 | < Previous Page | 301 302 303 304 305 306 307 308 309 310 311 312  | Next Page >

  • c# multi-dimensional (array, ArrayList, or Hashtable)?

    - by Data-Base
    Hello, I'm trying to figure out how to build a multi-dimensional "array" that has a flexible size and uses two keys: the first key is an int (flexible), the second key is a string (a fairly limited set). The usage would look like: Console.WriteLine(array[0]["firstname"]); Console.WriteLine(array[0]["lastname"]); Console.WriteLine(array[0]["phone"]); Console.WriteLine(array[1]["firstname"]); Console.WriteLine(array[1]["lastname"]); Console.WriteLine(array[1]["phone"]); ... Console.WriteLine(array[x]["firstname"]); Console.WriteLine(array[x]["lastname"]); Console.WriteLine(array[x]["phone"]); something like this.
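    One way to get that exact indexing syntax, assuming the outer key is an int and the inner keys are a small set of field names, is a dictionary of dictionaries (the names below are illustrative, not from the question):

    ```csharp
    using System;
    using System.Collections.Generic;

    class Program
    {
        static void Main()
        {
            // Outer key: int index; inner key: field name.
            var people = new Dictionary<int, Dictionary<string, string>>();

            people[0] = new Dictionary<string, string>
            {
                { "firstname", "Ada" },
                { "lastname", "Lovelace" },
                { "phone", "555-0100" }
            };

            Console.WriteLine(people[0]["firstname"]);
            Console.WriteLine(people[0]["lastname"]);
            Console.WriteLine(people[0]["phone"]);
        }
    }
    ```

    If the inner keys are always the same few fields, a List<T> of a small class with FirstName/LastName/Phone properties is usually the more idiomatic choice.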

    Read the article

  • How to refresh a parent page #Rails

    - by sameera
    Hi guys, I have the following requirement. I have a model called Task to display user tasks: 1. a link to add a new task (on the tasks index page); 2. when a user clicks the link, the 'tasks/new' action opens in a popup; 3. when the user saves the new task, I want to close the 'new task' popup and refresh the parent page 'tasks/index' so that the new task is displayed. I guess I will have to execute a page-reload JavaScript at the end of the 'tasks/create' action, but I'm not sure how. Can anyone help me make this happen? Thanks in advance. Cheers, sameera
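    One rough sketch (Rails 2.x era; the controller shape and the assumption that the popup was opened with window.open are mine, not from the question): have 'tasks/create' respond with a tiny page whose only job is to reload the opener and close itself.

    ```ruby
    # app/controllers/tasks_controller.rb -- hypothetical sketch
    class TasksController < ApplicationController
      def create
        @task = Task.new(params[:task])
        if @task.save
          # The popup's response reloads the parent (tasks/index) and closes the popup.
          render :layout => false, :inline => <<-HTML
            <script type="text/javascript">
              window.opener.location.reload();
              window.close();
            </script>
          HTML
        else
          render :action => "new"
        end
      end
    end
    ```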

    Read the article

  • Object Creation via Base Pointer Fails for Privately Inherited Class

    - by mahesh
    Hi, what is the reason the following code does not let me create the object? class base { public: void foo() { cout << "base::foo()"; } }; class derived : private base { public: void foo() { cout << "derived::foo()"; } }; int main() { base *d = new derived(); d->foo(); } It gives me the error: "'type cast' : conversion from 'derived *' to 'base *' exists, but is inaccessible" Thanks in advance :)
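    For what it's worth, a minimal sketch of the same code with public inheritance, which is what makes the derived-to-base conversion accessible again (private inheritance deliberately hides the is-a relationship from code outside the class):

    ```cpp
    #include <iostream>
    using std::cout;

    class base {
    public:
        virtual void foo() { cout << "base::foo()"; }   // virtual so the call dispatches to derived
        virtual ~base() {}
    };

    class derived : public base {   // public inheritance: derived* -> base* is accessible
    public:
        void foo() { cout << "derived::foo()"; }
    };

    int main() {
        base *d = new derived();
        d->foo();   // prints "derived::foo()" because foo() is virtual
        delete d;
        return 0;
    }
    ```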

    Read the article

  • What's the best way to return a subset of a list

    - by Pikrass
    I have a list of tasks. A task is defined by a name, a due date and a duration. My TaskManager class handles a std::list<Task> sorted by due date. It has to provide a way to get the tasks due on a specific date. How would you implement that? I think a good way (from an API point of view) would be to provide a std::list<Task>::iterator pair, so I would have a TaskManager::begin(date) method. Do you think this method should get the iterator by iterating from the start of the list until it finds the first task due on that date, or by getting it from a std::map<date, std::list<Task>::iterator> (but then we have to keep it up to date when adding or removing tasks)? And then, how could I implement the TaskManager::end(date) method?
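    Since the list is kept sorted by due date, one simple sketch (assuming Task exposes its due date, here as a public member called due; the types below are stand-ins) is to implement begin(date)/end(date) with linear scans; the map-of-iterators cache is only worth the bookkeeping if profiling shows the scan is too slow:

    ```cpp
    #include <algorithm>
    #include <list>
    #include <string>

    // Illustrative stand-ins; the real Task/date types come from the project.
    using Date = std::string;
    struct Task { std::string name; Date due; int duration; };

    class TaskManager {
    public:
        using iterator = std::list<Task>::iterator;

        // First task due on 'date' (or end of list if none); the list is sorted by due date.
        iterator begin(const Date& date) {
            return std::find_if(tasks_.begin(), tasks_.end(),
                                [&](const Task& t) { return t.due == date; });
        }

        // One past the last task due on 'date': start from begin(date) and
        // advance while the due date still matches.
        iterator end(const Date& date) {
            iterator it = begin(date);
            while (it != tasks_.end() && it->due == date) ++it;
            return it;
        }

    private:
        std::list<Task> tasks_;   // kept sorted by due date
    };
    ```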

    Read the article

  • LINQ to SQL - database relationships won't update after submit

    - by Quantic Programming
    I have a database with the tables Users and Uploads. The important columns are: Users -> UserID; Uploads -> UploadID, UserID. The primary key in the relationship is Users -> UserID and the foreign key is Uploads -> UserID. In LINQ to SQL, I do the following to add a file: var upload = new Upload(); upload.UserID = user.UserID; upload.UploadID = XXX; db.Uploads.InsertOnSubmit(upload); db.SubmitChanges(); If I do that, rerun the application (so the db object is re-built, of course) and then do something like foreach(var upload in user.Uploads), I get all the uploads with that user's ID (like the one added in the previous example). The problem is that my application, after adding an upload and submitting changes, doesn't update the user.Uploads collection, i.e. I don't get the newly added uploads. The user object is stored in the Session object. At first, I thought that LINQ to SQL doesn't update the reference of the object, so I should simply "reset" the user object with a new SQL request, like this: Session["user"] = db.Users.Where(u => u.UserID == user.UserID).SingleOrDefault(); (where user is the previous user). But it didn't help. Please note: after rerunning the application, user.Uploads does have the new upload! Did anyone experience this type of problem, or is it normal behavior? I am a newbie to this framework. I would gladly take any advice. Thank you!
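    A sketch of the usual workaround (assuming the generated association property is called Uploads, as in the question): add the new entity to the in-memory association as well, so the parent object cached in Session sees it without a round trip.

    ```csharp
    // Hypothetical sketch: entity and property names follow the question.
    var upload = new Upload();
    upload.UploadID = GetNextUploadId();   // placeholder for however the ID is produced

    // Wiring the association updates both sides in memory:
    // upload.UserID is set for you and user.Uploads now contains the new row.
    user.Uploads.Add(upload);

    db.Uploads.InsertOnSubmit(upload);
    db.SubmitChanges();
    ```

    Alternatively, calling db.Refresh(RefreshMode.OverwriteCurrentValues, user) after SubmitChanges may help reload the cached object, but adding to the association is usually enough.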

    Read the article

  • Ant: how do I disable all non-error messages?

    - by java.is.for.desktop
    Hello, everyone! When running Ant from the command line on my NetBeans projects, I get the following messages hundreds of times, which is very annoying: Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/3:javac Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/3:depend Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/1:nbjpdastart Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/3:debug Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/1:java Depending on the kind of project, there can be many more such lines. And this is with the -q or -quiet option. Any idea how to disable these messages? Thank you!
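    There doesn't seem to be a switch just for these warnings, but one possible workaround (a sketch, assuming Ant's standard listener API; the class name is made up) is a custom logger that drops them and is passed to Ant with -logger:

    ```java
    // QuietLogger.java -- hypothetical sketch of a filtering Ant logger.
    // Compile against ant.jar and run with: ant -logger QuietLogger -q <target>
    import org.apache.tools.ant.BuildEvent;
    import org.apache.tools.ant.DefaultLogger;

    public class QuietLogger extends DefaultLogger {
        @Override
        public void messageLogged(BuildEvent event) {
            String msg = event.getMessage();
            // Swallow the noisy "Trying to override old definition of task ..." lines.
            if (msg != null && msg.startsWith("Trying to override old definition")) {
                return;
            }
            super.messageLogged(event);
        }
    }
    ```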

    Read the article

  • OpenOffice document (odt) to PDF from the command line on Linux?

    - by Data-Base
    Hi, we are building a PHP script that we need at work to create reports as PDFs. The reports will be created from templates, with data from PostgreSQL. So far I have found that it can be done using PHP and odt (OpenOffice) files [http://www.odtphp.com/] (do you have any other suggestions?). Now, how can I convert the results to PDF so teachers will get the final reports as PDFs? Any tips? The server has no GUI and I want to keep it as simple as possible. We tried generating PDFs directly from PHP with FPDF [http://www.fpdf.org/] but it is really a CPU killer! Cheers
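    One approach worth sketching (an assumption: it requires a headless-capable OpenOffice/LibreOffice build on the server that supports --convert-to; older builds need a tool like unoconv or a conversion macro instead):

    ```php
    <?php
    // Hypothetical sketch: convert a generated .odt report to PDF from PHP.
    $odt    = escapeshellarg('/var/reports/report-123.odt');
    $outdir = escapeshellarg('/var/reports/pdf');

    // --headless runs without any GUI; --convert-to pdf writes report-123.pdf into $outdir.
    $cmd = "soffice --headless --convert-to pdf --outdir $outdir $odt 2>&1";
    echo shell_exec($cmd);
    ?>
    ```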

    Read the article

  • How to make PHP list all Linux users?

    - by Data-Base
    Hello, I want to build a PHP-based site that automates some commands on my Ubuntu server. The first thing I did was add the user www-data to the sudoers file so PHP can execute commands with root privileges: # running the web apps with root power!!! www-data ALL=(ALL) NOPASSWD: ALL Then my PHP code was: <?php $command = "cat /etc/passwd | cut -d\":\" -f1"; echo 'running the command: <b>'.$command."</b><br />"; echo exec($command); ?> It returns only one user (the last user)!!! How do I make it return all users? Thank you
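    The catch is that exec() only returns the last line of the command's output. A sketch of two ways to get the whole list (no sudo is needed just to read /etc/passwd):

    ```php
    <?php
    $command = 'cut -d":" -f1 /etc/passwd';

    // Option 1: exec() fills $output with one array element per output line.
    exec($command, $output);
    echo implode('<br />', $output);

    // Option 2: shell_exec() returns the whole output as a single string.
    echo nl2br(shell_exec($command));
    ?>
    ```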

    Read the article

  • Check whether the user's machine can reach the Domain Controller

    - by Data-Base
    Hello, when we join a machine to AD, users can log into the machine with their AD user name and password. At home they can do that too (when they are not on the work network), which is fine and good. We need a program to auto-start when the user logs in, BUT only when they are at work, on our network. How can I achieve that? (I can build a checking program in C#, but I'm not sure where to start!) Cheers
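    A minimal sketch of one way to do the check in C# (an assumption on my part, not the only option): try to contact the machine's AD domain at startup; if the domain controller cannot be reached, the exception tells you the machine is off the work network.

    ```csharp
    using System;
    using System.DirectoryServices.ActiveDirectory;

    static class WorkNetworkCheck
    {
        // Returns true if the computer's AD domain can currently be contacted.
        public static bool IsOnWorkNetwork()
        {
            try
            {
                // Throws when the machine is not joined or the domain controller is unreachable.
                Domain.GetComputerDomain();
                return true;
            }
            catch (ActiveDirectoryObjectNotFoundException) { return false; }
            catch (ActiveDirectoryOperationException) { return false; }
        }
    }
    ```

    The auto-start program can call this once at login and exit immediately when it returns false.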

    Read the article

  • c# run process without freezing my App's GUI

    - by Data-Base
    Hello, I want to start a process (calling another program). The external program takes time to finish (that is normal), but it freezes my GUI while it runs. I have seen a lot of examples and I'm learning, but it is hard to figure out; I'm trying to read up on threading, but it is not that easy (at least for me). Any good, simple tutorial or code sample? Cheers
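    A sketch of the simplest non-blocking pattern for this (WinForms assumed; the file name is a placeholder): let the Process raise an Exited event instead of waiting for it, so the UI thread never blocks.

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Windows.Forms;

    public partial class MainForm : Form
    {
        private void runButton_Click(object sender, EventArgs e)
        {
            var process = new Process();
            process.StartInfo.FileName = "externalTool.exe";   // placeholder path
            process.EnableRaisingEvents = true;

            // Fires on a thread-pool thread when the external program finishes.
            process.Exited += (s, args) =>
            {
                // Marshal back to the UI thread before touching controls.
                this.BeginInvoke((Action)(() => MessageBox.Show("Done!")));
                process.Dispose();
            };

            process.Start();   // returns immediately; the GUI stays responsive
        }
    }
    ```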

    Read the article

  • Javascript adding two numbers incorrectly

    - by Scott
    Global.alert("base: " + base + ", upfront: " + upfront + ", both: " + (base + upfront)); This code above outputs this: base: 15000, upfront: 36, both: 1500036 Why is it joining the two numbers instead of adding them up? I eventually want to set the value of another field to this amount using this: mainPanel.feesPanel.initialLoanAmount.setValue(Ext.util.Format.number((base + upfront), '$0,000.00')); ...and when I try that now it turns the number into the millions instead of 15,036.00. I have no idea why. Any ideas?
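    The values are strings (most likely read straight from form fields), so + concatenates them; a sketch of the fix is to convert them to numbers before adding:

    ```javascript
    // Convert the string values to numbers before adding them.
    var total = parseFloat(base) + parseFloat(upfront);   // 15000 + 36 = 15036

    Global.alert("base: " + base + ", upfront: " + upfront + ", both: " + total);

    // Same idea for the field assignment:
    mainPanel.feesPanel.initialLoanAmount.setValue(
        Ext.util.Format.number(total, '$0,000.00'));
    ```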

    Read the article

  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are “upgraded” in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So, existing builds will run just fine after the upgrade. But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don’t have the time to rewrite? Then one option is to keep these MSBuild scripts and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is available in the toolbox in the Team Foundation Build Activities section. This activity wraps the call to MSBuild.exe and has a number of parameters. Most of them are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties:
    - CommandLineArguments: use this to send in/override MSBuild properties in your project. Example: "/p:MyProperty=SomeValue", or MSBuildArguments (this lets you define the arguments in the build definition or when queuing the build).
    - LogFile: name of the log file where MSBuild will log the output. Example: "MyBuild.log".
    - LogFileDropLocation: location of the log file. Example: BuildDetail.DropLocation + "\log".
    - Project: the project to execute. Example: SourcesDirectory + "\BuildExtensions.targets".
    - ResponseFile: the name of the MSBuild response file. Example: SourcesDirectory + "\BuildExtensions.rsp".
    - Targets: the target(s) to execute. Example: New String() {"Target1", "Target2"}.
    - Verbosity: logging verbosity. Example: Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal.
    Integrating with Team Build: if your MSBuild scripts try to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task:
    <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" /> <Target Name="MyTarget"> <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Name="MyBuildStep" Message="My build step executed" Status="Succeeded"></BuildStep> </Target> </Project>
    When executing this file using the MSBuild activity, calling the MyTarget target, it will fail with the following message: The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. You can see that the path to the ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFoundationServerUrl and BuildUri properties.
    One solution here is to pass these properties in using the CommandLineArguments parameter, filling them with the corresponding values from the current build. The build log then shows that the build step has in fact been inserted. The problem, as you probably spotted, is that the build step is inserted at the top of the build log instead of next to the MSBuild activity call. This is because we are using a legacy Team Build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that use the UpgradeTemplate: custom build steps show up at the top of the build log.
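    The screenshots from the original post are not reproduced here, but the CommandLineArguments expression would be along these lines (the build detail property names are from memory and should be treated as an assumption to verify against your own build definition):

    ```vb
    ' Hypothetical sketch of the workflow expression for CommandLineArguments.
    String.Format("/p:TeamFoundationServerUrl={0} /p:BuildUri={1}",
                  BuildDetail.BuildServer.TeamProjectCollection.Uri,
                  BuildDetail.Uri)
    ```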

    Read the article

  • Custom ASP.Net MVC 2 ModelMetadataProvider for using custom view model attributes

    - by SeanMcAlinden
    There are a number of ways of implementing a pattern for using custom view model attributes, the following is similar to something I’m using at work which works pretty well. The classes I’m going to create are really simple: 1. Abstract base attribute 2. Custom ModelMetadata provider which will derive from the DataAnnotationsModelMetadataProvider   Base Attribute MetadataAttribute using System; using System.Web.Mvc; namespace Mvc2Templates.Attributes {     /// <summary>     /// Base class for custom MetadataAttributes.     /// </summary>     public abstract class MetadataAttribute : Attribute     {         /// <summary>         /// Method for processing custom attribute data.         /// </summary>         /// <param name="modelMetaData">A ModelMetaData instance.</param>         public abstract void Process(ModelMetadata modelMetaData);     } } As you can see, the class simple has one method – Process. Process accepts the ModelMetaData which will allow any derived custom attributes to set properties on the model meta data and add items to its AdditionalValues collection.   Custom Model Metadata Provider For a quick explanation of the Model Metadata and how it fits in to the MVC 2 framework, it is basically a set of properties that are usually set via attributes placed above properties on a view model, for example the ReadOnly and HiddenInput attributes. When EditorForModel, DisplayForModel or any of the other EditorFor/DisplayFor methods are called, the ModelMetadata information is used to determine how to display the properties. All of the information available within the model metadata is also available through ViewData.ModelMetadata. The following class derives from the DataAnnotationsModelMetadataProvider built into the mvc 2 framework. I’ve overridden the CreateMetadata method in order to process any custom attributes that may have been placed above a property in a view model.   CustomModelMetadataProvider using System; using System.Collections.Generic; using System.Linq; using System.Web.Mvc; using Mvc2Templates.Attributes; namespace Mvc2Templates.Providers {     public class CustomModelMetadataProvider : DataAnnotationsModelMetadataProvider     {         protected override ModelMetadata CreateMetadata(             IEnumerable<Attribute> attributes,             Type containerType,             Func<object> modelAccessor,             Type modelType,             string propertyName)         {             var modelMetadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);               attributes.OfType<MetadataAttribute>().ToList().ForEach(x => x.Process(modelMetadata));               return modelMetadata;         }     } } As you can see, once the model metadata is created through the base method, a check for any attributes deriving from our new abstract base attribute MetadataAttribute is made, the Process method is then called on any existing custom attributes with the model meta data for the property passed in.   Hooking it up The last thing you need to do to hook it up is set the new CustomModelMetadataProvider as the current ModelMetadataProvider, this is done within the Global.asax Application_Start method. 
Global.asax protected void Application_Start()         {             AreaRegistration.RegisterAllAreas();               RegisterRoutes(RouteTable.Routes);               ModelMetadataProviders.Current = new CustomModelMetadataProvider();         }   In my next post, I’m going to demonstrate a cool custom attribute that turns a textbox into an ajax driven AutoComplete text box. Hope this is useful. Kind Regards, Sean McAlinden.
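    As a quick illustration of the pattern (not the AutoComplete attribute mentioned above), here is a small hypothetical attribute built on the MetadataAttribute base class: it pushes a value into AdditionalValues, which a template can later read via ViewData.ModelMetadata.

    ```csharp
    using System.Web.Mvc;

    namespace Mvc2Templates.Attributes
    {
        /// <summary>
        /// Illustrative example: attaches a tooltip string to the model metadata.
        /// </summary>
        public class TooltipAttribute : MetadataAttribute
        {
            private readonly string _tooltip;

            public TooltipAttribute(string tooltip)
            {
                _tooltip = tooltip;
            }

            public override void Process(ModelMetadata modelMetaData)
            {
                // Templates can read this back via ViewData.ModelMetadata.AdditionalValues["Tooltip"].
                modelMetaData.AdditionalValues["Tooltip"] = _tooltip;
            }
        }
    }

    // Usage on a view model property:
    // [Tooltip("Enter your full legal name")]
    // public string Name { get; set; }
    ```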

    Read the article

  • GPGPU

    What
    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn’t have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.
    Why
    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal"). The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, can perform the costly operation and return the output. The kernels are the things that execute on the GPGPU leveraging its power (and hence execute faster than what they could on the CPU) while the host CPU program waits for the results or asynchronously performs other tasks. However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology.
    How
    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs. If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next. On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language).
You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
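    As a flavour of what such a kernel looks like, here is a tiny compute-shader sketch in HLSL (illustrative only; the buffer name and thread-group size are arbitrary choices, not from the post):

    ```hlsl
    // Doubles every element of a buffer on the GPU.
    RWStructuredBuffer<float> data : register(u0);

    [numthreads(64, 1, 1)]
    void CSMain(uint3 id : SV_DispatchThreadID)
    {
        data[id.x] = data[id.x] * 2.0f;
    }
    ```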

    Read the article

  • October 2012 Critical Patch Update and Critical Patch Update for Java SE Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice. Oracle has just released the October 2012 Critical Patch Update and the October 2012 Critical Patch Update for Java SE.  As a reminder, the release of security patches for Java SE continues to be on a different schedule than for other Oracle products due to commitments made to customers prior to the Oracle acquisition of Sun Microsystems.  We do however expect to ultimately bring Java SE in line with the regular Critical Patch Update schedule, thus increasing the frequency of scheduled security releases for Java SE to 4 times a year (as opposed to the current 3 yearly releases).  The schedules for the “normal” Critical Patch Update and the Critical Patch Update for Java SE are posted online on the Critical Patch Updates and Security Alerts page. The October 2012 Critical Patch Update provides a total of 109 new security fixes across a number of product families including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Customer Relationship Management (CRM), Oracle Industry Applications, Oracle FLEXCUBE, Oracle Sun products suite, Oracle Linux and Virtualization, and Oracle MySQL. Out of these 109 new vulnerabilities, 5 affect Oracle Database Server.  The most severe of these Database vulnerabilities has received a CVSS Base Score of 10.0 on Windows platforms and 7.5 on Linux and Unix platforms.  This vulnerability (CVE-2012-3137) is related to the “Cryptographic flaws in Oracle Database authentication protocol” disclosed at the Ekoparty Conference.  Because of timing considerations (proximity to the release date of the October 2012 Critical Patch Update) and the need to extensively test the fixes for this vulnerability to ensure compatibility across the products stack, the fixes for this vulnerability were not released through a Security Alert, but instead mitigation instructions were provided prior to the release of the fixes in this Critical Patch Update in My Oracle Support Note 1492721.1.  Because of the severity of these vulnerabilities, Oracle recommends that this Critical Patch Update be installed as soon as possible. Another 26 vulnerabilities fixed in this Critical Patch Update affect Oracle Fusion Middleware.  The most severe of these Fusion Middleware vulnerabilities has received a CVSS Base Score of 10.0; it affects Oracle JRockit and is related to Java vulnerabilities fixed in the Critical Patch Update for Java SE.  The Oracle Sun products suite gets 18 new security fixes with this Critical Patch Update.  Note also that Oracle MySQL has received 14 new security fixes; the most severe of these MySQL vulnerabilities has received a CVSS Base Score of 9.0. Today’s Critical Patch Update for Java SE provides 30 new security fixes.  The most severe CVSS Base Score for these Java SE vulnerabilities is 10.0 and this score affects 10 vulnerabilities.  As usual, Oracle reports the most severe CVSS Base Score, and these CVSS 10.0s assume that the user running a Java Applet or Java Web Start application has administrator privileges (as is typical on Windows XP). However, when the user does not run with administrator privileges (as is typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", typically lowering the CVSS Base Score to 7.5 denoting that the compromise does not extend to the underlying Operating System.  
Also, as is typical in the Critical Patch Update for Java SE, most of the vulnerabilities affect Java and Java FX client deployments only.  Only 2 of the Java SE vulnerabilities fixed in this Critical Patch Update affect client and server deployments of Java SE, and only one affects server deployments of JSSE.  This reflects the fact that Java running on servers operate in a more secure and controlled environment.  As discussed during a number of sessions at JavaOne, Oracle is considering security enhancements for Java in desktop and browser environments.  Finally, note that the Critical Patch Update for Java SE is cumulative, in other words it includes all previously released security fixes, including the fix provided through Security Alert CVE-2012-4681, which was released on August 30, 2012. For More Information: The October 2012 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuoct2012-1515893.html The October 2012 Critical Patch Update for Java SE advisory is located at http://www.oracle.com/technetwork/topics/security/javacpuoct2012-1515924.html.  An online video about the importance of keeping up with Java releases and the use of the Java auto update is located at http://medianetwork.oracle.com/video/player/1218969104001 More information about Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html  

    Read the article

  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see just what is going on under the hood of your browser. How Much Memory are the Extensions Using? Here is our test browser with a new tab and the Extensions Page open, five enabled extensions, and one disabled at the moment. You can access Chrome’s Task Manager using the Page Menu, going to Developer, and selecting Task manager… Or by right clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the “keyboard ninjas”. Sitting idle as shown above here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions Page. Not so good… If the default layout is not to your liking then you can easily modify the information that is available by right clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory. Using the about:memory Page to View Memory Usage Want even more detail? Type about:memory into the Address Bar and press Enter. Note: You can also access this page by clicking on the Stats for nerds Link in the lower left corner of the Task Manager Window. Focusing on the four distinct areas you can see the exact version of Chrome that is currently installed on your system… View the Memory & Virtual Memory statistics for Chrome… Note: If you have other browsers running at the same time you can view statistics for them here too. See a list of the Processes currently running… And the Memory & Virtual Memory statistics for those processes. The Difference with the Extensions Disabled Just for fun we decided to disable all of the extension in our test browser… The Task Manager Window is looking rather empty now but the memory consumption has definitely seen an improvement. Comparing Memory Usage for Two Extensions with Similar Functions For our next step we decided to compare the memory usage for two extensions with similar functionality. This can be helpful if you are wanting to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial”(see our review here). The stats for Speed Dial…quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly both were nearly identical in the amount of memory being used. Purging Memory Perhaps you like the idea of being able to “purge” some of that excess memory consumption. With a simple command switch modification to Chrome’s shortcut(s) you can add a Purge Memory Button to the Task Manager Window as shown below.  Notice the amount of memory being consumed at the moment… Note: The tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption. Conclusion We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources every little bit definitely helps out. 

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage: Frequent local commits This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks.  It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points.  A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally.  There should be several of these in any given day.  2 hours is a good indicator that you might not be leveraging the power of frequent local commits.  Once you have verified a set of changes works, save them away, otherwise run the risk of introducing bugs into it when working on the next task.  The notion of a task By task I mean a related set of changes that can be completed in a few hours or less.  In the same token don’t make your tasks so small that critically related changes aren’t grouped together.  Use your intuition and the rest of these principles and I think you will find what is comfortable for you. Partial commits Sometimes one task explodes or unknowingly encompasses other tasks, at this point, try to get to a stopping point on part of the work you are doing and commit it so you can get that out of the way to focus on the remainder.  This will often entail committing part of the work and continuing on the rest. Outstanding changes as a guide If you don't commit often it might mean you are not leveraging your version control history to help guide your work.  It's a great way to see what has changed and might be causing problems.  The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed.  This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times. Throw away / stash commits There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand.  If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors.  I find myself doing this about once a week, especially when doing exploratory re-factoring.  It's much easier if I can just revert all outstanding changes. Sync with the central repository daily The rest of us depend on your changes.  Don't let them sit on your computer longer than they have to.  Waiting increases the chances of merge conflict which just decreases productivity.  It also prohibits us from doing deploys when people say they are done but have not merged centrally.  This should be done daily!  Find a way to partition the work you are doing so that you can sync at least once daily. Things I discourage: Lots of partial commits right at the end of a series of changes If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit.  Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever growing list of changes. 
Committing single files Committing single files means you waited too long and no longer understand all the changes involved.  It may mean there were overlapping changes in single files that cannot be isolated.  In either case, go back to the suggestions above to avoid this.  Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using Codeigniter framework for PHP on Windows platform. My problem is I am trying to track progress of a controller method running in background. The controller extracts data from the database(MySQL) then does some processing and then stores the results again in the database. The complete aforesaid process can be considered as a single task. A new task can be assigned while another task is running. The newly assigned task will be added in a queue. So if I can track progress of the controller, I can show status for each of these tasks. Like I can show "Pending" status for tasks in the queue, "In Progress" for tasks running and "Done" for tasks that are completed. Main Issue: Now first thing I need to find is an algorithm to track the progress of how much amount of execution the controller method has completed and that means tracking how much amount of method has completed execution. For instance, this PHP script tracks progress of array being counted. Here the current state and state after total execution are known so it is possible to track its progress. But I am not able to devise anything analogous to it in my case. Maybe what I am trying to achieve is programmtically not possible. If its not possible then suggest me a workaround or a completely new approach. If some details are pending you can mention them. Sorry for my ignorance this is my first post here. I welcome you to point out my mistakes. EDIT: Database outline: The URL(s) and keyword(s) are first entered by user which are stored in a database table called link_master and keyword_master respectively. Then keywords are extracted from all the links present in this table and compared with keywords entered by user and their frequency is calculated which is the final result. And the results are stored in another table called link_result. Now sub-links are extracted from the domain links and stored in a table called sub_link_master. Now again the keywords are extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand as the number of links on any web page can be different. Only the cardinality of *link_result* table can be known which will be equal to multiplication of number of keyword(s) and URL(s) . I insert multiple records at a time using this resource. Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content. It has callback function which I used for extracting keywords alongwith generating results and extracting valid sub-links. There is a insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records. The more the number of records, the more time it takes to execute: Consider this scenario: Number of Domain Links = 1 Number of Keywords = 3 Number of Domain Links Result generated = 3 (3 x 1 as described in the question) Number of Sub Links generated = 41 Number of Sub Links Result = 117 (41 x 3 = 123 but some links are not valid or searchable) Approximate time taken for above process to complete = 55 seconds. The above result is for a single link. I want to track the progress of the above results getting stored in database. When all results are stored, the task is complete. If results are getting stored, the task is In Progress. 
    I am not clear on how I can track this progress.
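    Since PHP cannot easily peek inside a running controller method, the usual workaround (sketched below; the table and column names are made up for illustration) is to have the long-running task write its own progress into a small status table as it processes links, and have the page poll that table via AJAX to show Pending / In Progress / Done.

    ```php
    <?php
    // Hypothetical CodeIgniter-flavoured sketch: the crawl/processing code calls
    // update_progress() after each link's results are stored; the UI polls get_progress().

    function update_progress($task_id, $links_done, $links_total)
    {
        $CI =& get_instance();
        $status = ($links_done >= $links_total) ? 'Done' : 'In Progress';
        $CI->db->where('id', $task_id);
        $CI->db->update('task_status', array(
            'links_done'  => $links_done,
            'links_total' => $links_total,
            'status'      => $status,
        ));
    }

    // Polled by the UI (e.g. every few seconds via AJAX) to render the task list.
    function get_progress($task_id)
    {
        $CI =& get_instance();
        return $CI->db->get_where('task_status', array('id' => $task_id))->row();
    }
    ?>
    ```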

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Problem with ar_mailer

    - by Roberto
    Hi guys, I'm running into a strange problem sending emails. Basically I'm getting this exception: ArgumentError (wrong number of arguments (1 for 0)): /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/base.rb:642:in `initialize' /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/base.rb:642:in `new' /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/base.rb:642:in `create' /usr/lib/ruby/gems/1.8/gems/ar_mailer-1.3.1/lib/action_mailer/ar_mailer.rb:92:in `perform_delivery_activerecord' /usr/lib/ruby/gems/1.8/gems/ar_mailer-1.3.1/lib/action_mailer/ar_mailer.rb:91:in `each' /usr/lib/ruby/gems/1.8/gems/ar_mailer-1.3.1/lib/action_mailer/ar_mailer.rb:91:in `perform_delivery_activerecord' /usr/lib/ruby/gems/1.8/gems/actionmailer-2.1.1/lib/action_mailer/base.rb:508:in `__send__' /usr/lib/ruby/gems/1.8/gems/actionmailer-2.1.1/lib/action_mailer/base.rb:508:in `deliver!' /usr/lib/ruby/gems/1.8/gems/actionmailer-2.1.1/lib/action_mailer/base.rb:383:in `method_missing' /app/controllers/web_reservations_controller.rb:29:in `test_email' In my web_reservations_controller I have a simple method calling TestMailer.deliver_send_email, and my TestMailer is something like: class TestMailer < ActionMailer::ARMailer def send_email @recipients = "[email protected]" @from = "[email protected]" @subject = "TEST MAIL SUBJECT" @body = "<br>TEST MAIL MESSAGE" @content_type = "text/html" end end Do you have any idea? Thanks! Roberto

    Read the article

  • Problems running rails server

    - by harristrader
    I just installed the rails env using the Rails installer on my Mac OSX 10.7.4. I create a project using the "rails new" command. When I try to run the "rails server" command I get this message: /usr/local/rvm/gems/ruby-1.9.3-p194/gems/rails-2.3.14/lib/rails_generator/options.rb:32:in `default_options': undefined method `write_inheritable_attribute' for Rails::Generator::Base:Class (NoMethodError) from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/rails-2.3.14/lib/rails_generator/base.rb:90:in `<class:Base>' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/rails-2.3.14/lib/rails_generator/base.rb:85:in `<module:Generator>' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/rails-2.3.14/lib/rails_generator/base.rb:48:in `<module:Rails>' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/rails-2.3.14/lib/rails_generator/base.rb:6:in `<top (required)>' from /usr/local/rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:55:in `require' from /usr/local/rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:55:in `require' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/rails-2.3.14/lib/rails_generator.rb:37:in `<top (required)>' from /usr/local/rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:55:in `require' from /usr/local/rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:55:in `require' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/rails-2.3.14/bin/rails:15:in `<top (required)>' from /usr/local/rvm/gems/ruby-1.9.3-p194/bin/rails:23:in `load' from /usr/local/rvm/gems/ruby-1.9.3-p194/bin/rails:23:in `<main>' When I run the $ ruby -v and $ gem -v, I get "ruby 1.9.3p194" and "1.8.24" respectively. What am I missing here? How can I get this server to run?

    Read the article

  • undefined method `content_type' for nil:NilClass /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2

    - by Y Kamesh Rao
    Strange error in diagnostics.erb file about _set_controller_content_type. Please help. NoMethodError in Timelines#public_timeline Showing /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/templates/rescues/diagnostics.erb where line # raised: undefined method `content_type' for nil:NilClass Extracted source (around line #): RAILS_ROOT: /Volumes/DATA/Source/Rails/tvider Application Trace | Framework Trace | Full Trace /opt/local/lib/ruby1.9/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/whiny_nil.rb:52:in method_missing' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_view/base.rb:331:in_set_controller_content_type' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_view/renderable.rb:32:in block in render' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_view/base.rb:306:inwith_template' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_view/renderable.rb:30:in render' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_view/template.rb:205:inrender_template' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_view/base.rb:265:in render' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:134:inrescue_action_locally' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:152:in rescue_action_without_handler' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:74:inrescue_action' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:162:in rescue in perform_action_with_rescue' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:160:inperform_action_with_rescue' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/flash.rb:146:in perform_action_with_flash' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:inprocess' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/filters.rb:606:in process_with_filters' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/base.rb:391:inprocess' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/base.rb:386:in call' /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib/action_controller/routing/route_set.rb:437:incall' Request Parameters: None Show session dump Response Headers: {"Cache-Control"="no-cache", "Content-Type"=""}

    Read the article
