Search Results

Search found 562 results on 23 pages for 'jinxed coder'.

Page 22 of 23

  • How can arguments to variadic functions be passed by reference in PHP?

    - by outis
    Assuming it's possible, how would one pass arguments by reference to a variadic function without generating a warning in PHP? We can no longer use the '&' operator in a function call, otherwise I'd accept that (even though it would be error-prone, should a coder forget it). What inspired this are old MySQLi wrapper classes that I unearthed (these days, I'd just use PDO). The only difference between the wrappers and the MySQLi classes is the wrappers throw exceptions rather than returning FALSE. class DBException extends RuntimeException {} ... class MySQLi_throwing extends mysqli { ... function prepare($query) { $stmt = parent::prepare($query); if (!$stmt) { throw new DBException($this->error, $this->errno); } return new MySQLi_stmt_throwing($this, $query, $stmt); } } // I don't remember why I switched from extension to composition, but // it shouldn't matter for this question. class MySQLi_stmt_throwing /* extends MySQLi_stmt */ { protected $_link, $_query, $_delegate; public function __construct($link, $query, $prepared) { //parent::__construct($link, $query); $this->_link = $link; $this->_query = $query; $this->_delegate = $prepared; } function bind_param($name, &$var) { return $this->_delegate->bind_param($name, $var); } function __call($name, $args) { //$rslt = call_user_func_array(array($this, 'parent::' . $name), $args); $rslt = call_user_func_array(array($this->_delegate, $name), $args); if (False === $rslt) { throw new DBException($this->_link->error, $this->errno); } return $rslt; } } The difficulty lies in calling methods such as bind_result on the wrapper. Constant-arity functions (e.g. bind_param) can be explicitly defined, allowing for pass-by-reference. bind_result, however, needs all arguments to be pass-by-reference. If you call bind_result on an instance of MySQLi_stmt_throwing as-is, the arguments are passed by value and the binding won't take. try { $id = Null; $stmt = $db->prepare('SELECT id FROM tbl WHERE ...'); $stmt->execute(); $stmt->bind_result($id); // $id is still null at this point ... } catch (DBException $exc) { ... } Since the above classes are no longer in use, this question is merely a matter of curiosity. Alternate approaches to the wrapper classes are not relevant. Defining a method with a bunch of arguments taking Null default values is not correct (what if you define 20 arguments, but the function is called with 21?). Answers don't even need to be written in terms of MySQLi_stmt_throwing; it exists simply to provide a concrete example.
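
    For what it is worth, later PHP versions added a clean way to do exactly this. The sketch below is an assumption-laden illustration: it requires PHP 5.6+ (which postdates the question), the class name is invented, and it reuses the delegate idea from the wrapper above. Declaring the variadic parameter by reference lets the wrapper forward the caller's own variables straight through to mysqli_stmt::bind_result() without warnings, so $stmt->bind_result($id) from the try block would populate $id after a fetch().

        // Minimal sketch, assuming PHP >= 5.6; the class name is made up for illustration.
        class MySQLi_stmt_throwing_variadic
        {
            protected $_delegate;

            public function __construct(mysqli_stmt $prepared)
            {
                $this->_delegate = $prepared;
            }

            // '&...$vars' declares every variadic argument by reference,
            // so the caller's own variables are the ones that get bound.
            public function bind_result(&...$vars)
            {
                // Argument unpacking preserves the references because
                // mysqli_stmt::bind_result() also takes its parameters by reference.
                return $this->_delegate->bind_result(...$vars);
            }
        }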

    Read the article

  • Cases of companies taking IP rights of your own personal projects developed outside company time

    - by GSS
    Hi, I have heard of cases where a developer working for a company is also making his own personal projects in his own time, using his own equipment, yet the company he works for tries to claim ownership of the project. I really find this annoying, and bang out of order. It should also be illegal. I am in this position (work for a company and working on my own systems - from small class libraries used to practise what I learn in my exam revision to a large commercial-scale system). While I don't know if the company will try to take ownership, all I know is they say they do not want a conflict of interest. Fair enough, my system is developed in my own time using my own equipment. They also say that work time should be for work only, which it is. Funny thing is that work is so boring, easy and slow that I have plenty of free time, which I wish I could spend on something productive - said system. The problem is, my company does not take hiring technical talent seriously. This is my first job, I am a junior coder (but my status/position doesn't really reflect what I can do), but I am the only developer. Likewise with the guy who controls Windows Server. As the contract does not say anything about taking ownership, I would assume they would. They would try to milk my success (I've made a good impression so I am sure they would). How can this be allowed? Are there any examples of this happening to any fellow Stacker here? It really makes my blood boil. What I find funny is that my company hardly has the expertise and resources to even be able to successfully run a project of my size. What I do at work is an ASP.NET application consisting of five pages, and even then there are flaws in the project. If I told them that they would also have to take responsibility for flaws in the project, then they would think twice! It's exactly because of this I save the best code for myself and at work I write rubbish code full of code smells. The company don't really care about error handling, as long as the business functionality works (i.e. a scheduled email sends, but there is no error handling). They'd think twice when they see the embarrassment and business cost of a YSOD...

    Read the article

  • How to build Lucene / Solr from source code in windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache’s Solr for free text searching a database-driven web site built for Windows platforms using Visual Studio in C#. I am trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache and discussions on the web of people successfully doing this for the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, add the appropriate patches, then build the war file to deploy to my system. I cannot seem to get the source to build and produce the deployment code including jar (and subsequent war) files. My system is: Windows 7 Ultimate for development Visual Studio 2010 for C# / JavaScript development MyEclipse 8.6 / Eclipse 3.5 for the Java build from source Subclipse 1.6.x SVN plugin to get the source from Apache’s SVN Apache Solr 1.4.1 So far I have: Found the right patches for the function I need: https://issues.apache.org/jira/browse/SOLR-236 Specifically I need to patch: field_collapsing_1.1.0.patch https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch and SOLR-236-1_4_1.patch https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch I downloaded the Lucene trunk version from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse from: https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and is in the subfolder solr/) I can apply patches to the solr directory once it has downloaded, but the parent Lucene project doesn’t build the war files or copy the jar or other files into the bin folder (it stays empty). The build process starts, but doesn’t do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching and the same happens. If I copy out the Solr directory into a new project, it runs the build and copies all the related files, tests, etc but fails with 4,500 errors and does not produce the jar files or war file, which I assume is because it can’t find the Lucene trunk files which it depends on. I have two interrelated problems: 1) I can't get the downloaded Lucene trunk to build 2) The jar, war and associated files are not created Can anyone help with what I am missing to build the war file? I have spent 2 days to get this far, as the help online is extremely patchy and I can’t find a walk-through tutorial on building a Java war file from source in a Windows environment. Any help will be much appreciated. Simon
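
    This does not solve the Eclipse build itself, but for comparison, the command-line route on a 1.4.x-era tree is roughly the following. Treat the ant target as an assumption (check the build.xml in your checkout; the merged dev trunk may use different targets) and the local paths as illustrative only.

        # Check out the trunk at the revision mentioned above
        svn checkout -r 958303 https://svn.apache.org/repos/asf/lucene/dev/trunk lucene-trunk
        cd lucene-trunk/solr

        # Apply the field-collapsing patch from JIRA (adjust -p0 if the paths don't line up)
        patch -p0 -i SOLR-236-1_4_1.patch

        # Build; on the 1.4.x release source the 'dist' target produced the Solr war under dist/
        ant clean dist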

    Read the article

  • iPhone reachability checking

    - by Sneakyness
    I've found several examples of code to do what I want (check for reachability), but none of it seems to be exact enough to be of use to me. I can't figure out why this doesn't want to play nice. I have the reachability.h/m in my project, I'm doing #import <SystemConfiguration/SystemConfiguration.h> And I have the framework added. I also have: #import "Reachability.h" at the top of the .m in which I'm trying to use the reachability. Reachability* reachability = [Reachability sharedReachability]; [reachability setHostName:@"http://www.google.com"]; // set your host name here NetworkStatus remoteHostStatus = [reachability remoteHostStatus]; if(remoteHostStatus == NotReachable) {NSLog(@"no");} else if (remoteHostStatus == ReachableViaWiFiNetwork) {NSLog(@"wifi"); } else if (remoteHostStatus == ReachableViaCarrierDataNetwork) {NSLog(@"cell"); } This is giving me all sorts of problems. What am I doing wrong? I'm an alright coder, I just have a hard time when it comes time to figure out what needs to be put where to enable what I want to do, regardless of whether I know what I want to do or not. (So frustrating) Update: This is what's going on. This is in my viewcontroller, which I have the #import <SystemConfiguration/SystemConfiguration.h> and #import "Reachability.h" set up with. This is my least favorite part of programming by far. FWIW, we never ended up implementing this in our code. The two features that required internet access (entering the sweepstakes, and buying the DVD) were not main features. Nothing else required internet access. Instead of adding more code, we just set the background of both internet views to a notice telling the users they must be connected to the internet to use this feature. It was in theme with the rest of the application's interface, and was done well/tastefully. They said nothing about it during the approval process; however, we did get a personal phone call to verify that we were giving away items that actually pertained to the movie. According to their usually vague agreement, you aren't allowed to have sweepstakes otherwise. I would also think this adheres more strictly to their "only use things if you absolutely need them" ideology as well. Here's the iTunes link to the application, EvoScanner.
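
    One thing worth double-checking before anything else: the Reachability 1.x API used above expects a bare host name, not a URL, so the "http://" prefix passed to setHostName: is a plausible culprit. A minimal sketch using the same calls as the snippet above:

        // Same Reachability 1.x calls as above, but with the scheme stripped
        // from the host name ("www.google.com", not "http://www.google.com").
        Reachability *reachability = [Reachability sharedReachability];
        [reachability setHostName:@"www.google.com"];

        NetworkStatus remoteHostStatus = [reachability remoteHostStatus];
        if (remoteHostStatus == NotReachable) {
            NSLog(@"no connection");
        } else if (remoteHostStatus == ReachableViaWiFiNetwork) {
            NSLog(@"wifi");
        } else if (remoteHostStatus == ReachableViaCarrierDataNetwork) {
            NSLog(@"cell");
        }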

    Read the article

  • Parallel.For maintain input list order on output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try to do in a parallel manner to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I'd need someone to slap my hands if I did something very stupid. There's a query that returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.Net WebForms). The original coder decided he would construct the table cell-by-cell (TableCell), add them to rows (TableRow), then each row to the table. So no GridView, allegedly its performance is bad, but the performance was very poor regardless :). The database query returns in no time, the most time is spent on looping through the results and adding table cells etc. I made the following method to maintain the original order of the list: private TableRow[] ComposeRows(List<CardHolder> queryResult) { int queryElementsCount = queryResult.Count(); // array with the query's size var rowArray = new TableRow[queryElementsCount]; Parallel.For(0, queryElementsCount, i => { var row = new TableRow(); var cell = new TableCell(); // various operations, including simple ones such as: cell.Text = queryResult[i].Name; row.Cells.Add(cell); // here I'm adding the current item to it's original index // to maintain order in the output list rowArray[i] = row; }); return rowArray; } So as you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't just simply omit the ordering from the original query to do it after the operations. Also, I also thought it would be a good idea to Dispose() the objects at the end of each loop, because the query can return a huge list and letting cell and row objects pile up in the heap could impact performance.(?) How badly did I do? Does anyone have a better solution in case mine is flawed?
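
    As a point of comparison rather than a correction (indexing into a preallocated array, as above, is a sound way to keep the order), PLINQ can express the same idea more declaratively: AsOrdered() preserves the source ordering while the projection still runs in parallel. A sketch reusing the types from the question:

        // Sketch only; CardHolder, TableRow and TableCell are the same types used above.
        // Assumes: using System.Linq; using System.Collections.Generic; using System.Web.UI.WebControls;
        private TableRow[] ComposeRowsOrdered(List<CardHolder> queryResult)
        {
            return queryResult
                .AsParallel()
                .AsOrdered()              // keep the query's name/date-of-birth ordering
                .Select(holder =>
                {
                    var row = new TableRow();
                    var cell = new TableCell { Text = holder.Name };
                    row.Cells.Add(cell);
                    return row;
                })
                .ToArray();
        }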

    Read the article

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting to use a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen that much that's compelling in terms of managing software releases. Here's our release process: Discover what changes are available for merging. Run a query to find the defects/tickets associated with these changes. Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state in order to be merged with a release branch. Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes. If a change isn't absolutely necessary, it doesn't get merged. Merge available changes, preferably in chronological order. We group changes together if they're associated with the same ticket. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again. Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with git, mercurial and plastic, and as far as I can tell none of them address this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them for juggling multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in mercurial. (You have to use the 'transplant' command). After you cherry-pick a change into a branch it still shows up as an available integration. Cherry-picking breaks the mercurial way of working. DVCS seems to be better suited for feature branches. There's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? And how do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos. I'm torn because the coder in me wants a DVCS for day-to-day work. I really want it. But I fear the day when I have to put on the release manager hat and sort out what needs to be merged and what doesn't. I want to write code, I don't want to be a merge monkey.
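
    For what it is worth, the cherry-pick half of this workflow maps onto git roughly as shown below (branch names and commit hashes are invented). What git does not give you out of the box is an svnmerge-style block list, which is really the gap being described here.

        # See what is on the main line but not yet on the release branch
        git log --oneline release/2.4..master

        # Pull over only the changes whose tickets are closed
        git checkout release/2.4
        git cherry-pick 1a2b3c4 5d6e7f8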

    Read the article

  • How to replicate this button in CSS

    - by jasondavis
    I am trying to create a CSS theme switcher button like below. The top image shows what I have so far and the bottom image shows what I am trying to create. I am not the best at this stuff; I am more of a back-end coder. I could really use some help. I have a live demo of the code here http://dabblet.com/gist/2230656 Just looking at what I have and the goal image, there are some differences. I need to add a gradient The border is not right on mine Radius is a little off Possibly some other stuff? Also, here is the code... it can be changed in any way to improve this; the naming and stuff could be improved, I am sure, but I can use any help I can get. HTML <div class="switch-wrapper"> <div class="switcher left selected"> <span id="left">....</span> </div> <div class="switcher right"> <span id="right">....</span> </div> </div> CSS /* begin button styles */ .switch-wrapper{ width:400px; margin:220px; } .switcher { background:#507190; display: inline-block; max-width: 100%; box-shadow: 1px 1px 1px rgba(0,0,0,.3); position:relative; } #left, #right{ width:17px; height:11px; overflow:hidden; position:absolute; top:50%; left:50%; margin-top:-5px; margin-left:-8px; font: 0/0 a; } #left{ background-image: url(http://www.codedevelopr.com/assets/images/switcher.png); background-position: 0px px; } #right{ background-image: url(http://www.codedevelopr.com/assets/images/switcher.png); background-position: -0px -19px; } .left, .right{ width: 30px; height: 25px; border: 1px solid #3C5D7E; } .left{ border-radius: 6px 0px 0px 6px; } .right{ border-radius: 0 6px 6px 0; margin: 0 0 0 -6px } .switcher:hover, .selected { background: #27394b; box-shadow: -1px 1px 0px rgba(255,255,255,.4), inset 0 4px 5px rgba(0,0,0,.6), inset 0 1px 2px rgba(0,0,0,.6); }
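
    On the gradient point specifically, a rough sketch of the kind of rule that gets you there is below. The colour stops are guesses worked backwards from the #507190 and #27394b values already in the stylesheet, and a 2012-era browser would also need the -webkit-/-moz- prefixed forms alongside the standard syntax.

        /* Sketch only: the gradient stops are assumptions, tune them to match the mockup */
        .switcher {
            background: #507190; /* fallback for browsers without gradient support */
            background: linear-gradient(to bottom, #6c8cab, #507190);
            border: 1px solid #3C5D7E;
        }

        .switcher:hover,
        .selected {
            background: #27394b;
            background: linear-gradient(to bottom, #1d2c3b, #27394b);
        }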

    Read the article

  • How to manipulate file paths intelligently in .Net 3.0?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install - copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section: // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); in order to take care of a bug (the code is sensitive to a constant @"pending_install\" vs @"pending_install", which I did not like and changed (long story, but there was a good opportunity for constant reuse)). Now the whole function: //You want to uncompress only the files downloaded. Not every file in the dest directory. private void UncompressFiles() { string strSrcDir = _application.Client.TempDir; ArrayList arrFiles = new ArrayList(); GetAllCompressedFiles(ref arrFiles, strSrcDir); IEnumerator enumer = arrFiles.GetEnumerator(); while (enumer.MoveNext()) { string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty); // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile); strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty); ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile); FileUtility.DeleteFile(enumer.Current.ToString()); } } Please do not laugh at the use of ArrayList and the way it is being iterated - it was pioneered by a C++ coder during the .Net 1.1 era. I will change it. What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code. Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part which has to do with String.Replace(...) and Path.Combine(,). I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So ... given my constraints, how can I rework this so that it starts to suck less? Thanks!
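
    One way to avoid both the blind String.Replace() and the leading-backslash loop is to strip the temp-dir prefix explicitly and let Path.ChangeExtension() drop the archive extension. The sketch below only changes the inside of the while loop and keeps the _application.Client members from the question; it assumes the usual System and System.IO usings already present in the file.

        string tempDir = _application.Client.TempDir;   // ...\pending_install
        string baseDir = _application.Client.BaseDir;
        string fullSource = enumer.Current.ToString();

        // Remove the temp-dir prefix explicitly instead of using a global Replace()
        string strDestFile = fullSource.StartsWith(tempDir, StringComparison.OrdinalIgnoreCase)
            ? fullSource.Substring(tempDir.Length).TrimStart(Path.DirectorySeparatorChar)
            : fullSource;

        strDestFile = Path.Combine(baseDir, strDestFile);

        // Removes only the final extension, unlike Replace(Path.GetExtension(...), "")
        strDestFile = Path.ChangeExtension(strDestFile, null);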

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far, have managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features, or, depending on how you wish to look at it, introduced some new "features" of their own. But, unit testing seems to be an extremely brittle mechanism. You can test for something in "perfect" conditions, but you don't get to see how your code performs when stuff breaks. For instance, take a crawler: let's say it crawls a few specific sites for data X. Do you simply save sample pages, test against those, and hope that the sites never change? This would work fine as regression tests, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because the site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code? The above example is a bit contrived, and something I haven't run into (in case you haven't guessed). Let me pick something I have, though. How do you test an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or the other, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it's supposed to; but does it? The developer tests it personally by purposely setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in some code that modifies something subtly, so the degradation isn't detected in time, or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like this using unit tests, can you? Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay, but the data isn't really okay. You want the code to handle that, you write some code that you think does that. How do you test that it does exactly that for the life of the application? Can you? Hence, brittleness. Unit tests seem to test the code only in perfect conditions (and this is promoted, with mock objects and such), not what they'll face in the wild. Don't get me wrong, I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs in your code while feeling overconfident about its reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!
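
    One hedged illustration of how the network example can still get fast, repeatable coverage: put the flaky part behind an interface and hand the code a deliberately lossy fake. Every name below is hypothetical (NUnit-style attributes assumed), and this does not replace an end-to-end soak test against a genuinely bad network; it only pins down the graceful-degradation contract so that the subtle later change described above fails a build instead of shipping.

        // All types here are hypothetical stand-ins for the code under discussion.
        public interface INetworkChannel
        {
            bool TrySend(byte[] payload);   // false = packet lost
        }

        // Test double that drops every second packet to simulate a degraded stack
        public sealed class LossyChannel : INetworkChannel
        {
            private int _calls;
            public bool TrySend(byte[] payload)
            {
                return (_calls++ % 2) == 0;
            }
        }

        [Test]
        public void DoSomethingOverTheNetwork_DegradesGracefully_UnderPacketLoss()
        {
            var worker = new NetworkWorker(new LossyChannel());   // hypothetical class under test
            var outcome = worker.DoSomethingOverTheNetwork();

            Assert.IsTrue(outcome.CompletedDespiteLoss);          // degraded, but did not crash
        }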

    Read the article

  • How to manipulate file paths intelligently in .Net 3.5?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install - copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section: // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); in order to take care of a bug (the code is sensitive to a constant @"pending_install\" vs @"pending_install", which I did not like and changed (long story, but there was a good opportunity for constant reuse)). Now the whole function: //You want to uncompress only the files downloaded. Not every file in the dest directory. private void UncompressFiles() { string strSrcDir = _application.Client.TempDir; ArrayList arrFiles = new ArrayList(); GetAllCompressedFiles(ref arrFiles, strSrcDir); IEnumerator enumer = arrFiles.GetEnumerator(); while (enumer.MoveNext()) { string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty); // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile); strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty); ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile); FileUtility.DeleteFile(enumer.Current.ToString()); } } Please do not laugh at the use of ArrayList and the way it is being iterated - it was pioneered by a C++ coder during the .Net 1.1 era. I will change it. What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code. Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part which has to do with String.Replace(...) and Path.Combine(,). I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So ... given my constraints, how can I rework this so that it starts to suck less?

    Read the article

  • What is wrong in this c++ code?

    - by narayanpatra
    Why does this code not show an error? #include <iostream> int main() { using namespace std; unsigned short int myInt = 99; unsigned short int * pMark = 0; cout << myInt << endl; pMark = &myInt; *pMark = 11; cout << "*pMark:\t" << *pMark << "\nmyInt:\t" << myInt << endl; return 0; } But this one does: #include<iostream> using namespace std; int addnumber(int *p, int *q){ cout << *p = 12 << endl; cout << *q = 14 << endl; } int main() { int i , j; cout << "enter the value of first number"; cin >> i; cout << "enter the value of second number"; cin >> j; addnumber(&i, &j); cout << i << endl; cout << j << endl; } In both code snippets I am assigning *pointer = somevalue. The first one does not show any error, but the second one shows an error on the lines cout << *p = 12 << endl; cout << *q = 14 << endl; What mistake am I making?
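
    The difference is operator precedence, not the assignment through a pointer itself: << binds tighter than =, so cout << *p = 12 << endl is parsed as (cout << *p) = (12 << endl), and 12 << endl is ill-formed (in the first snippet, *pMark = 11; is a plain statement, so nothing competes with the assignment). Parenthesising the assignments, as in this corrected sketch of the second snippet, compiles and prints 12 and 14:

        #include <iostream>
        using namespace std;

        void addnumber(int *p, int *q) {
            cout << (*p = 12) << endl;   // parentheses force the assignment to happen first
            cout << (*q = 14) << endl;
        }

        int main() {
            int i = 0, j = 0;
            addnumber(&i, &j);
            cout << i << endl;   // 12
            cout << j << endl;   // 14
            return 0;
        }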

    Read the article

  • Enterprise Library Logging / Exception handling and Postsharp

    - by subodhnpushpak
    One of my colleagues came-up with a unique situation where it was required to create log files based on the input file which is uploaded. For example if A.xml is uploaded, the corresponding log file should be A_log.txt. I am a strong believer that Logging / EH / caching are cross-cutting architecture aspects and should be least invasive to the business-logic written in enterprise application. I have been using Enterprise Library for logging / EH (i use to work with Avanade, so i have affection towards the library!! :D ). I have been also using excellent library called PostSharp for cross cutting aspect. Here i present a solution with and without PostSharp all in a unit test. Please see full source code at end of the this blog post. But first, we need to tweak the enterprise library so that the log files are created at runtime based on input given. Below is Custom trace listner which writes log into a given file extracted out of Logentry extendedProperties property. using Microsoft.Practices.EnterpriseLibrary.Common.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners; using Microsoft.Practices.EnterpriseLibrary.Logging; using System.IO; using System.Text; using System; using System.Diagnostics;   namespace Subodh.Framework.Logging { [ConfigurationElementType(typeof(CustomTraceListenerData))] public class LogToFileTraceListener : CustomTraceListener {   private static object syncRoot = new object();   public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data) {   if ((data is LogEntry) & this.Formatter != null) { WriteOutToLog(this.Formatter.Format((LogEntry)data), (LogEntry)data); } else { WriteOutToLog(data.ToString(), (LogEntry)data); } }   public override void Write(string message) { Debug.Print(message.ToString()); }   public override void WriteLine(string message) { Debug.Print(message.ToString()); }   private void WriteOutToLog(string BodyText, LogEntry logentry) { try { //Get the filelocation from the extended properties if (logentry.ExtendedProperties.ContainsKey("filelocation")) { string fullPath = Path.GetFullPath(logentry.ExtendedProperties["filelocation"].ToString());   //Create the directory where the log file is written to if it does not exist. DirectoryInfo directoryInfo = new DirectoryInfo(Path.GetDirectoryName(fullPath));   if (directoryInfo.Exists == false) { directoryInfo.Create(); }   //Lock the file to prevent another process from using this file //as data is being written to it.   
lock (syncRoot) { using (FileStream fs = new FileStream(fullPath, FileMode.Append, FileAccess.Write, FileShare.Write, 4096, true)) { using (StreamWriter sw = new StreamWriter(fs, Encoding.UTF8)) { Log(BodyText, sw); sw.Close(); } fs.Close(); } } } } catch (Exception ex) { throw new LoggingException(ex.Message, ex); } }   /// <summary> /// Write message to named file /// </summary> public static void Log(string logMessage, TextWriter w) { w.WriteLine("{0}", logMessage); } } }   The above can be “plugged into” the code using below configuration <loggingConfiguration name="Logging Application Block" tracingEnabled="true" defaultCategory="Trace" logWarningsWhenNoCategoriesMatch="true"> <listeners> <add listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.CustomTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" traceOutputOptions="None" filter="All" type="Subodh.Framework.Logging.LogToFileTraceListener, Subodh.Framework.Logging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Subodh Custom Trace Listener" initializeData="" formatter="Text Formatter" /> </listeners> Similarly we can use PostSharp to expose the above as cross cutting aspects as below using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; using PostSharp.Laos; using System.Diagnostics; using GC.FrameworkServices.ExceptionHandler; using Subodh.Framework.Logging;   namespace Subodh.Framework.ExceptionHandling { [Serializable] public sealed class LogExceptionAttribute : OnExceptionAspect { private string prefix; private MethodFormatStrings formatStrings;   // This field is not serialized. It is used only at compile time. [NonSerialized] private readonly Type exceptionType; private string fileName;   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception flowing out of the methods to which /// the custom attribute is applied. /// </summary> public LogExceptionAttribute() { }   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception derived from a given <see cref="Type"/> /// flowing out of the methods to which /// the custom attribute is applied. /// </summary> /// <param name="exceptionType"></param> public LogExceptionAttribute( Type exceptionType ) { this.exceptionType = exceptionType; }   public LogExceptionAttribute(Type exceptionType, string fileName) { this.exceptionType = exceptionType; this.fileName = fileName; }   /// <summary> /// Gets or sets the prefix string, printed before every trace message. /// </summary> /// <value> /// For instance <c>[Exception]</c>. /// </value> public string Prefix { get { return this.prefix; } set { this.prefix = value; } }   /// <summary> /// Initializes the current object. Called at compile time by PostSharp. /// </summary> /// <param name="method">Method to which the current instance is /// associated.</param> public override void CompileTimeInitialize( MethodBase method ) { // We just initialize our fields. They will be serialized at compile-time // and deserialized at runtime. 
this.formatStrings = Formatter.GetMethodFormatStrings( method ); this.prefix = Formatter.NormalizePrefix( this.prefix ); }   public override Type GetExceptionType( MethodBase method ) { return this.exceptionType; }   /// <summary> /// Method executed when an exception occurs in the methods to which the current /// custom attribute has been applied. We just write a record to the tracing /// subsystem. /// </summary> /// <param name="context">Event arguments specifying which method /// is being called and with which parameters.</param> public override void OnException( MethodExecutionEventArgs context ) { string message = String.Format("{0}Exception {1} {{{2}}} in {{{3}}}. \r\n\r\nStack Trace {4}", this.prefix, context.Exception.GetType().Name, context.Exception.Message, this.formatStrings.Format(context.Instance, context.Method, context.GetReadOnlyArgumentArray()), context.Exception.StackTrace); if(!string.IsNullOrEmpty(fileName)) { ApplicationLogger.LogException(message, fileName); } else { ApplicationLogger.LogException(message, Source.UtilityService); } } } } To use the above below is the unit test [TestMethod] [ExpectedException(typeof(NotImplementedException))] public void TestMethod1() { MethodThrowingExceptionForLog(); try { MethodThrowingExceptionForLogWithPostSharp(); } catch (NotImplementedException ex) { throw ex; } }   private void MethodThrowingExceptionForLog() { try { throw new NotImplementedException(); } catch (NotImplementedException ex) { // create file and then write log ApplicationLogger.TraceMessage("this is a trace message which will be logged in Test1MyFile", @"D:\EL\Test1Myfile.txt"); ApplicationLogger.TraceMessage("this is a trace message which will be logged in YetAnotherTest1Myfile", @"D:\EL\YetAnotherTest1Myfile.txt"); } }   // Automatically log details using attributes // Log exception using attributes .... A La WCF [FaultContract(typeof(FaultMessage))] style] [Log(@"D:\EL\Test1MyfileLogPostsharp.txt")] [LogException(typeof(NotImplementedException), @"D:\EL\Test1MyfileExceptionPostsharp.txt")] private void MethodThrowingExceptionForLogWithPostSharp() { throw new NotImplementedException(); } The good thing about the approach is that all the logging and EH is done at centralized location controlled by PostSharp. Of Course, if some other library has to be used instead of EL, it can easily be plugged in. Also, the coder ARE ONLY involved in writing business code in methods, which makes code cleaner. Here is the full source code. The third party assemblies provided are from EL and PostSharp and i presume you will find these useful. Do let me know your thoughts / ideas on the same. Technorati Tags: PostSharp,Enterprize library,C#,Logging,Exception handling

    Read the article

  • CodePlex Daily Summary for Monday, March 22, 2010

    CodePlex Daily Summary for Monday, March 22, 2010New Projects[Tool] Vczh Non-public DLL Classes Caller: Generate C# code for you to call non-public classes in DLLs very easily.Artefact Animator: Artefact Animator provides an easy to use framework for procedural time-based animations in Silverlight and WPF.cacheroo: Cacheroo is a social networking community that will make it easier for people who love geocaching to get connected.Data Processing Toolkit: An utility app to collected data from different sources (i.e. bugzilla bug reports) in a structured way. We are currently setting up the site. Mo...eXternal SQL Bridge (PHP): The eXternal SQL Bridge (XSB) allows you to bridge two websites together in a secure manner through pre-shared keys. XSB is resilient against repla...'G' - Language to Define Gestures for Touch Based Applications: A cross plat form multi-touch application framework with a language to define gestures. The application is build on Silverlight 4.0 and the languag...IIS Network Diagnostic Tools: Web implementation of "looking glass" like services (ping, traceroute) as HTTP modules for Internet Information Services.Interop Router: This project establishes a communication framework and job dispatcher for a mixed operating system cluster environment.L2 Commander: L2Commander makes it easier for both new and old l2j users to manage your server.You no longer have to waste time on finding the files you need and...MediaHelper: A utility to help clean up empty/unwanted files and folders in your filesystem.mhinze: matt hinze stuffOneMan: Focus on Silverlight and WCF technology.Rss Photo Frame Android Widget: RSS Photo Frame Android Widget permits showing pictures from any RSS feed on your Android device's desktopSingle Web Session: Web Tool Kits Current project provide developer with different tools that help to enhance web site performance, security, and other common functio...Work Item Visualization: Use DGML to visualize and analyze your TFS Work Items. Included is the ability to perform basic risk/impact analysis. It helps answer the question,...New Releases[Tool] Vczh Non-public DLL Classes Caller: Wrapper Coder (beta): Click "<Click Me To Open Assembly File>", WrapperCoder will load the assembly and referenced assembly. Check the non-public classes that you want...APS - Automatic Print Screen: APS 1.0: APS automatizes the tasks of paste the image in Paint and save it after print screen or alt+print screen. Choose directory, name and file extension...BTP Tools: e-Sword generator build 20100321: 1. Modify the indent after subtitle. 2. Add 2 spaces after subtitle.Combres - WebForm & MVC Client-side Resource Combine Library: Combres 2.0: Changes since last version (1.2) Support ignore Combres pipeline in debug mode - see issue #6088 Debug mode generates comment helping identify in...Desafio Office 2010 Brasil: DesafioOutlook: Controlando um robo com o Outlook 2010dylan.NET: dylan.NET v. 9.4: Adding Platform Invocation Services Support, full Managed Pointer Support, Charset,Dllimport,Callconv setting for P/Invoke, MarshalAs for parametersFamily Tree Analyzer: Version 1.3.2.0: Version 1.3.2.0 Add open folder button to IGI Search Form Fixes to Fact Location processing - IGIName renamed to RegionID Fix if Region ID not fou...Fasterflect - A Fast and Simple Reflection API: Fasterflect 2.0: We are pleased to release version 2.0 of Fasterflect, which contains a lot of additions and improvements from the previous version. 
Please refer t...IIS Network Diagnostic Tools: 1.0: Initial public release.Informant: Informant (Desktop) v0.1: This release allows users to send sms messages to 1-Many Groups or 1-Many contacts. It is a very basic release of the application. No styling has b...InfoService: InfoService v1.5 - MPE1 Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.InfoService: InfoService v1.5 - RAR Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.L2 Commander: Source Code Link: Where to find our source.ModularCMS: ModularCMS 1.2: Minor bug fixes.NMTools: NMTools-v40b0-20100321-0: The most noticeable aspect of this release is that NMTools is now an independent project. It will no longer tied to OpenSLIM. Nevertheless, OpenSLI...SharePoint LogViewer: SharePoint LogViewer 1.5.3: Log loading performance enhanced. Search text box now has auto complete feature.Single Web Session: Single Web Session: !Single Web Session! <httpModules> <add name="SingleSession" type="SingleWebSession.Model.WebSessionModule, SingleWebSession"/> </httpModules>Sprite Sheet Packer: 2.1 Release: Made a few crucial fixes from 2.0: - Fixed error with paths having spaces. - Fixed error with UI not unlocking. - Fixed NullReferenceException on ...uManage - AD Self-Service Portal: uManage v1.1 (.NET 4.0 RC): Updated Releasev1.1 Adds the primary ability to setup and configure the application through a setup wizard. The setup wizard will continue to evol...VCC: Latest build, v2.1.30321.0: Automatic drop of latest buildVS ChessMania: VS ChessMania V2 March Beta: Second Beta Release with move correction and making application more safe for user. New features will be added soon.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.9.00: Whats New Added New Toolbar Plugin (By Kent Safransk) 'MediaEmbed' to Include Embed Media from Youtube, Vimeo, etc. Media Embed Plugin Added New ...WeatherBar: WeatherBar 1.0 [No Installation]: Extract the ZIP archive and run WeatherBar.exe. Current release contains some bugs that will be fixed in the next version. Check the Issue Tracker...Work Item Visualization: Release 1.0: This is the initial release of the Work Item Visualization tool. There are no known issues when it comes to the visualization aspects of the tool b...WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.10: Version: 1.0.0.10 (Milestone 10): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requi...WPF AutoComplete TextBox Control: Version 1.2: What's Newadds AutoAppend feature adds a new provider: UrlHistoryDataProvider sample application is updated to reflect the new things Bug Fixe...ZoomBarPlus: V2 (Beta): - Fixed bug: if the active window changed while you were in the middle of a single tap delay, long tap delay, or swipe-repeat, it would continue re...Most Popular ProjectsMetaSharpSavvy DateTimeRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)Most Active ProjectsLINQ to TwitterRawrOData SDK for PHPjQuery Library for SharePoint Web ServicesDirectQPHPExcelFarseer Physics Enginepatterns & practices – Enterprise LibraryBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Too Many Kittens To Juggle At Once

    - by Bil Simser
    Ahh, the Internet. That crazy, mixed up place where one tweet turns into a conversation between dozens of people and spawns a blogpost. This is the direct result of such an event this morning. It started innocently enough, with this: Then followed up by a blog post by Joel here. In the post, Joel introduces us to the term Business Solutions Architect with mad skillz like InfoPath, Access Services, Excel Services, building Workflows, and SSRS report creation, all while meeting the business needs of users in a SharePoint environment. I somewhat disagreed with Joel that this really wasn’t a new role (at least IMHO) and that a good Architect or BA should really be doing this job. As Joel pointed out when you’re building a SharePoint team this kind of role is often overlooked. Engineers might be able to build workflows but is the right workflow for the right problem? Michael Pisarek wrote about a SharePoint Business Architect a few months ago and it’s a pretty solid assessment. Again, I argue you really shouldn’t be looking for roles that don’t exist and I don’t suggest anyone create roles to hire people to fill them. That’s basically creating a solution looking for problems. Michael’s article does have some great points if you’re lost in the quagmire of SharePoint duties though (and I especially like John Ross’ quote “The coolest shit is worthless if it doesn’t meet business needs”). SharePoinTony summed it up nicely with “SharePoint Solutions knowledge is both lacking and underrated in most environments. Roles help”. Having someone on the team who can dance between a business user and a coder can be difficult. Remember the idea of telling something to someone and them passing it on to the next person. By the time the story comes round the circle it’s a shadow of it’s former self with little resemblance to the original tale. This is very much business requirements as they’re told by the user to a business analyst, written down on paper, read by an architect, tuned into a solution plan, and implemented by a developer. Transformations between what was said, what was heard, what was written down, and what was developed can be distant cousins. Not everyone has the skill of communication and even less have negotiation skills to suit the SharePoint platform. Negotiation is important because not everything can be (or should be) done in SharePoint. Sometimes it’s just not appropriate to build it on the SharePoint platform but someone needs to know enough about the platform and what limitations it might have, then communicate that (and/or negotiate) with a customer or user so it’s not about “You can’t have this” to “Let’s try it this way”. Visualize the possible instead of denying the impossible. So what is the right SharePoint team? My cromag brain came with a fairly simpleton answer (and I’m sure people will just say this is a cop-out). The perfect SharePoint team is just enough people to do the job that know the technology and business problem they’re solving. Bridge the gap between business need and technology platform and you have an architect. Communicate the needs of the business effectively so the entire team understands it and you have a business analyst. Can you get this with full time workers? Maybe but don’t expect miracles out of the gate. Also don’t take a consultant’s word as gospel. Some consultants just don’t have the diversity of the SharePoint platform to be worth their value so be careful. 
You really need someone who knows enough about SharePoint to be able to validate a consultant’s knowledge level. This is basically true for any consultant, not just a SharePoint one. Specialization is good and needed. A good, well-balanced SharePoint team is one of people who can solve problems and work with the technology, not against it. Having a top developer is great, but don’t rely on them to solve world hunger if they can’t communicate very well with users. An expert business analyst might be great at gathering requirements so the entire team can understand them, but if that means building 100% custom solutions because they don’t fit inside the SharePoint boundaries, it isn’t of much value. Just repeat. There is no silver bullet. There is no silver bullet. There is no silver bullet. A few people pointed out Nick Inglis’ article Excluding The Information Professional In SharePoint. It’s a good read too and hits home that maybe some developers and IT pros need some extra help in the information space. If you’re in an organization that needs labels on people, come up with something everyone understands and go with it. If that’s Business Solutions Architect, SharePoint Advisor, or Guy Who Knows A Lot About Portals, make it work for you. We all wish that one person could master all that is SharePoint but we also know that doesn’t scale very well and you quickly get into the hit-by-a-bus syndrome (with the organization slowing to a crawl when the guy or girl goes on vacation, gets sick, or pops out a baby). There are too many gaps in SharePoint knowledge to have any one person know it all and too many kittens to juggle all at once. We like to consider ourselves experts in our field, but when we try to tackle too many roles at once we end up being a mediocre jack of all trades, master of none. Don't fall into this pit. It's a deep, dark hole you don't want to try to claw your way out of. Trust me. Been there. Done that. Got the t-shirt. In the end I don’t disagree with Joel. SharePoint is a beast and not something that should be taken on by newbies. If you just read “Teach Yourself SharePoint in 24 Hours” and want to go build your corporate intranet or the next killer business solution with all your new-found knowledge, plan to pony up consultant dollars a few months later when everything goes to Hell in a handbasket and falls over. I’m not saying don’t build solutions in SharePoint. I’m just saying that building effective ones takes skill like any craft and not something you can just cobble together with a little bit of cursory knowledge. Thanks to *everyone* who participated in this tweet rush. It was fun and educational.

    Read the article

  • A Good Developer is So Hard to Find

    - by James Michael Hare
    Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever – 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones.  The problem is, it seems like no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations or someone will get passed over for missing a piece of trivia or a tricky brain teaser that could have been an excellent team member.   In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc) there often exists a much tighter bond between HR and the hiring development staff because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth.  It seems that if you ask two different people what makes a good developer, often you will get three different opinions.   With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management who have very little knowledge about the development process.  Their view can often be that development is simply a skill that one learns and then once aquired, that developer can produce widgets as good as the next like workers on an assembly-line floor.  On the other side, you have many developers that feel that software development is an art unto itself and that the ability to create the most pure design or know the most obscure of keywords or write the shortest-possible obfuscated piece of code is a good coder.  So is it a skill?  An Art?  Or something entirely in between?   Saying that software is merely a skill and one just needs to learn the syntax and tools would be akin to saying anyone who knows English and can use Word can write a 300 page book that is accurate, meaningful, and stays true to the point.  This just isn't so.  It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document.  I've interviewed candidates who could answer obscure syntax and keyword questions and once they were hired could not code effectively at all.  So development must be more than a skill.   But on the other end, we have art.  Is development an art?  Is our end result to produce art?  I can marvel at a piece of code -- see it as concise and beautiful -- and yet that code most perform some stated function with accuracy and efficiency and maintainability.  None of these three things have anything to do with art, per se.  Art is beauty for its own sake and is a wonderful thing.  But if you apply that same though to development it just doesn't hold.  I've had developers tell me that all that matters is the end result and how you code it is entirely part of the art and I couldn't disagree more.  Yes, the end result, the accuracy, is the prime criteria to be met.  But if code is not maintainable and efficient, it would be just as useless as a beautiful car that breaks down once a week or that gets 2 miles to the gallon.  
Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution.  So development must be something less than art.   In the end, I think I feel like development is a matter of craftsmanship.  We use our tools and we use our skills and set about to construct something that satisfies a purpose and yet is also elegant and efficient.  There is skill involved, and there is an art, but really it boils down to being able to craft code.  Crafting code is far more than writing code.  Anyone can write code if they know the syntax, but so few people can actually craft code that solves a purpose and craft it well.  So this is what I want to find, I want to find code craftsman!  But how?   I used to ask coding-trivia questions a long time ago and many people still fall back on this.  The thought is that if you ask the candidate some piece of coding trivia and they know the answer it must follow that they can craft good code.  For example:   What C++ keyword can be applied to a class/struct field to allow it to be changed even from a const-instance of that class/struct?  (answer: mutable)   So what do we prove if a candidate can answer this?  Only that they know what mutable means.  One would hope that this would infer that they'd know how to use it, and more importantly when and if it should ever be used!  But it rarely does!  The problem with triva questions is that you will either: Approve a really good developer who knows what some obscure keyword is (good) Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad) Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad) Many HR departments love these kind of tests because they are short and easy to defend if a legal issue arrises on hiring decisions.  After all it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test.  But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds.  There are times I've hired candidates who knew every trivia question I could throw out them and couldn't craft.  And then there are times I've interviewed candidates who failed all my trivia but who I took a chance on who were my best finds ever.    So if not trivia, then what?  Brain teasers?  The thought is, these type of questions measure the thinking power of a candidate.  The problem is, once again, you will either: Approve a good candidate who has never heard the problem and can solve it (good) Reject a good candidate who just happens not to see the "catch" because they're nervous or it may be really obscure (bad) Approve a candidate who has studied enough interview brain teasers (once again, you can google em) to recognize the "catch" or knows the answer already (bad). Once again, you're eliminating good candidates and possibly accepting bad candidates.  In these cases, I think testing someone with brain teasers only tests their ability to answer brain teasers, not the ability to craft code. So how do we measure someone's ability to craft code?  Here's a novel idea: have them code!  Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil and have them construct a piece of code.  It just makes sense that if we're going to hire someone to code we should actually watch them code.  
When they're done, we can judge them on several criteria: Correctness - does the candidate's solution accurately solve the problem proposed? Accuracy - is the candidate's solution reasonably syntactically correct? Efficiency - did the candidate write or use the more efficient data structures or algorithms for the job? Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability? Persona - are they eager and willing or aloof and egotistical?  Will they work well within your team? It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code.
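
    For readers who want that trivia answer made concrete, here is a minimal illustration of the mutable question quoted earlier in the article (C++11 or later syntax):

        #include <iostream>

        struct Widget {
            mutable int reads = 0;   // may be modified even through a const object
            int value = 42;

            int get() const {
                ++reads;             // legal only because 'reads' is declared mutable
                return value;
            }
        };

        int main() {
            const Widget w{};        // const instance
            std::cout << w.get() << " (reads: " << w.reads << ")\n";
            return 0;
        }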

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx

    Kent Beck's XP Simplicity Rules (aka the Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you'll see from the above link, the list has evolved slightly over time. I find today they are usually listed as: 1) All Tests Pass, 2) Don't Repeat Yourself (DRY), 3) Express Intent, 4) Minimalistic. These are prioritized. If your code doesn't work (rule 1), then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else.

    Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I've debated it myself. This recent post got me thinking about it again, hence this post.

    I don't think it is fair to compare "Expressing Intent" against "DRY". This is a comparison of apples to oranges. "Expressing Intent" is a principle of code quality. "Repeating Yourself" is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine whether a violation of an underlying principle of code quality has actually occurred.

    For example, "using nouns for method names", "using verbs for property names", or "using Booleans for parameters" are all code smells that indicate the code probably isn't doing a good job of expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good an indicator is it?

    Duplication in the code base is bad for a couple of reasons. If a change needs to be made in a number of locations, it is difficult to know whether you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication you are left with only one place to change, thereby eliminating this problem.

    With most projects the code becomes the single source of truth for the project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared the current reality (or truth). Requirements or design documents at that age in a project life cycle are usually of little value.

    Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes: which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

    The code smell of Duplication points to a violation of the "Single Source of Truth" principle. Let me define that as: a stakeholder's requirement for a software change should never cause more than one class to change.

    Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle.

    To illustrate this, let's look at a retail system that will (1) send a transaction to a bank and (2) print a receipt for the customer. Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, and so on. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

    Then come the divergent requirements. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer's privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer's loyalty program point total.

    After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle: there should never be more than one reason for a class to change.

    In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That's a problem because it can often lead the project owner to pit the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn't exist. There are two completely valid truths that the developers need to support. How is this to be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy.

    The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility; other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

    In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you'll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY, or that the code is bad.
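    To make the retail example concrete, here is a minimal Python sketch of the end state the post argues for. All class, field, and method names are my own illustration, not from the original article; the point is simply that the two stakeholders end up with their own deliberately duplicated data classes instead of one shared data access layer.

```python
# Hypothetical sketch of the retail example: the bank and the receipt owner
# are separate sources of truth, so each gets its own class even though the
# fields overlap. The duplication is intentional (SRP), not an SST violation.

class BankTransactionData:
    """Owned by the bank stakeholder: needs the full account number."""
    def __init__(self, amount, account_number, customer_name):
        self.amount = amount
        self.account_number = account_number
        self.customer_name = customer_name


class ReceiptData:
    """Owned by the receipt stakeholder: masks the account number, adds loyalty points."""
    def __init__(self, amount, account_number, customer_name, loyalty_points=0):
        self.amount = amount
        # the receipt stakeholder's privacy requirement: keep only the last 4 digits
        self.masked_account = "*" * max(len(account_number) - 4, 0) + account_number[-4:]
        self.customer_name = customer_name
        self.loyalty_points = loyalty_points


if __name__ == "__main__":
    txn = BankTransactionData(19.99, "1234567890123456", "Pat Smith")
    receipt = ReceiptData(19.99, "1234567890123456", "Pat Smith", loyalty_points=120)
    print(receipt.masked_account)  # prints ************3456
```

    A change requested by the bank now only touches BankTransactionData, and a change requested by the receipt owner only touches ReceiptData, which is exactly the "never more than one class per stakeholder change" property described above.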

    Read the article

  • Unable to set nginx to serve my staging website

    - by user100778
    I'm having some troubles setting up nginx to serve my staging website. What I did is change the server_name but for some reasons it just doesn't work. The url scheme is "domain.foo" is production, "staging.domain.foo" is staging, "foobar.domain.foo" is a web service, "foobar.staging.domain.foo" is the staging version of the same webserver, ".domain.foo" is routed to serve some s3 static HTML, ".staging.domain.foo" is routed to serve some s3 static HTML in another bucket. All production urls work and are correctly configured, all staging urls doesn't work. Here is my conf file. You will see some duplication, I will gladly accept any correction/optimization, I'm a coder and configuring servers is definitely not my thing (but I'm eager to learn and improve...). server { listen 80; ## listen for ipv4 server_name "domain.foo" "www.domain.foo" default_server; access_log /var/log/nginx/access.log; client_max_body_size 5M; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; location ~* \.(jpg|jpeg|gif|png|ico|css|bmp|js|html)$ { access_log off; expires max; root /home/foo/Foo/current/public; break; } if ($host ~ 'www.domain.foo') { rewrite ^/(.*)$ http://domain/foo/$1 permanent; } proxy_pass http://production; break; } } server { listen 80; server_name "staging.domain.foo"; access_log /var/log/nginx/access.staging.log; error_log /var/log/nginx/error.staging.log; client_max_body_size 5M; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://staging; break; } } server { listen 80; ## listen for ipv4 server_name "foobar.domain.foo"; access_log /var/log/nginx/access.log; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if ($host = 'foobar.domain.foo') { proxy_pass http://foobar; break; } } } server { listen 80; ## listen for ipv4 server_name foobar.staging.domain.foo; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://foobar_staging; break; } } server { listen 80; server_name "~^(.+)\.domain\.foo$"; location / { proxy_intercept_errors on; error_page 404 = http://domain.foo/404; set $subdomain $1; rewrite /$ "/$subdomain/index.html" break; rewrite ^ /$subdomain$request_uri? break; proxy_pass http://bucket.domain.foo.s3.amazonaws.com; } } server { listen 80; server_name "~^(.+)\.staging\.domain\.foo$"; location / { proxy_intercept_errors on; set $subdomain $1; rewrite /$ "/$subdomain/index.html" break; rewrite ^ /$subdomain$request_uri? break; proxy_pass http://bucket.staging.domain.foo.s3.amazonaws.com; } } upstream production { server 111.255.111.110:8000; server 111.255.111.110:8001; server 111.255.111.110:8002; server 111.255.111.110:8003; } upstream staging { server 222.255.222.222:8000; server 222.255.222.222:8001; } upstream foobar { server 111.255.222.165:9000; server 111.255.222.165:9001; server 111.255.222.165:9002; } upstream foobar_staging { server 222.255.222.222:9000; } What happens now when I point my browser to staging.domain.foo is that it hangs. Can't find anything in the logs, but for example the access.staging.log and errors.staging.log are created. Anybody has an idea? :)
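    A few checks that might help narrow down where staging.domain.foo hangs before changing the config further; this is only a debugging sketch, and the 203.0.113.10 address is a placeholder for the server's public IP:

```sh
# Make sure the config that nginx is actually running is the one you edited
nginx -t && nginx -s reload

# Bypass DNS: hit the box by IP and force the Host header, to see whether
# nginx itself answers for the staging server_name
curl -v -H "Host: staging.domain.foo" http://203.0.113.10/

# Talk to one staging upstream directly, to rule out a hanging backend
curl -v http://222.255.222.222:8000/

# Confirm the staging hostname actually resolves to the same machine
dig +short staging.domain.foo domain.foo
```

    If the direct-IP curl with the forced Host header works but the real hostname does not, the problem is DNS rather than the nginx configuration; if the upstream curl hangs, the staging backends on 222.255.222.222 are the likelier culprit.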

    Read the article

  • ASP.net FFMPEG video conversion receiving error: "Error number -2 occurred"

    - by Pete
    Hello, I am attempting to integrate FFMPEG into my asp.net website. The process I am trying to complete is to upload a video, check if it is .avi, .mov, or .wmv and then convert this video into an mp4 using x264 so my flash player can play it. I am using an http handler (ashx) file to handle my upload. This is where I am also putting my conversion code. I am not sure if this is the best place to put it, but I wanted to see if i could at least get it working. Additionally, I was able to complete the conversion manually through cmd line. The error -2 comes up when i output the standard error from the process I executed. This is the error i receive: FFmpeg version SVN-r23001, Copyright (c) 2000-2010 the FFmpeg developers built on May 1 2010 06:06:15 with gcc 4.4.2 configuration: --enable-memalign-hack --cross-prefix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --arch=i686 --target-os=mingw32 --enable-runtime-cpudetect --enable-avisynth --enable-gpl --enable-version3 --enable-bzlib --enable-libgsm --enable-libfaad --enable-pthreads --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid --enable-libschroedinger --enable-libx264 --enable-libopencore_amrwb --enable-libopencore_amrnb libavutil 50.15. 0 / 50.15. 0 libavcodec 52.66. 0 / 52.66. 0 libavformat 52.61. 0 / 52.61. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0.10. 0 / 0.10. 0 532010_Robotica_720.wmv: Error number -2 occurred here is the code below: <%@ WebHandler Language="VB" Class="upload" %> Imports System Imports System.Web Imports System.IO Imports System.Diagnostics Imports System.Threading Public Class upload : Implements IHttpHandler Public currentTime As System.DateTime Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest currentTime = System.DateTime.Now If (Not context.Request.Files("Filedata") Is Nothing) Then Dim file As HttpPostedFile : file = context.Request.Files("Filedata") Dim targetDirectory As String : targetDirectory = HttpContext.Current.Server.MapPath(context.Request("folder")) Dim targetFilePath As String : targetFilePath = Path.Combine(targetDirectory, currentTime.Month & currentTime.Day & currentTime.Year & "_" & file.FileName) Dim fileNameArray As String() fileNameArray = Split(file.FileName, ".") If (System.IO.File.Exists(targetFilePath)) Then System.IO.File.Delete(targetFilePath) End If file.SaveAs(targetFilePath) Select Case fileNameArray(UBound(fileNameArray)) Case "avi", "mov", "wmv" Dim fileargs As String = fileargs = "-y -i " & currentTime.Month & currentTime.Day & currentTime.Year & "_" & file.FileName & " -ab 96k -vcodec libx264 -vpre normal -level 41 " fileargs += "-crf 25 -bufsize 20000k -maxrate 25000k -g 250 -r 20 -s 900x506 -coder 1 -flags +loop " fileargs += "-cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 7 -me_range 16 -keyint_min 25 " fileargs += "-sc_threshold 40 -i_qfactor 0.71 -rc_eq 'blurCplx^(1-qComp)' -bf 16 -b_strategy 1 -bidir_refine 1 " fileargs += "-refs 6 -deblockalpha 0 -deblockbeta 0 -f mp4 " & currentTime.Month & currentTime.Day & currentTime.Year & "_" & file.FileName & ".mp4" Dim proc As New Diagnostics.Process() proc.StartInfo.FileName "ffmpeg.exe" proc.StartInfo.Arguments = fileargs proc.StartInfo.UseShellExecute = False proc.StartInfo.CreateNoWindow = True proc.StartInfo.RedirectStandardOutput = True proc.StartInfo.RedirectStandardError = True AddHandler proc.OutputDataReceived, AddressOf SaveTextToFile proc.Start() SaveTextToFile2(proc.StandardError.ReadToEnd()) 
proc.WaitForExit() proc.Close() End Select Catch ex As System.IO.IOException Thread.Sleep(2000) GoTo Conversion Finally context.Response.Write("1") End Try End If End Sub Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable Get Return False End Get End Property Private Shared Sub SaveTextToFile(ByVal sendingProcess As Object, ByVal strData As DataReceivedEventArgs) Dim FullPath As String = "text.txt" Dim Contents As String = "" Dim objReader As StreamWriter objReader = New StreamWriter(FullPath) If Not String.IsNullOrEmpty(strData.Data) Then objReader.Write(Environment.NewLine + strData.Data) End If objReader.Close() End Sub Private Sub SaveTextToFile2(ByVal strData As String) Dim FullPath As String = "texterror.txt" Dim Contents As String = "" Dim objReader As StreamWriter objReader = New StreamWriter(FullPath) objReader.Write(Environment.NewLine + strData) objReader.Close() End Sub End Class
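    FFmpeg's "Error number -2" usually corresponds to "No such file or directory", so the first thing worth checking is whether ffmpeg.exe can actually see the input file from the worker process's working directory (note also that, as posted, proc.StartInfo.FileName "ffmpeg.exe" is missing its "="). Below is only a rough sketch of the launch code inside ProcessRequest, reusing the handler's context, targetDirectory and targetFilePath variables; the ~/bin/ffmpeg.exe location is a hypothetical choice, and the argument list is trimmed to the essentials:

```vb
' Sketch only: full, quoted paths plus an explicit working directory, so that
' ffmpeg is not left guessing where the uploaded file lives.
Dim ffmpegPath As String = context.Server.MapPath("~/bin/ffmpeg.exe") ' hypothetical location
Dim outputPath As String = targetFilePath & ".mp4"

Dim proc As New System.Diagnostics.Process()
proc.StartInfo.FileName = ffmpegPath
proc.StartInfo.Arguments = "-y -i """ & targetFilePath & """ -vcodec libx264 -vpre normal " & _
                           "-crf 25 -f mp4 """ & outputPath & """"
proc.StartInfo.WorkingDirectory = targetDirectory
proc.StartInfo.UseShellExecute = False
proc.StartInfo.CreateNoWindow = True
proc.StartInfo.RedirectStandardError = True
proc.Start()
Dim ffmpegLog As String = proc.StandardError.ReadToEnd() ' ffmpeg writes its progress to stderr
proc.WaitForExit()
proc.Close()
```

    If the error persists with absolute paths, writing ffmpegLog to a file (as the handler already does) will show exactly which path ffmpeg is complaining about.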

    Read the article

  • How to make a tree view from MySQL and PHP and jquery

    - by Mac Taylor
    hey guys i need to show a treeview of my categories , saved in my mysql database . Database table : table : cats : columns: id,name,parent Here is a sample of what I want the markup to be like: <ul id="browser" class="filetree"> <li><span class="folder">Folder 1</span> <ul> <li><span class="file">Item 1.1</span></li> </ul> </li> <li><span class="folder">Folder 2</span> <ul> <li><span class="folder">Subfolder 2.1</span> <ul id="folder21"> <li><span class="file">File 2.1.1</span></li> <li><span class="file">File 2.1.2</span></li> </ul> </li> <li><span class="file">File 2.2</span></li> </ul> </li> <li><span class="file">File 4</span></li> </ul> i used this script to show treeview : http://www.dynamicdrive.com/dynamicindex1/treeview now problem is in php part : //function to build tree menu from db table test1 function tree_set($index) { global $menu; $q=mysql_query("select * from cats where parent='$index'"); if(!mysql_num_rows($q)) return; $menu .= '<ul>'."\n"; while($arr=mysql_fetch_assoc($q)) { $menu .= '<li>'; $menu .= '<span class="file">'.$arr['name'].'</span>';//you can add another output there $menu .=tree_set("".$arr['id'].""); $menu .= '</li>'."\n"; } $menu.= '</ul>'."\n"; return $menu; } //variable $menu must be defined before the function call $menu = ' <link rel="stylesheet" href="modules/Topics/includes/jquery.treeview.css" /> <script src="modules/Topics/includes/lib/jquery.cookie.js" type="text/javascript"></script> <script src="modules/Topics/includes/jquery.treeview.js" type="text/javascript"></script> <script type="text/javascript" src="modules/Topics/includes/demo/demo.js"></script> <ul id="browser" class="filetree">'."\n"; $menu .= tree_set(0); $menu .= '</ul>'; echo $menu; i even asked in this forum : http://forums.tizag.com/showthread.php?p=60649 problem is in php part of my codes that i mentioned . i cant show sub menus , i mean , really i dont know how to show sub menus is there any chance of a pro php coder helping me here ?
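    One approach that usually makes the sub-menu part easier is to load the whole cats table once, group the rows by parent, and then build the nested lists recursively; that way a node gets the "folder" span and a child <ul> only when it actually has children. This is just a sketch (it assumes a PDO connection in $pdo instead of the old mysql_* calls, and top-level rows with parent = 0):

```php
<?php
// Sketch: adjacency-list tree rendered as the treeview markup shown above.
$rows = $pdo->query("SELECT id, name, parent FROM cats")->fetchAll(PDO::FETCH_ASSOC);

// Group children by parent id so the recursion never has to hit the database.
$children = array();
foreach ($rows as $row) {
    $children[$row['parent']][] = $row;
}

function build_tree($parent, $children) {
    if (empty($children[$parent])) {
        return '';
    }
    $html = '';
    foreach ($children[$parent] as $cat) {
        $hasKids = !empty($children[$cat['id']]);
        $class   = $hasKids ? 'folder' : 'file';
        $html   .= '<li><span class="' . $class . '">' . htmlspecialchars($cat['name']) . '</span>';
        if ($hasKids) {
            // sub menus: a nested <ul> inside the parent <li>, as in the sample markup
            $html .= "\n<ul>\n" . build_tree($cat['id'], $children) . "</ul>\n";
        }
        $html .= "</li>\n";
    }
    return $html;
}

echo '<ul id="browser" class="filetree">' . "\n" . build_tree(0, $children) . '</ul>';
```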

    Read the article

  • Should we hire someone who writes C in Perl?

    - by paxdiablo
    One of my colleagues recently interviewed some candidates for a job and one said they had very good Perl experience. Since my colleague didn't know Perl, he asked me for a critique of some code written (off-site) by that potential hire, so I had a look and told him my concerns (the main one was that it originally had no comments and it's not like we gave them enough time). However, the code works so I'm loathe to say no-go without some more input. Another concern is that this code basically looks exactly how I'd code it in C. It's been a while since I did Perl (and I didn't do a lot, I'm more a Python bod for quick scripts) but I seem to recall that it was a much more expressive language than what this guy used. I'm looking for input from real Perl coders, and suggestions for how it could be improved (and why a Perl coder should know that method of improvement). You can also wax lyrical about whether people who write one language in a totally different language should (or shouldn't be hired). I'm interested in your arguments but this question is primarily for a critique of the code. The spec was to successfully process a CSV file as follows and output the individual fields: User ID,Name , Level,Numeric ID pax, Pax Morgan ,admin,0 gt," Turner, George" rubbish,user,1 ms,"Mark \"X-Men\" Spencer","guest user",2 ab,, "user","3" The output was to be something like this (the potential hire's code actually output this): User ID,Name , Level,Numeric ID: [User ID] [Name] [Level] [Numeric ID] pax, Pax Morgan ,admin,0: [pax] [Pax Morgan] [admin] [0] gt," Turner, George " rubbish,user,1: [gt] [ Turner, George ] [user] [1] ms,"Mark \"X-Men\" Spencer","guest user",2: [ms] [Mark "X-Men" Spencer] [guest user] [2] ab,, "user","3": [ab] [] [user] [3] Here is the code they submitted: #!/usr/bin/perl # Open file. open (IN, "qq.in") || die "Cannot open qq.in"; # Process every line. while (<IN>) { chomp; $line = $_; print "$line:\n"; # Process every field in line. while ($line ne "") { # Skip spaces and start with empty field. if (substr ($line,0,1) eq " ") { $line = substr ($line,1); next; } $field = ""; $minlen = 0; # Detect quoted field or otherwise. if (substr ($line,0,1) eq "\"") { $line = substr ($line,1); $pastquote = 0; while ($line ne "") { # Special handling for quotes (\\ and \"). if (length ($line) >= 2) { if (substr ($line,0,2) eq "\\\"") { $field = $field . "\""; $line = substr ($line,2); next; } if (substr ($line,0,2) eq "\\\\") { $field = $field . "\\"; $line = substr ($line,2); next; } } # Detect closing quote. if (($pastquote == 0) && (substr ($line,0,1) eq "\"")) { $pastquote = 1; $line = substr ($line,1); $minlen = length ($field); next; } # Only worry about comma if past closing quote. if (($pastquote == 1) && (substr ($line,0,1) eq ",")) { $line = substr ($line,1); last; } $field = $field . substr ($line,0,1); $line = substr ($line,1); } } else { while ($line ne "") { if (substr ($line,0,1) eq ",") { $line = substr ($line,1); last; } if ($pastquote == 0) { $field = $field . substr ($line,0,1); } $line = substr ($line,1); } } # Strip trailing space. while ($field ne "") { if (length ($field) == $minlen) { last; } if (substr ($field,length ($field)-1,1) eq " ") { $field = substr ($field,0, length ($field)-1); next; } last; } print " [$field]\n"; } } close (IN);
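    On the critique side, most experienced Perl programmers would reach for a CSV module rather than a hand-rolled character-by-character parser. The version below is only a rough sketch using Text::CSV, and the escape_char / allow_loose_quotes / allow_whitespace settings are my guesses at what this particular backslash-escaped, loosely quoted sample needs rather than a verified drop-in replacement:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV;

# Sketch only: options tuned by guesswork for the messy sample above.
my $csv = Text::CSV->new({
    binary             => 1,
    escape_char        => '\\',   # the sample escapes quotes as \" rather than ""
    allow_loose_quotes => 1,      # tolerate stray text after a closing quote
    allow_whitespace   => 1,      # trim space around unquoted fields
}) or die Text::CSV->error_diag;

open my $in, '<', 'qq.in' or die "Cannot open qq.in: $!";
while (my $line = <$in>) {
    chomp $line;
    print "$line:\n";
    if ($csv->parse($line)) {
        print ' [', ($_ // ''), "]\n" for $csv->fields;
    } else {
        warn 'Could not parse: ', $csv->error_input, "\n";
    }
}
close $in;
```

    Whether or not the module copes with every quirk in the sample, the idiomatic points stand: use strict and warnings, lexical filehandles with three-argument open, and a CSV parser or regular expressions instead of repeated substr/length juggling.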

    Read the article

  • JS: using 'var me = this' to reference an object instead of using a global array

    - by Marco Demaio
    The example below, is just an example, I know that I don't need an object to show an alert box when user clicks on div blocks, but it's just a simple example to explain a situation that frequently happens when writing JS code. In the example below I use a globally visible array of objects to keep a reference to each new created HelloObject, in this way events called when clicking on a div block can use the reference in the arry to call the HelloObject's public function hello(). 1st have a look at the code: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"> <title>Test </title> <script type="text/javascript"> /***************************************************** Just a cross browser append event function, don't need to understand this one to answer my question *****************************************************/ function AppendEvent(html_element, event_name, event_function) {if(html_element) {if(html_element.attachEvent) html_element.attachEvent("on" + event_name, event_function); else if(html_element.addEventListener) html_element.addEventListener(event_name, event_function, false); }} /****************************************************** Just a test object ******************************************************/ var helloobjs = []; var HelloObject = function HelloObject(div_container) { //Adding this object in helloobjs array var id = helloobjs.length; helloobjs[id] = this; //Appending click event to show the hello window AppendEvent(div_container, 'click', function() { helloobjs[id].hello(); //THIS WORKS! }); /***************************************************/ this.hello = function() { alert('hello'); } } </script> </head><body> <div id="one">click me</div> <div id="two">click me</div> <script type="text/javascript"> var t = new HelloObject(document.getElementById('one')); var t = new HelloObject(document.getElementById('two')); </script> </body></html> In order to achive the same result I could simply replace the code //Appending click event to show the hello window AppendEvent(div_container, 'click', function() { helloobjs[id].hello(); //THIS WORKS! }); with this code: //Appending click event to show the hello window var me = this; AppendEvent(div_container, 'click', function() { me.hello(); //THIS WORKS TOO AND THE GLOBAL helloobjs ARRAY BECOMES SUPEFLOUS! }); thus would make the helloobjs array superflous. My question is: does this 2nd option in your opinion create memoy leaks on IE or strange cicular references that might lead to browsers going slow or to break??? I don't know how to explain, but coming from a background as a C/C++ coder, doing in this 2nd way sounds like a some sort of circular reference that might break memory at some point. I also read on internet about the IE closures memory leak issue http://jibbering.com/faq/faq_notes/closures.html (I don't know if it was fixed in IE7 and if yes, I hope it does not come out again in IE8). Thanks
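    For what it is worth, the "var me = this" version is the common pattern, and as far as I know it is no worse than the global array; the thing old IE leaked on was the cycle DOM element -> handler closure -> closure scope (which still holds div_container) -> DOM element, and that cycle exists in both versions. The usual mitigation is to null the parameter once the handler is attached. A minimal sketch, reusing the same AppendEvent helper from the question:

```javascript
var HelloObject = function HelloObject(div_container) {
    var me = this;                    // capture the instance; no global registry needed

    this.hello = function () {
        alert('hello');
    };

    AppendEvent(div_container, 'click', function () {
        me.hello();                   // the handler only needs `me`
    });

    // The click closure's scope chain still references div_container, which is
    // what created the classic IE6 DOM <-> closure cycle. Nulling the local
    // reference breaks that cycle now that the handler has been attached.
    div_container = null;
};
```

    This keeps the helloobjs array superfluous while addressing the circular-reference worry the question raises.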

    Read the article

  • Chaining CSS classes in IE6 - Trying to find a jQuery solution?

    - by Mike Baxter
    Right, perhaps I ask the impossible? I consider myself fairly new to JavaScript and jQuery, but that being said, I have written some fairly complex code recently, so I am definitely getting there... however I am now posed with a rather interesting issue at my current freelance contract. The previous web coder has taken a Grid-960 approach to the HTML and as a result has used chained classes to style many of the elements. The example below is typical of what can be found in the code:

    <div class='blocks four-col-1 orange highlight'>Some content</div>

    And in the CSS there will be different declarations for (not actual CSS... but close enough): .blocks {margin-right:10px;} .orange {background-image:url(someimage.jpg);} .highlight {font-weight:bold;} .four-col-1 {width:300px;} and to make matters worse... this is in the CSS: .blocks.orange.highlight {background-color:#dd00ff;}

    Anyone not familiar with this particular bug can read more on it here: http://www.ryanbrill.com/archives/multiple-classes-in-ie/ . It is very real and very annoying. Without wanting to go into the merits of not chaining classes (I told them this, but it is no longer feasible to change their approach... 100 hand-coded pages into a 150-page website, no CMS... sigh), and without the luxury of being able to change the way these blocks are styled... can anyone advise me on the complexity and benefits of my proposed approaches below, or on other options that would adequately solve this problem?

    Potential Solution 1: Using conditional comments, load a jQuery script only for IE6 that reads the class of all divs in a certain section of the page and pushes them to an array; creates empty boxes off screen with only one of the classes applied at a time; reads the applied CSS values for each box; then re-applies these styles to the individual box, somehow bearing in mind the order in which they are called and overwriting conflicting instructions as required.

    Potential Solution 2: Read the class of all divs in a certain section of the page and push them to an array; scan the document for links to style sheets; Ajax-grab the stylesheets and traverse them looking for names matching those in the class array; apply styles as needed.

    Potential Solution 3: Create an IE6-only stylesheet containing the exact style to be applied under a unique name (i.e. class='blocks orange highlight' becomes class='blocks-orange-highlight'); traverse the document in IE6 and convert all spaces in class declarations to hyphens, reapplying classes based on the new style name.

    Summary: Solution 1 allows the people at this company to apply any styles in the future and the script will adjust as needed. However, it does not allow for the chained style to be added, only the individual styles... it is also processor intensive and time consuming, but also the most likely to be converted into a plugin that could be used the world over. Solution 2 is a potential nightmare to code, but again will allow for an endless number of updates without breaking. Solution 3 will require someone at the company to hardcode the new styles every time they make a change, and if they don't, IE6 will break.

    Ironically the site, whilst needing to conform to IE6 in a limited manner, does not need to run without JavaScript (they've made the call... have JS or go away), so consider all jQuery and JS solutions to be 'game on'. Did I mention how much I hate IE6? Anyway... any thoughts or comments would be appreciated.
    I will continue to develop my own solution and if I discover one that can be turned into a jQuery plugin I will post it here in the comments. Regards, Mike.
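    As a starting point for the client-side half of Solution 3, something along these lines (loaded via an IE6-only conditional comment) would add a hyphenated alias class to every multi-class div, so only the hyphenated rules in the IE6 stylesheet have to be maintained by hand; this is a sketch, not a finished plugin:

```javascript
// For every div with more than one class, add an extra class made of all the
// original classes joined with hyphens, e.g. "blocks four-col-1 orange highlight"
// also gets "blocks-four-col-1-orange-highlight" for an IE6-only stylesheet to target.
$(function () {
    $('div[class*=" "]').each(function () {
        var classes = $.trim(this.className).split(/\s+/);
        if (classes.length > 1) {
            $(this).addClass(classes.join('-'));
        }
    });
});
```

    The hand-maintained part of Solution 3 (the hyphenated rules in the IE6 stylesheet) is still needed, but at least the HTML never has to be touched when classes change.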

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc as best I can - putting on my Architect hat. For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc for a business-wide platform that we're developing. The basic structure is a couple of big projects that consists of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proof-of-concepts and R&D projects this new platform is now growing nicely and is already proving itself. Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place. Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor. The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written; unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all. 
Do you think the architect role is a waste of time? Is it at odds with TDD and other practises? Can this mix of different practises be made to work, or should I just be a lot less precious (and in so doing allow a generic platform become useless!)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.

    Read the article

  • Is the Unix Philosophy still relevant in the Web 2.0 world?

    - by David Titarenco
    Introduction Hello, let me give you some background before I begin. I started programming when I was 5 or 6 on my dad's PSION II (some primitive BASIC-like language), then I learned more and more, eventually inching my way up to C, C++, Java, PHP, JS, etc. I think I'm a pretty decent coder. I think most people would agree. I'm not a complete social recluse, but I do stuff like write a virtual machine for fun. I've never taken a computer course in college because I've been in and out for the past couple of years and have only been taking core classes; never having been particularly amazing at school, perhaps I'm missing some basic tenet that most learn in CS101. I'm currently reading Coders at Work and this question is based on some ideas I read in there. A Brief (Fictionalized) Example So a certain sunny day I get an idea. I hire a designer and hammer away at some C/C++ code for a couple of months, soon thereafter releasing silvr.com, a website that transmutes lead into silver. Yep, I started my very own start-up and even gave it a clever web 2.0 name with a vowel missing. Mom and dad are proud. I come up with some numbers I should be seeing after 1, 2, 3, 6, 9, 12 months and set sail. Obviously, my transmuting server isn't perfect, sometimes it segfaults, sometimes it leaks memory. I fix it and keep truckin'. After all, gdb is my best friend. Eventually, I'm at a position where a very small community of people are happily transmuting lead into silver on a semi-regular basis, but they want to let their friends on MySpace know how many grams of lead they transmuted today. And they want to post images of their lead and silver nuggets on flickr. I'm losing out on potential traffic unless I let them log in with their Yahoo, Google, and Facebook accounts. They want webcam support and live cock fighting, merry-go-rounds and Jabberwockies. All these things seem necessary. The Aftermath Of course, I have to re-write the transmuting server! After all, I've been losing money all these months. I need OAuth libraries and OpenID libraries, JSON support, and the only stable Jabberwocky API is for Java. C++ isn't even an option anymore. I'm just one guy! The Java binary just grows and grows since I need some legacy Apache include for the JSON library, and some antiquated Sun dependency for OAuth support. Then I pick up a book like Coders at Work and read what people like jwz say about complexity... I think to myself.. Keep it simple, stupid. I like simple things. I've always loved the Unix Philosophy but even after trying to keep the new server source modular and sleek, I loathe having to write one more line of code. It feels that I'm just piling crap on top of other crap. Maybe I'm naive thinking every piece of software can be simple and clever. Maybe it's just a phase.. or is the Unix Philosophy basically dead when it comes to the current state of (web) development? I'm just kind of disheartened :(

    Read the article

  • CustomListAdapter Problem in Android? Getting ClassCast Exception? How???

    - by Praveen Chandrasekaran
    i want to improve the list view's performance. this is the code for my getView method in my Adapter? public View getView(int arg0, View text_view_name, ViewGroup parent) { try { if (text_view_name == null) { text_view_name = mInflater.inflate( R.layout.bs_content_list_item1, null); text_view_name.setTag(R.id.text1_detail1, text_view_name .findViewById(R.id.text1_detail1)); text_view_name.setTag(R.id.text3_detail1, text_view_name .findViewById(R.id.text3_detail1)); text_view_name.setTag(R.id.eve_img_detail1, text_view_name .findViewById(R.id.eve_img_detail1)); } text1 = (TextView) text_view_name.getTag(R.id.text1_detail1); // text2 = (TextView) text_view_name.getTag(R.id.text2); text3 = (TextView) text_view_name.getTag(R.id.text3_detail1); img = (ImageView) text_view_name.getTag(R.id.eve_img_detail1); text1.setText(VAL1[arg0]); text3.setText(VAL3[arg0]); if (!mBusy) { img_value = new URL(VAL4[arg0]); mIcon11 = BitmapFactory.decodeStream(img_value.openConnection() .getInputStream()); img.setImageBitmap(mIcon11); text_view_name.setTag(R.id.eve_img_detail1, null); } else { img.setImageResource(R.drawable.icon); text_view_name.setTag(R.id.eve_img_detail1, text_view_name .findViewById(R.id.eve_img_detail1)); } } catch (Exception e) { name = "Exception in MultiLine_bar_details1 getView"; Log.v(TAG, name + e); } return text_view_name; } this is the code for scrollstatechanged method: public void onScrollStateChanged(AbsListView view, int scrollState) { switch (scrollState) { case OnScrollListener.SCROLL_STATE_IDLE: try { MultiLine_bar_details1.mBusy = false; int first = view.getFirstVisiblePosition(); int count = view.getCount(); for (int i = 0; i < count; i++) { ImageView t = (ImageView) view.getChildAt(i);// here getting the ClassCastException if (t.getTag(R.id.eve_img_detail1) != null) { MultiLine_bar_details1.img_value = new URL( MultiLine_bar_details1.VAL4[first + i]); MultiLine_bar_details1.mIcon11 = BitmapFactory .decodeStream(MultiLine_bar_details1.img_value .openConnection().getInputStream()); MultiLine_bar_details1.img.setImageBitmap(MultiLine_bar_details1.mIcon11); t.setTag(R.id.eve_img_detail1, null); } } } catch (Exception e) { Log.v(TAG, "Idle" + e); } // mStatus.setText("Idle"); break; case OnScrollListener.SCROLL_STATE_TOUCH_SCROLL: MultiLine_bar_details1.mBusy = true; break; case OnScrollListener.SCROLL_STATE_FLING: MultiLine_bar_details1.mBusy = true; break; } } getting the Exception in Idle state: 05-03 16:47:15.201: VERBOSE/BS_Bars(258): Idlejava.lang.ClassCastException: android.widget.LinearLayout this is very complicated for me to get the output properly. actually i have the listview with custom adapter. that icons makes the listview scroll very slow.i am getting the images for icons from the image urls. upto this(above code) i can improve the scroll performance of my list view. but the image icons are not proper in the corresponding order. its dynamically changing when i scroll the listview.. i refered the commonsware busy coder guide and this blog. My very Big question is "How can we access the single view in scrollstatechanged parameter AbsListView? " what is the problem in it? how to do it better? Any Idea?
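    The ClassCastException comes from the cast itself: AbsListView.getChildAt(i) returns the whole inflated row (the LinearLayout from bs_content_list_item1), not the ImageView inside it, and it only covers the rows currently on screen (getChildCount()), not view.getCount(). A minimal sketch of how the idle-state work might look instead; the method name and the bitmap-loading placeholder are mine, while the R.id values come from the question:

```java
// Sketch: a helper for the existing OnScrollListener, called from the
// SCROLL_STATE_IDLE branch. R.id.eve_img_detail1 is the ImageView id from the
// question's row layout (bs_content_list_item1).
private void bindVisibleIcons(AbsListView view) {
    int first = view.getFirstVisiblePosition();
    int visibleRows = view.getChildCount();      // only the rows currently on screen
    for (int i = 0; i < visibleRows; i++) {
        View row = view.getChildAt(i);           // the inflated row layout (a LinearLayout), not an ImageView
        ImageView icon = (ImageView) row.findViewById(R.id.eve_img_detail1);
        if (icon != null && row.getTag(R.id.eve_img_detail1) != null) {
            // load the bitmap for position (first + i) here, ideally off the UI
            // thread, then call icon.setImageBitmap(...) with the result
            row.setTag(R.id.eve_img_detail1, null);
        }
    }
}
```

    Downloading and decoding the bitmaps on the UI thread is also what makes the scrolling feel slow, so moving that work into an AsyncTask (or similar) per row would help independently of the cast fix.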

    Read the article

< Previous Page | 18 19 20 21 22 23  | Next Page >