Search Results

Search found 14 results on 1 page for 'kaleb brasee'.

Page 1/1

  • Effective methods for managing work tasks? (documenting/remembering/prioritizing)

    - by Kaleb Brasee
    I'm looking for suggestions on effective methods that I can use to document, remember and prioritize tasks at work. Many of these tasks belong to a primary project, but they also exist for independent initiatives. The tasks themselves cover everything from development to documentation to discussions, with varying priorities and deadlines ranging from right away to a few months from now. Historically I have used a notepad to keep track of these tasks, with a star next to an item indicating it needs to be done and a check mark when it's completed. However, as I gain more responsibilities and more things to manage: it becomes harder to make sure I've done everything (because some things get lost 5 pages back), it becomes harder to remember what's most important to do next, and it becomes harder to keep track of dependencies between tasks. Has anyone found methods that have made their tasks easier to manage? I've considered adding some metadata to keep track of what's most important and dependencies, or possibly switching to an app that could automate this (if such a thing exists). Something that's accessible anywhere would definitely be a plus.

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following:

      class Transformer {
        void Process(InputSource input) {
          try {
            var inRecords = _parser.Parse(input.Stream);
            var outRecords = _pipeline.Transform(inRecords);
          } catch (Exception ex) {
            var inner = new ProcessException(input, ex);
            _logger.Error("Unable to parse source " + input.Name, inner);
            throw inner;
          }
        }
      }

      class Pipeline {
        IEnumerable<Result> Transform(IEnumerable<Record> records) {
          // NOTE: no try/catch as I have no useful information to provide
          // at this point in the process
          var results = _algorithm.Process(records);
          // examine and do useful things with results
          return results;
        }
      }

      class Algorithm {
        IEnumerable<Result> Process(IEnumerable<Record> records) {
          var results = new List<Result>();
          foreach (var engine in Engines) {
            foreach (var record in records) {
              try {
                engine.Process(record);
              } catch (Exception ex) {
                var inner = new EngineProcessingException(engine, record, ex);
                _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                throw inner;
              }
            }
          }
        }
      }

      class Engine {
        Result Process(Record record) {
          for (int i=0; i<record.SubRecords.Count; ++i) {
            try {
              Validate(record.subRecords[i]);
            } catch (Exception ex) {
              var inner = new RecordValidationException(record, i, ex);
              _logger.Error("Validation of subrecord {0} failed for record {1}", i, record);
            }
          }
        }
      }

    There are a few important things to notice:

    - A single error at the deepest level causes three log entries (ugly? DoS?)
    - Thrown exceptions contain all important and useful information
    - Logging only happens when failure to do so would cause loss of useful information at a lower level.

    Thoughts and concerns:

    - I don't like having so many log entries for each error
    - I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message.
    - I can log at different levels (e.g., warning, informational)
    - The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced). The information available at higher levels should not be passed to the lower levels.

    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
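
    One alternative to the triple logging above, sketched here in Java rather than the question's C# (the SLF4J logger and the ProcessException wrapper below are illustrative stand-ins, not types from the question's codebase): each layer only wraps and rethrows to add context, and the outermost boundary logs the exception exactly once, with the full cause chain preserved.

      // Minimal sketch: wrap-and-rethrow for context, log once at the boundary.
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      class ProcessException extends RuntimeException {
          ProcessException(String message, Throwable cause) {
              super(message, cause);
          }
      }

      class Engine {
          void process(String record) {
              try {
                  validate(record);
              } catch (RuntimeException ex) {
                  // Attach context here, but do NOT log; the boundary will.
                  throw new ProcessException("validation failed for record '" + record + "'", ex);
              }
          }

          private void validate(String record) {
              if (record.isEmpty()) {
                  throw new IllegalArgumentException("empty record");
              }
          }
      }

      class Transformer {
          private static final Logger LOG = LoggerFactory.getLogger(Transformer.class);
          private final Engine engine = new Engine();

          void process(Iterable<String> records) {
              try {
                  for (String record : records) {
                      engine.process(record);
                  }
              } catch (ProcessException ex) {
                  // Single log entry; the cause chain carries the lower-level detail.
                  LOG.error("Transform failed", ex);
                  throw ex;
              }
          }
      }

    With this shape, one validation failure produces one log entry whose stack trace still shows the wrapped IllegalArgumentException, which addresses the "three entries per error" concern without discarding the lower-level information.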

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it and use it in conjunction with SRP to create classes that do only one thing. And, I try to create many small methods that make it possible to extract out all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through: dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class:

      class PaycheckCalculator {
        // ...
        protected decimal GetOvertimeFactor() { return 2.0M; }
      }

    Now say, for example, that the OvertimeFactor changes to 1.5. Since the above class was designed to be extended, I can easily subclass and return a different OvertimeFactor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question, rather than subclassing and overriding the method in question and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently?

    Update: based on the answers it looks like this contrived example is a poor one for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
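
    For comparison, a rough Java analogue of the extension point being discussed (the class, numbers, and subclass name below are invented for illustration, not taken from the question): the subclass changes the overtime behavior by overriding the protected hook, so the original class itself stays closed for modification.

      // Hypothetical Java analogue of the PaycheckCalculator example above.
      class PaycheckCalculator {
          double grossPay(double hours, double rate) {
              double regular = Math.min(hours, 40) * rate;
              double overtime = Math.max(hours - 40, 0) * rate * getOvertimeFactor();
              return regular + overtime;
          }

          // Extension point: override this instead of editing the class.
          protected double getOvertimeFactor() {
              return 2.0;
          }
      }

      class ReducedOvertimeCalculator extends PaycheckCalculator {
          @Override
          protected double getOvertimeFactor() {
              return 1.5; // behavior changed without touching PaycheckCalculator
          }
      }

    Wiring ReducedOvertimeCalculator in place of PaycheckCalculator in the IoC container is the OCP-friendly counterpart of editing the factor in place, which is exactly the trade-off the question is weighing.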

    Read the article

  • How to Stop the Currently Playing Song When Using One Thread with JLayer?

    - by mcnemesis
    I recently used a solution to the one-thread-at-a-time problem when using JLayer to play mp3 songs in Java. But this solution by Kaleb Brasee didn't hint at how you could stop the player, i.e. how could one then call player.close()? Kaleb's code was:

      Executor executor = Executors.newSingleThreadExecutor();
      executor.execute(new Runnable() {
        public void run() {
          /* do something */
        }
      });

    and this is the code I put in run():

      if (player != null)
        player.close();
      try {
        player = new Player(new FileInputStream(musicD.getPath()));
        player.play();
      } catch (Exception e) {}

    The problem is that much as this solves the problem of keeping the GUI active while the music plays (in only one other thread -- what I'd wanted), I can't start playing another song :-( What could I do?
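
    A sketch of one way to handle this (assumptions: JLayer's javazoom.jl.player.Player, whose close() makes a blocked play() call return, and the same single-threaded executor as above; the JukeBox class and field names are invented): keep the current Player in a field the GUI thread can reach, and close it from the GUI thread before submitting the next song, so the worker blocked inside play() is released first.

      import java.io.FileInputStream;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import javazoom.jl.player.Player;

      class JukeBox {
          private final ExecutorService executor = Executors.newSingleThreadExecutor();
          private volatile Player player; // reachable from the GUI thread

          // Called from the GUI thread.
          void play(final String path) {
              stop(); // release the worker currently blocked in play()
              executor.execute(new Runnable() {
                  public void run() {
                      try {
                          player = new Player(new FileInputStream(path));
                          player.play(); // blocks this worker until finished or closed
                      } catch (Exception e) {
                          e.printStackTrace();
                      }
                  }
              });
          }

          // Also called from the GUI thread (e.g., a Stop button).
          void stop() {
              Player current = player;
              if (current != null) {
                  current.close();
              }
          }
      }

    The key difference from the run() body in the question is that close() is issued from the GUI thread rather than from the queued task; with a single worker, a queued task can never run until the blocking play() call has already been released.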

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
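
    If the dirty-read idea is pursued, the isolation level can be lowered just for the sorted query; a minimal JDBC sketch (the entry table, its columns, and the EntryLister class are invented for illustration, and Grails/GORM would normally sit in front of this):

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      class EntryLister {
          // Hypothetical query; the schema is not from the question.
          private static final String TOP_ENTRIES =
              "SELECT id, title, votes FROM entry ORDER BY votes DESC LIMIT ?";

          void printTopEntries(Connection conn, int limit) throws Exception {
              // Let the sorted select read rows whose vote counts are still being
              // updated, trading momentary accuracy for less lock contention.
              conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
              try (PreparedStatement ps = conn.prepareStatement(TOP_ENTRIES)) {
                  ps.setInt(1, limit);
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          System.out.println(rs.getString("title") + ": " + rs.getInt("votes"));
                      }
                  }
              }
          }
      }

    Whether this is even necessary depends on the storage engine: MVCC-based engines such as InnoDB serve ordinary selects as non-locking consistent reads, so the select-blocks-update contention the question worries about may not materialize in the first place.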

    Read the article

  • External Hard-Drive Randomly Ejects; Stays On

    - by Kaleb F.
    My 250GB I/O Magic USB external hard-drive randomly disconnects / ejects from the computer after between 2 and 30 minutes of use. When this happens, the blinking activity light on the front of the hdd turns off; however, the disks can still be heard spinning. Unplugging & replugging the USB cable does not reconnect the device and the activity light remains unlit. The only way to continue using it is to flip the hdd's power switch off and then back on. The hard-drive was formatted with an MBR partition table and 2 NTFS volumes. I recently tried switching to GUID with two Mac OS Extended (Journaled) volumes, but the problem remains. This error occurs with my new MacBook Pro with Snow Leopard as well as with my DELL E520 with Windows 7 Ultimate.

    Read the article

  • gunit syntax for tree walker with a flat list of nodes

    - by Kaleb Pederson
    Here's a simple gunit test for a portion of my tree grammar which generates a flat list of nodes: objectOption walks objectOption: <<one:"value">> -> (one "value") Although you define a tree in ANTLR's rewrite syntax using a caret (i.e. ^(ROOT child...)), gunit matches trees without the caret, so the above represents a tree and it's not surprising that it fails: it's a flat list of nodes and not a tree. This results in a test failure: 1 failures found: test2 (objectOption walks objectOption, line17) - expected: (one \"value\") actual: one \"value\" Another option which seems intuitive is to leave off the parenthesis, like this: objectOption walks objectOption: <<one:"value">> -> one "value" But gunit doesn't like this syntax. It seems to result in a parse failure in the gunit grammar: line 17:20 no viable alternative at input 'one' line 17:24 missing ':' at 'value' line 0:-1 no viable alternative at input '<EOF>' java.lang.NullPointerException at org.antlr.gunit.OutputTest.getExpected(OutputTest.java:65) at org.antlr.gunit.gUnitExecutor.executeTests(gUnitExecutor.java:245) ... What is the correct way to match a flat tree?

    Read the article

  • Javassist failure in hibernate: invalid constant type: 60

    - by Kaleb Pederson
    I'm creating a cli tool to manage an existing application. Both the application and the tests build fine and run fine but despite that I receive a javassist failure when running my cli tool that exists within the jar: INFO: Bytecode provider name : javassist ... INFO: Hibernate EntityManager 3.5.1-Final Exception in thread "main" javax.persistence.PersistenceException: Unable to configure EntityManagerFactory at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:371) at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:55) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:48) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:32) ... at com.sophware.flexipol.admin.AdminTool.<init>(AdminTool.java:40) at com.sophware.flexipol.admin.AdminTool.main(AdminTool.java:69) Caused by: java.lang.RuntimeException: Error while reading file:flexipol-jar-with-dependencies.jar at org.hibernate.ejb.packaging.NativeScanner.getClassesInJar(NativeScanner.java:131) at org.hibernate.ejb.Ejb3Configuration.addScannedEntries(Ejb3Configuration.java:467) at org.hibernate.ejb.Ejb3Configuration.addMetadataFromScan(Ejb3Configuration.java:457) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:347) ... 11 more Caused by: java.io.IOException: invalid constant type: 60 at javassist.bytecode.ConstPool.readOne(ConstPool.java:1027) at javassist.bytecode.ConstPool.read(ConstPool.java:970) at javassist.bytecode.ConstPool.<init>(ConstPool.java:127) at javassist.bytecode.ClassFile.read(ClassFile.java:693) at javassist.bytecode.ClassFile.<init>(ClassFile.java:85) at org.hibernate.ejb.packaging.AbstractJarVisitor.checkAnnotationMatching(AbstractJarVisitor.java:243) at org.hibernate.ejb.packaging.AbstractJarVisitor.executeJavaElementFilter(AbstractJarVisitor.java:209) at org.hibernate.ejb.packaging.AbstractJarVisitor.addElement(AbstractJarVisitor.java:170) at org.hibernate.ejb.packaging.FileZippedJarVisitor.doProcessElements(FileZippedJarVisitor.java:119) at org.hibernate.ejb.packaging.AbstractJarVisitor.getMatchingEntries(AbstractJarVisitor.java:146) at org.hibernate.ejb.packaging.NativeScanner.getClassesInJar(NativeScanner.java:128) ... 14 more Since I know the jar is fine as the unit and integration tests run against it, I thought it might be a problem with javassist, so I tried cglib. The bytecode provider then shows as cglib but I still get the exact same stack trace with javassist present in it. cglib is definitely in the classpath: $ unzip -l flexipol-jar-with-dependencies.jar | grep cglib | wc -l 383 I've tried with both hibernate 3.4 and 3.5 and get the exact same error. Is this a problem with javassist?
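
    One way to narrow this down is to run javassist's class-file parser over every .class entry in the jar outside of Hibernate, so the entry that trips ConstPool is identified directly (a diagnostic sketch, assuming the same javassist version that Hibernate pulls in is on the classpath):

      import java.io.BufferedInputStream;
      import java.io.DataInputStream;
      import java.util.Enumeration;
      import java.util.jar.JarEntry;
      import java.util.jar.JarFile;
      import javassist.bytecode.ClassFile;

      public class JarClassCheck {
          public static void main(String[] args) throws Exception {
              JarFile jar = new JarFile(args[0]); // e.g. the jar-with-dependencies
              for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
                  JarEntry entry = e.nextElement();
                  if (!entry.getName().endsWith(".class")) {
                      continue;
                  }
                  try (DataInputStream in = new DataInputStream(
                          new BufferedInputStream(jar.getInputStream(entry)))) {
                      new ClassFile(in); // same parse Hibernate's scanner performs
                  } catch (Exception ex) {
                      System.out.println(entry.getName() + ": " + ex.getMessage());
                  }
              }
              jar.close();
          }
      }

    Whatever the underlying cause (a corrupt or merged entry produced while assembling the jar-with-dependencies, or a class-file format the bundled javassist cannot read), the scan at least points to the specific entry to examine.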

    Read the article

  • IoC/DI in the face of winforms and other generated code

    - by Kaleb Pederson
    When using dependency injection (DI) and inversion of control (IoC) objects will typically have a constructor that accepts the set of dependencies required for the object to function properly. For example, if I have a form that requires a service to populate a combo box you might see something like this: // my files public interface IDataService { IList<MyData> GetData(); } public interface IComboDataService { IList<MyComboData> GetComboData(); } public partial class PopulatedForm : BaseForm { private IDataService service; public PopulatedForm(IDataService service) { //... InitializeComponent(); } } This works fine at the top level, I just use my IoC container to resolve the dependencies: var form = ioc.Resolve<PopulatedForm>(); But in the face of generated code, this gets harder. In winforms a second file composing the rest of the partial class is generated. This file references other components, such as custom controls, and uses no-args constructors to create such controls: // generated file: PopulatedForm.Designer.cs public partial class PopulatedForm { private void InitializeComponent() { this.customComboBox = new UserCreatedComboBox(); // customComboBox has an IComboDataService dependency } } Since this is generated code, I can't pass in the dependencies and there's no easy way to have my IoC container automatically inject all the dependencies. One solution is to pass in the dependencies of each child component to PopulatedForm even though it may not need them directly, such as with the IComboDataService required by the UserCreatedComboBox. I then have the responsibility to make sure that the dependencies are provided through various properties or setter methods. Then, my PopulatedForm constructor might look as follows: public PopulatedForm(IDataService service, IComboDataService comboDataService) { this.service = service; InitializeComponent(); this.customComboBox.ComboDataService = comboDataService; } Another possible solution is to have the no-args constructor to do the necessary resolution: public class UserCreatedComboBox { private IComboDataService comboDataService; public UserCreatedComboBox() { if (!DesignMode && IoC.Instance != null) { comboDataService = Ioc.Instance.Resolve<IComboDataService>(); } } } Neither solution is particularly good. What patterns and alternatives are available to more capably handle dependency-injection in the face of generated code? I'd love to see both general solutions, such as patterns, and ones specific to C#, Winforms, and Autofac.
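
    The first option described above (the owning form takes its children's dependencies and pushes them in after the generated initialization runs) looks roughly like this in a Java/Swing analogue, since GUI-builder-generated initComponents() code raises the same problem; every name below is invented for illustration:

      import java.util.List;
      import javax.swing.JComboBox;
      import javax.swing.JFrame;

      interface ComboDataService {
          List<String> getComboData();
      }

      // Stand-in for a user-created control that the generated code instantiates
      // with a no-args constructor.
      class UserCreatedComboBox extends JComboBox<String> {
          void setComboDataService(ComboDataService service) {
              removeAllItems();
              for (String item : service.getComboData()) {
                  addItem(item);
              }
          }
      }

      class PopulatedForm extends JFrame {
          private UserCreatedComboBox customComboBox;

          // Constructor injection at the top; the form forwards what its children need.
          PopulatedForm(ComboDataService comboDataService) {
              initComponents();                                      // generated: no-args construction only
              customComboBox.setComboDataService(comboDataService);  // setter injection after the fact
          }

          // Normally produced by the GUI builder and not edited by hand.
          private void initComponents() {
              customComboBox = new UserCreatedComboBox();
              add(customComboBox);
          }
      }

    This keeps the container out of the child controls entirely, at the cost the question already notes: the form's constructor accumulates dependencies it only forwards.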

    Read the article

  • Exception declared on ANTLR grammar rule ignored

    - by Kaleb Pederson
    I have a tree parser that's doing semantic analysis on the AST generated by my parser. It has a rule declared as follows:

      transitionDefinition throws WorkflowStateNotFoundException:
        /* ... */

    This compiles just fine and matches the rule syntax at the ANTLR Wiki, but my exception is never declared on the generated method, so the Java compiler complains about undeclared exceptions. ./tool/src/main/antlr3/org/antlr/grammar/v3/ANTLRv3.g shows that it's building a tree (but I'm not actually positive whether it's the v2 or v3 grammar that ANTLR 3.2 is using):

      throwsSpec
        : 'throws' id ( ',' id )* -> ^('throws' id+)
        ;

    I know I can make it a runtime exception, but I'd like to use my exception hierarchy. Am I doing something wrong or should that syntax work?

    Read the article

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test: public static void main(String [] files) { for (String file : files) { File f = new File(file); if (f.exists()) { System.err.println(file + " exists"); continue; } try { // find out the current time, I would hope to assume that the last-modified // time on the file will definitely be later than this System.out.println("-----------------------------------------"); long time = System.currentTimeMillis(); // create the file System.out.println("Creating " + file + " at " + time); f.createNewFile(); // let's see what the timestamp actually is (I've only seen it <time) System.out.println(file + " was last modified at: " + f.lastModified()); // well, ok, what if I explicitly set it to time? f.setLastModified(time); System.out.println("Updated modified time on " + file + " to " + time + " with actual " + f.lastModified()); } catch (IOException e) { System.err.println("Unable to create file"); } } } And here's what I get for output: ----------------------------------------- Creating test.7 at 1272324597956 test.7 was last modified at: 1272324597000 Updated modified time on test.7 to 1272324597956 with actual 1272324597000 ----------------------------------------- Creating test.8 at 1272324597957 test.8 was last modified at: 1272324597000 Updated modified time on test.8 to 1272324597957 with actual 1272324597000 ----------------------------------------- Creating test.9 at 1272324597957 test.9 was last modified at: 1272324597000 Updated modified time on test.9 to 1272324597957 with actual 1272324597000 The result is a race condition: JPoller records time of last check as xyz...123 File created at xyz...456 File last-modified timestamp actually reads xyz...000 JPoller looks for new/updated files with timestamp greater than xyz...123 JPoller ignores newly added file because xyz...000 is less than xyz...123 I pull my hair out for a while I tried digging into the code but both lastModified() and createNewFile() eventually resolve to native calls so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds? NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
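
    Given that the observed lastModified() values are truncated to whole seconds, one workaround is to compare at the same granularity, i.e. truncate the poller's reference time before comparing (a sketch of that idea, not JPoller's actual code; the file name is arbitrary):

      import java.io.File;

      public class ModifiedSinceCheck {
          /**
           * Treats the file as modified since lastCheckMillis, tolerating
           * filesystems that store mtime with one-second granularity.
           */
          static boolean modifiedSince(File f, long lastCheckMillis) {
              long mtime = f.lastModified();                              // may be truncated to a whole second
              long lastCheckTruncated = (lastCheckMillis / 1000) * 1000;  // compare at the same granularity
              return mtime >= lastCheckTruncated;
          }

          public static void main(String[] args) throws Exception {
              File f = new File("test.tmp");
              long before = System.currentTimeMillis();
              f.createNewFile();
              System.out.println("raw comparison:       " + (f.lastModified() >= before));
              System.out.println("truncated comparison: " + modifiedSince(f, before));
              f.delete();
          }
      }

    The trade-off is that a file touched in the same second as the previous poll can be reported twice; for a poller, seeing an event twice is usually a safer failure mode than missing it entirely.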

    Read the article

  • Eager/Lazy loaded member always empty with JPA one-to-many relationship

    - by Kaleb Pederson
    I have two entities, a User and Role with a one-to-many relationship from user to role. Here's what the tables look like:

      mysql> select * from User;
      +----+-------+----------+
      | id | name  | password |
      +----+-------+----------+
      |  1 | admin | admin    |
      +----+-------+----------+
      1 row in set (0.00 sec)

      mysql> select * from Role;
      +----+----------------------+---------------+----------------+
      | id | description          | name          | summary        |
      +----+----------------------+---------------+----------------+
      |  1 | administrator's role | administrator | Administration |
      |  2 | editor's role        | editor        | Editing        |
      +----+----------------------+---------------+----------------+
      2 rows in set (0.00 sec)

    And here's the join table that was created:

      mysql> select * from User_Role;
      +---------+----------+
      | User_id | roles_id |
      +---------+----------+
      |       1 |        1 |
      |       1 |        2 |
      +---------+----------+
      2 rows in set (0.00 sec)

    And here's the subset of orm.xml that defines the tables and relationships:

      <entity class="User" name="User">
        <table name="User" />
        <attributes>
          <id name="id">
            <generated-value strategy="AUTO" />
          </id>
          <basic name="name">
            <column name="name" length="100" unique="true" nullable="false"/>
          </basic>
          <basic name="password">
            <column length="255" nullable="false" />
          </basic>
          <one-to-many name="roles" fetch="EAGER" target-entity="Role" />
        </attributes>
      </entity>
      <entity class="Role" name="Role">
        <table name="Role" />
        <attributes>
          <id name="id">
            <generated-value strategy="AUTO"/>
          </id>
          <basic name="name">
            <column name="name" length="40" unique="true" nullable="false"/>
          </basic>
          <basic name="summary">
            <column name="summary" length="100" nullable="false"/>
          </basic>
          <basic name="description">
            <column name="description" length="255"/>
          </basic>
        </attributes>
      </entity>

    Yet, despite that, when I retrieve the admin user, I get back an empty collection. I'm using Hibernate as my JPA provider and it shows the following debug SQL:

      select user0_.id as id8_, user0_.name as name8_, user0_.password as password8_
      from User user0_
      where user0_.name=? limit ?

    When the one-to-many mapping is lazy loaded, that's the only query that's made. This correctly retrieves the one admin user. I changed the relationship to use eager loading and then the following query is made in addition to the above:

      select roles0_.User_id as User1_1_, roles0_.roles_id as roles2_1_, role1_.id as id9_0_,
             role1_.description as descript2_9_0_, role1_.name as name9_0_, role1_.summary as summary9_0_
      from User_Role roles0_
      left outer join Role role1_ on roles0_.roles_id=role1_.id
      where roles0_.User_id=?

    Which results in the following results:

      +----------+-----------+--------+----------------------+---------------+----------------+
      | User1_1_ | roles2_1_ | id9_0_ | descript2_9_0_       | name9_0_      | summary9_0_    |
      +----------+-----------+--------+----------------------+---------------+----------------+
      |        1 |         1 |      1 | administrator's role | administrator | Administration |
      |        1 |         2 |      2 | editor's role        | editor        | Editing        |
      +----------+-----------+--------+----------------------+---------------+----------------+
      2 rows in set (0.00 sec)

    Hibernate obviously knows about the roles, yet getRoles() still returns an empty collection. Hibernate also recognized the relationship sufficiently to put the data in the first place. What problems can cause these symptoms?
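
    Not offered as the fix, but one way to rule out a mismatch between the implicit mapping and the actual schema is to spell the join table out; the annotation equivalent of the orm.xml above, with the User_Role columns named explicitly, would look roughly like this (getter/setter boilerplate trimmed):

      import java.util.HashSet;
      import java.util.Set;
      import javax.persistence.Entity;
      import javax.persistence.FetchType;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import javax.persistence.JoinColumn;
      import javax.persistence.JoinTable;
      import javax.persistence.OneToMany;

      @Entity
      public class User {
          @Id
          @GeneratedValue
          private Long id;

          private String name;
          private String password;

          // Join table and columns named explicitly, mirroring User_Role above,
          // instead of relying on the provider's derived defaults.
          @OneToMany(fetch = FetchType.EAGER)
          @JoinTable(name = "User_Role",
                     joinColumns = @JoinColumn(name = "User_id"),
                     inverseJoinColumns = @JoinColumn(name = "roles_id"))
          private Set<Role> roles = new HashSet<Role>();

          public Set<Role> getRoles() {
              return roles;
          }
      }

      @Entity
      class Role {
          @Id
          @GeneratedValue
          private Long id;
          private String name;
          private String summary;
          private String description;
      }

    If the explicit mapping behaves identically, the column naming can be ruled out and the cause lies elsewhere.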

    Read the article

  • ArrayList.Sort should be a stable sort with an IComparer but is not?

    - by Kaleb Pederson
    A stable sort is a sort that maintains the relative ordering of elements with the same value. The docs on ArrayList.Sort say that when an IComparer is provided the sort is stable: If comparer is set to null, this method performs a comparison sort (also called an unstable sort); that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal. To perform a stable sort, you must implement a custom IComparer interface. Unless I'm missing something, the following testcase shows that ArrayList.Sort is not using a stable sort: internal class DisplayOrdered { public int ID { get; set; } public int DisplayOrder { get; set; } public override string ToString() { return string.Format("ID: {0}, DisplayOrder: {1}", ID, DisplayOrder); } } internal class DisplayOrderedComparer : IComparer { public int Compare(object x, object y) { return ((DisplayOrdered)x).DisplayOrder - ((DisplayOrdered)y).DisplayOrder; } } [TestFixture] public class ArrayListStableSortTest { [Test] public void TestWeblinkCallArrayListIsSortedUsingStableSort() { var call1 = new DisplayOrdered {ID = 1, DisplayOrder = 0}; var call2 = new DisplayOrdered {ID = 2, DisplayOrder = 0}; var call3 = new DisplayOrdered {ID = 3, DisplayOrder = 2}; var list = new ArrayList {call1, call2, call3}; list.Sort(new DisplayOrderedComparer()); // expected order (by ID): 1, 2, 3 (because the DisplayOrder // is equal for ID's 1 and 2, their ordering should be // maintained for a stable sort.) Assert.AreEqual(call1, list[0]); // Actual: ID=2 ** FAILS Assert.AreEqual(call2, list[1]); // Actual: ID=1 Assert.AreEqual(call3, list[2]); // Actual: ID=3 } } Am I missing something? If not, would this be a documentation bug or a library bug? Apparently using an OrderBy in Linq gives a stable sort.
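
    Independent of how the .NET documentation is read, any unstable sort can be made stable by decorating elements with their original position and using it as a tie-breaker; a Java sketch of that technique, mirroring the DisplayOrdered test data above (Java's Collections.sort happens to be stable already, so the tie-breaker here purely illustrates the trick):

      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.Comparator;
      import java.util.List;

      class DisplayOrdered {
          final int id;
          final int displayOrder;

          DisplayOrdered(int id, int displayOrder) {
              this.id = id;
              this.displayOrder = displayOrder;
          }
      }

      public class StableByIndex {
          public static void main(String[] args) {
              List<DisplayOrdered> list = new ArrayList<DisplayOrdered>();
              list.add(new DisplayOrdered(1, 0));
              list.add(new DisplayOrdered(2, 0));
              list.add(new DisplayOrdered(3, 2));

              // Remember each element's original position (reference identity is
              // enough here because equals() is not overridden).
              final List<DisplayOrdered> original = new ArrayList<DisplayOrdered>(list);

              Collections.sort(list, new Comparator<DisplayOrdered>() {
                  public int compare(DisplayOrdered a, DisplayOrdered b) {
                      int byOrder = Integer.compare(a.displayOrder, b.displayOrder);
                      // Fall back to the original index when the keys tie, forcing stability.
                      return byOrder != 0
                          ? byOrder
                          : Integer.compare(original.indexOf(a), original.indexOf(b));
                  }
              });

              for (DisplayOrdered d : list) {
                  System.out.println("ID: " + d.id + ", DisplayOrder: " + d.displayOrder); // 1, 2, 3
              }
          }
      }

    The same decorate-with-index trick applies unchanged to ArrayList.Sort with a custom IComparer, which is the usual workaround when the underlying sort turns out not to be stable.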

    Read the article
