Search Results

Search found 125 results on 5 pages for 'smells'.


  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers but am yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances though, it seems as though each instance should be completely decoupled from all other instances. i.e. If content is uploaded to the web server's local filesystem from a custom CMS backend then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me...should I be skeptical about this? I'm a little worried because my (non-technical) boss who ultimately makes the decisions is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up to me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • How to (re)enable the "New" context menu items for an administrator when right-clicking in a folder and selecting New > X?

    - by Metro Smurf
    I just migrated from XP x86 to Win7 x64 (clean install). I had a couple of data drives in my XP x86 system that I physically moved to my Win7 x64 system. When browsing a directory on any of the transferred drives, the only option available in the 'new' context menu is "Folder", i.e., Right-Click inside a folder > New > Folder (this is similar behavior for Win7 when using the context menu in c:\Program Files). However, whenever creating a new folder within any of the directories, all the context menu new items are available within the new folder. Steps I've taken that have failed to add the new context menu items: removing all security permissions from a directory and sub-directories and replacing them with new permissions, as well as removing inheritable permissions from the parent; taking explicit ownership of a directory and sub-directories; combining the above two. Steps I've taken that have succeeded in adding the new context menu items: adding the "Everyone" group to the drive and giving the group explicit "Modify" privileges. Giving the "Everyone" group explicit privileges smells wrong. I'm an administrator on my system; why should I have to add the "Everyone" group as well? Adding my username to the drive and giving full permissions also works. Again, since I'm an administrator on my system and the administrators group already has full control of the drive/directories/folders, why should I have to explicitly add my user name to the security permissions? Finally, the question: is it possible to have the New Item context menu show all available options by default without having to explicitly add the Everyone group or a specific user name to the security permissions? I suspect that the option may not be available unless the username is explicitly added to the security permissions. Of note: I've seen the registry hacks for updating the new items context menu; my preference is to avoid such hacks and return the functionality to the expected behavior an administrator should have.

    Read the article

  • Executing a git command using remote PowerShell results in a NativeCommandError

    - by user204777
    I am getting an error while executing a remote PowerShell script. From my local machine I am running a PowerShell script that uses Invoke-Command to cd into a directory on a remote Amazon Windows Server instance, and a subsequent Invoke-Command to execute script that lives on that server instance. The script on the server is trying to git clone a repository from GitHub. I can successfully do things in the server script like "ls" or even "git --version". However git clone, git pull, etc. result in the following error: Cloning into 'MyRepo'... + CategoryInfo : NotSpecified: (Cloning into 'MyRepo'...:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError This is my first time using PowerShell or a Windows Server. Can anyone provide some direction on this problem. The client script: $s = new-pssession -computername $server -credential $user invoke-command -session $s -scriptblock { cd C:\Repos; ls } invoke-command -session $s -scriptblock { param ($repo, $branch) & '.\clone.ps1' -repository $repo -branch $branch} -ArgumentList $repository, $branch exit-pssession The server script: param([string]$repository = "repository", [string]$branch = "branch") git --version start-process -FilePath git -ArgumentList ("clone", "-b $branch https://github.com/MyGithub/$repository.git") -Wait I've changed the server script to use start process and it is no longer throwing the exception. It creates the new repository directory and the .git directory but doesn't write any of the files from the github repository. This smells like a permissions issue. Once again invoking the script manually (remote desktop into the amazon box and execute it from powershell) works like a charm.
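
    A likely explanation, for what it's worth: git writes its progress messages to stderr, and PowerShell remoting surfaces any native stderr output as a NativeCommandError record even when the command actually succeeded. A minimal sketch of one common workaround (the -q flag and the 2>&1 redirection are assumptions, not taken from the original scripts) is to merge stderr into stdout and rely on the exit code instead:

        # clone.ps1 (sketch): quieten git and merge stderr into stdout so remoting
        # does not wrap progress text in an error record; fail on the exit code.
        param([string]$repository = "repository", [string]$branch = "branch")
        git --version
        & git clone -q -b $branch "https://github.com/MyGithub/$repository.git" 2>&1 |
            ForEach-Object { "$_" }
        if ($LASTEXITCODE -ne 0) { throw "git clone failed with exit code $LASTEXITCODE" }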

    Read the article

  • Clean Code: A Handbook of Agile Software Craftsmanship – book review

    - by DigiMortal
    Writing code that is easy to read and test is not something that is easy to achieve. Unfortunately there are still far too many programming students who write awful spaghetti code after graduating. But there is one really good book that helps you raise your code to a new level – your code will also become a communication tool for you and your fellow programmers. “Clean Code: A Handbook of Agile Software Craftsmanship” by Robert C. Martin is an excellent book that helps you start writing easily readable code. Of course, you are the one who has to learn and practice, but with this book you have a very good guide that keeps you going in the right direction. You can start writing better code while you read this book, and you can do it right in your current projects – you don’t have to create a new guestbook or some other simple application to start practicing. Take the project you are working on and start making it better! My special thanks to Robert C. Martin I want to say my special thanks to Robert C. Martin for this book. There are many books that teach you different things, and usually there is a noticeable learning curve to climb before you start getting results. There are many books that show you the direction to go and then leave you alone to figure out how to achieve all that stuff you just read about. Clean Code gives you a lot more – the mental tools to use so you can make your way to clean code, confident that you will soon get there. I read books as much as I have time for. Clean Code is a top-level book for developers who have to write working code. Before anything else, take Clean Code and read it. You will never regret your decision. I promise. Fragment of editorial review “Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way. What kind of work will you be doing? You’ll be reading code—lots of code. And you will be challenged to think about what’s right about that code, and what’s wrong with it. More importantly, you will be challenged to reassess your professional values and your commitment to your craft. Readers will come away from this book understanding how to tell the difference between good and bad code; how to write good code and how to transform bad code into good code; how to create good names, good functions, good objects, and good classes; how to format code for maximum readability; how to implement complete error handling without obscuring code logic; and how to unit test and practice test-driven development. This book is a must for any developer, software engineer, project manager, team lead, or systems analyst with an interest in producing better code.” Table of contents: Clean Code; Meaningful Names; Functions; Comments; Formatting; Objects and Data Structures; Error Handling; Boundaries; Unit Tests; Classes; Systems; Emergence; Concurrency; Successive Refinement; JUnit Internals; Refactoring SerialDate; Smells and Heuristics; Appendix A: Concurrency II; org.jfree.date.SerialDate; Cross References of Heuristics; Epilogue; Index

    Read the article

  • Normalisation and 'Anima notitia copia' (Soul of the Database)

    - by Phil Factor
    (A Guest Editorial for Simple-Talk) The other day, I was staring  at the sys.syslanguages  table in SQL Server with slightly-raised eyebrows . I’d just been reading Chris Date’s  interesting book ‘SQL and Relational Theory’. He’d made the point that you’re not necessarily doing relational database operations by using a SQL Database product.  The same general point was recently made by Dino Esposito about ASP.NET MVC.  The use of ASP.NET MVC doesn’t guarantee you a good application design: It merely makes it possible to test it. The way I’d describe the sentiment in both cases is ‘you can hit someone over the head with a frying-pan but you can’t call it cooking’. SQL enables you to create relational databases. However,  even if it smells bad, it is no crime to do hideously un-relational things with a SQL Database just so long as it’s necessary and you can tell the difference; not only that but also only if you’re aware of the risks and implications. Naturally, I’ve never knowingly created a database that Codd would have frowned at, but around the edges are interfaces and data feeds I’ve written  that have caused hissy fits amongst the Normalisation fundamentalists. Part of the problem for those who agonise about such things  is the misinterpretation of Atomicity.  An atomic value is one for which, in the strange virtual universe you are creating in your database, you don’t have any interest in any of its component parts.  If you aren’t interested in the electrons, neutrinos,  muons,  or  taus, then  an atom is ..er.. atomic. In the same way, if you are passed a JSON string or XML, and required to store it in a database, then all you need to do is to ask yourself, in your role as Anima notitia copia (Soul of the database) ‘have I any interest in the contents of this item of information?’.  If the answer is ‘No!’, or ‘nequequam! Then it is an atomic value, however complex it may be.  After all, you would never have the urge to store the pixels of images individually, under the misguided idea that these are the atomic values would you?  I would, of course,  ask the ‘Anima notitia copia’ rather than the application developers, since there may be more than one application, and the applications developers may be designing the application in the absence of full domain knowledge, (‘or by the seat of the pants’ as the technical term used to be). If, on the other hand, the answer is ‘sure, and we want to index the XML column’, then we may be in for some heavy XML-shredding sessions to get to store the ‘atomic’ values and ensure future harmony as the application develops. I went back to looking at the sys.syslanguages table. It has a months column with the months in a delimited list January,February,March,April,May,June,July,August,September,October,November,December This is an ordered list. Wicked? I seem to remember that this value, like shortmonths and days, is treated as a ‘thing’. It is merely passed off to an external  C++ routine in order to format a date in a particular language, and never accessed directly within the database. As far as the database is concerned, it is an atomic value.  There is more to normalisation than meets the eye.

    Read the article

  • Permission denied: .hg\store\lock

    - by harpo
    This smells like a serverfault question, yet there are many similar questions here. Your call. I'm setting up Mercurial over IIS6, and thanks to a number of detailed blogs, it's working fine — almost. I can browse and clone the repositories fine, but this is what happens when I try to push: D:\sample2>hg push pushing to http://localhost/hg/sample2 searching for changes abort: HTTP Error 500: Permission denied: .hg\store\lock First of all, there is no such file or folder. Second, the App Pool's logon has total permission on the repository's parent directory, with these inherited ad infinitum. The repository is located on another logical drive (on the same machine), and if I push to it directly, that also works: D:\sample2>hg push e:\hg\sample2 pushing to e:\hg\sample2 searching for changes adding changesets adding manifests adding file changes added 1 changesets with 1 changes to 1 files If I change the password in my hgrc, the message indicates a failed authorization, so I believe that's working. I've been fighting this for a couple of days, so any leads would be helpful. Thanks!

    Read the article

  • How can I detect message boxes popping up in another process?

    - by Frerich Raabe
    I'd like to execute some code whenever a (any!) message box (as spawned by the MessageBox Function) is shown in another process. I didn't start the process I'm monitoring. I can think of three approaches: Install a global CBT Hook procedure which tells me whenever a window is created on the desktop. Then, check whether the window belongs to the process I'm monitoring and whether the class name is #32770 (which is the class name of dialogs according to the About Window Classes page at the MSDN). This would probably work, but it would pull the DLL which contains the hook procedure into virtually every process on the desktop, and the hook procedure gets called a lot. It smells like a potential perfomance problem. Try to subclass the #32770 system window class (is this possible at all?) and look for WM_CREATE messages in my custom window procedure. Intercept the MessageBox Function API call (even though the remote process is running already!) and call my code from the hook function. So far, I only know that the first idea is feasible, but it seems really inefficient. Can anybody think of a simpler solution than that to this problem?
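
    A fourth possibility, sketched roughly below (assumed constants and no real error handling): an out-of-context WinEvent hook. SetWinEventHook with WINEVENT_OUTOFCONTEXT needs no DLL injection, can be restricted to the monitored process id, and EVENT_SYSTEM_DIALOGSTART should fire when a dialog - including one created by the MessageBox function - is shown:

        #include <windows.h>
        #include <cstdio>
        #include <cstring>

        static void CALLBACK OnWinEvent(HWINEVENTHOOK, DWORD event, HWND hwnd,
                                        LONG idObject, LONG, DWORD, DWORD)
        {
            if (event == EVENT_SYSTEM_DIALOGSTART && idObject == OBJID_WINDOW) {
                char cls[32] = {};
                GetClassNameA(hwnd, cls, sizeof(cls));
                if (std::strcmp(cls, "#32770") == 0)   // standard dialog window class
                    std::printf("dialog shown: HWND=%p\n", static_cast<void*>(hwnd));
            }
        }

        int main()
        {
            DWORD targetPid = 1234;   // assumption: pid of the process being monitored
            HWINEVENTHOOK hook = SetWinEventHook(
                EVENT_SYSTEM_DIALOGSTART, EVENT_SYSTEM_DIALOGSTART,
                nullptr, OnWinEvent, targetPid, 0,
                WINEVENT_OUTOFCONTEXT | WINEVENT_SKIPOWNPROCESS);

            MSG msg;   // out-of-context hooks are delivered through this thread's message loop
            while (GetMessage(&msg, nullptr, 0, 0) > 0) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            UnhookWinEvent(hook);
            return 0;
        }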

    Read the article

  • Is there a way to effect user defined data types in MySQL?

    - by Dancrumb
    I have a database which stores (among other things), the following pieces of information: Hardware IDs BIGINTs Storage Capacities BIGINTs Hardware Names VARCHARs World Wide Port Names VARCHARs I'd like to be able to capture a more refined definition of these datatypes. For instance, the hardware IDs have no numerical significance, so I don't care how they are formatted when displayed. The Storage Capacities, however, are cardinal numbers and, at a user's request, I'd like to present them with thousands and decimal separators, e.g. 123,456.789. Thus, I'd like to refine BIGINT into, say ID_NUMBER and CARDINAL. The same with Hardware Names, which are simple text and WWPNs, which are hexstrings, e.g. 24:68:AC:E0. Thus, I'd like to refine VARCHAR into ENGLISH_WORD and HEXSTRING. The specific datatypes I made up are just for illustrative purposes. I'd like to keep all this information in one place and I'm wondering if anybody knows of a good way to hold this all in my MySQL table definitions. I could use the Comment field of the table definition, but that smells fishy to me. One approach would be to define the data structure elsewhere and use that definition to generate my CREATE TABLEs, but that would be a major rework of the code that I currently have, so I'm looking for alternatives. Any suggestions? The application language in use is Perl, if that helps.
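
    One keep-it-in-the-database alternative, sketched below with made-up table and column names, is an ordinary metadata table mapping each physical column to its logical type; the Perl layer can read it once at startup and pick the matching formatter:

        -- Sketch: logical (display) types for physical columns, owned by the schema itself.
        CREATE TABLE column_semantics (
            table_name   VARCHAR(64) NOT NULL,
            column_name  VARCHAR(64) NOT NULL,
            logical_type VARCHAR(32) NOT NULL,  -- e.g. ID_NUMBER, CARDINAL, ENGLISH_WORD, HEXSTRING
            PRIMARY KEY (table_name, column_name)
        );

        INSERT INTO column_semantics (table_name, column_name, logical_type) VALUES
            ('hardware', 'hardware_id', 'ID_NUMBER'),
            ('hardware', 'capacity',    'CARDINAL'),
            ('hardware', 'name',        'ENGLISH_WORD'),
            ('hardware', 'wwpn',        'HEXSTRING');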

    Read the article

  • best practice for boolean REST results

    - by Andrew Patterson
    I have a resource /system/resource and I wish to ask the system a boolean question about the resource that can't be answered by processing on the client (i.e. I can't just GET the resource and look through the actual resource data - it requires some processing on the backend using data not available to the client), e.g. /system/resource/related/otherresourcename. I want this to either return true or false. Does anyone have any best practice examples for this type of interaction? Possibilities that come to my mind: use an HTTP status code with no returned body (smells wrong); return a plain text string (True, False, 1, 0) - not sure what string values are appropriate to use, and furthermore this seems to ignore the Accept media type and always return plain text; or come up with a boolean object for each of my supported media types and return the appropriate type (a JSON document with a single boolean result, an XML document with a single boolean field), which seems unwieldy. I don't particularly want to get into a long discussion about the true meaning of a RESTful system etc - I have used the word REST in the title because it best expresses the general flavour of system I am designing (even if perhaps I am tending more towards RPC over the web rather than true REST). However, if someone has some thoughts on how a true RESTful system avoids this problem entirely I would be happy to hear them.
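
    For what it's worth, the third option can stay very small: a single-field representation per media type respects the Accept header without much ceremony. A sketch of what the exchange might look like (the URI and field name are illustrative only):

        GET /system/resource/related/otherresourcename HTTP/1.1
        Accept: application/json

        HTTP/1.1 200 OK
        Content-Type: application/json

        {"related": true}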

    Read the article

  • Lookup site column not saving/storing metadata for Office 2007 documents?

    - by Greg Hurlman
    I'm having this issue on several server environments. We have a list at the site collection root. There is a site column created as a multi-value lookup on that list's Title field. This site column is used in document libraries in subsites as a required field. When we upload anything but an Office 2007 document, the user is presented with the document metadata fill-in screen (EditForm.aspx?Mode=Upload), the user fills in the appropriate data (including picking a value(s) for this lookup), and clicks "check in" - the document is checked in as expected, with the lookup field's value filled in. With an Office 2007 document, this fails. The user selected values for the lookup field do not ever make it to the server - no errors are thrown, but the field is not saved with the document. We have an event listener on these document libraries, and if we inspect the incoming SPListItem on the event listener method before a single line of our code has run, we see that the value for the lookup field is null. It smells like a SharePoint bug to me - but before I go calling Microsoft, has anyone seen this & worked around it? Edit: the only entry I see in the SP trace logs relating to the problem: CMS/Publishing/8ztg/Medium/Got List Item Version, but item was null

    Read the article

  • protecting COM interfaces from exceptions

    - by rmeador
    I have several dozen objects exposed through COM interfaces, each of which has many methods, totaling a few hundred methods. These interfaces expose business objects from my app to a scripting engine. I have been given the task of protecting every single one of these methods from exceptions being thrown (to catch them and return an error using COM's Error() function, which incidentally I can find no documentation on because it's impossible to google). To my understanding, this requires that I add a try/catch around the guts of each one of these methods. The catch blocks are going to be similar or identical for each and every one of these hundreds of methods, which strongly smells of a problem (it massively violates the DRY principle), but I can't think of any way to avoid changing every method. As far as I can tell, these methods are invoked directly by COM, with no intervening code that I can hook into to catch the exceptions. My current best idea is to make a macro for the catch block, but that has its own sort of code smell. Can anyone come up with a better approach? BTW, my app's exceptions do not derive from std::exception, so if there is some way of COM automatically handling standard exceptions, it won't help. And I sadly cannot change the existing exceptions to derive from std::exception.
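
    One macro-free way to keep the catch logic in a single place is a small guard template that every COM method delegates its body to. This is only a sketch under assumptions: AppException and ReportComError below stand in for the app's real exception type and whatever Error()-based reporting the project actually uses.

        #include <windows.h>

        // Placeholders for the project's real exception type and error-reporting helper.
        class AppException { public: const char* What() const; };
        HRESULT ReportComError(const char* message);   // wraps the COM Error() reporting

        template <typename Fn>
        HRESULT ComGuard(Fn&& body) noexcept
        {
            try {
                return body();
            } catch (const AppException& e) {
                return ReportComError(e.What());
            } catch (...) {
                return ReportComError("unknown failure");
            }
        }

        // Each exposed method keeps its guts; the catch blocks live in one spot.
        struct MyObject {                 // stands in for one of the existing COM classes
            HRESULT DoWork(long value);
        };

        HRESULT MyObject::DoWork(long value)
        {
            return ComGuard([&]() -> HRESULT {
                // original method body goes here, using 'value' as before
                (void)value;
                return S_OK;
            });
        }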

    Read the article

  • {DCC Warning} W1036 Variable '$frame' might not have been initialized?

    - by Gad D Lord
    Any ideas why I get this warning in Delphi XE: [DCC Warning] Form1.pas(250): W1036 Variable '$frame' might not have been initialized procedure TForm1.Action1Execute(Sender: TObject); var Thread: TThread; begin ... Thread := TThread.CreateAnonymousThread( procedure{Anonymous}() procedure ShowLoading(const Show: Boolean); begin /// <------------- WARNING IS GIVEN FOR THIS LINE (line number 250) Thread.Synchronize(Thread, procedure{Anonymous}() begin ... Button1.Enabled := not Show; ... end ); end; var i: Integer; begin ShowLoading(true); try Thread.Synchronize(Thread, procedure{Anonymous}() begin ... // some UI updates end ); Thread.Synchronize(Thread, procedure{Anonymous}() begin ... // some UI updates end ); finally ShowLoading(false); end; end ).NameThread('Some Thread Name'); Thread.Start; end; I do not have anywhere in my code a variable named frame or $frame. I am not even sure how $frame with a $ sign can be a valid identifier. Smells like compiler magic to me. PS: Of course the real-life code has names other than Form1, Button1, Action1.

    Read the article

  • Avoiding a fork()/SIGCHLD race condition

    - by larry
    Please consider the following fork()/SIGCHLD pseudo-code. // main program excerpt for (;;) { if ( is_time_to_make_babies ) { pid = fork(); if (pid == -1) { /* fail */ } else if (pid == 0) { /* child stuff */ print "child started" exit } else { /* parent stuff */ print "parent forked new child ", pid children.add(pid); } } } // SIGCHLD handler sigchld_handler(signo) { while ( (pid = wait(status, WNOHANG)) > 0 ) { print "parent caught SIGCHLD from ", pid children.remove(pid); } } In the above example there's a race-condition. It's possible for "/* child stuff */" to finish before "/* parent stuff */" starts which can result in a child's pid being added to the list of children after it's exited, and never being removed. When the time comes for the app to close down, the parent will wait endlessly for the already-finished child to finish. One solution I can think of to counter this is to have two lists: started_children and finished_children. I'd add to started_children in the same place I'm adding to children now. But in the signal handler, instead of removing from children I'd add to finished_children. When the app closes down, the parent can simply wait until the difference between started_children and finished_children is zero. Another possible solution I can think of is using shared-memory, e.g. share the parent's list of children and let the children .add and .remove themselves? But I don't know too much about this. EDIT: Another possible solution, which was the first thing that came to mind, is to simply add a sleep(1) at the start of /* child stuff */ but that smells funny to me, which is why I left it out. I'm also not even sure it's a 100% fix. So, how would you correct this race-condition? And if there's a well-established recommended pattern for this, please let me know! Thanks.
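
    The classic fix, sketched below in C and reusing the pseudo-code's naming, is to block SIGCHLD around the fork and the list update with sigprocmask, so the handler cannot run until the pid has actually been added:

        #include <signal.h>
        #include <unistd.h>

        void children_add(pid_t pid);   /* stands in for children.add(pid) above */

        void make_baby(void)
        {
            sigset_t block, prev;
            sigemptyset(&block);
            sigaddset(&block, SIGCHLD);

            sigprocmask(SIG_BLOCK, &block, &prev);      /* SIGCHLD deferred from here on */
            pid_t pid = fork();
            if (pid == 0) {
                sigprocmask(SIG_SETMASK, &prev, NULL);  /* child restores its own mask */
                /* child stuff */
                _exit(0);
            } else if (pid > 0) {
                children_add(pid);                      /* safe: the handler is still blocked */
            }
            sigprocmask(SIG_SETMASK, &prev, NULL);      /* handler may run now */
        }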

    Read the article

  • DotNetOpenAuth / WebSecurity Basic Info Exchange

    - by Jammer
    I've gotten a good number of OAuth logins working on my site now. My implementation is based on the WebSecurity classes with amends to the code to suit my needs (I pulled the WebSecurity source into mine). However I'm now facing a new set of problems. In my application I have opted to make the user email address the login identifier of choice. It's naturally unique and suits this use case. However, the OAuth "standards" strikes again. Some providers will return your email address as "username" (Google) some will return the display name (Facebook). As it stands I see to options given my particular scenario: Option 1 Pull even more framework source code into my solution until I can chase down where the OpenIdRelyingParty class is actually interacted with (via the DotNetOpenAuth.AspNet facade) and make addition information requests from the OpenID Providers. Option 2 When a user first logs in using an OpenID provider I can display a kind of "complete registration" form that requests missing info based on the provider selected.* Option 2 is the most immediate and probably the quickest to implement but also includes some code smells through having to do something different based on the provider selected. Option 1 will take longer but will ultimately make things more future proof. I will need to perform richer interactions down the line so this also has an edge in that regard. The more I get into the code it does seem that the WebSecurity class itself is actually very limiting as it hides lots of useful DotNetOpenAuth functionality in the name of making integration easier. Andrew (the author of DNOA) has said that the Attribute Exchange stuff happens in the OpenIdRelyingParty class but I cannot see from the DotNetOpenAuth.AspNet source code where this class is used so I'm unsure of what source would need to be pulled into my code in order to enable the functionality I need. Has anyone completely something similar?

    Read the article

  • C++ Changing a class in a dll where a pointer to that class is returned to other dlls...

    - by Patrick
    Hello, Horrible title I know, horrible question too. I'm working with a bit of software where a dll returns a ptr to an internal class. Other dlls (calling dlls) then use this pointer to call methods of that class directly: //dll 1 internalclass m_class; internalclass* getInternalObject() { return &m_class; } //dll 2 internalclass* classptr = getInternalObject(); classptr->method(); This smells pretty bad to me but it's what I've got... I want to add a new method to internalclass as one of the calling dlls needs additional functionality. I'm certain that all dlls that access this class will need to be rebuilt after the new method is included but I can't work out the logic of why. My thinking is it's something to do with the already compiled calling dll having the physical address of each function within internalclass in the other dll but I don't really understand it; is anyone here able to provide a concise explanation of how the dlls (new internal class dll, rebuilt calling dll and calling dll built with previous version of the internal class dll) would fit together? Thanks, Patrick

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_tests - it's generally a bad idea not catching when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_tests or done_testing() is used? Details: My unit tests usually take the form of: use Test::More; my @test_set = ( [ "Test #1", $param1, $param2, ... ] ,[ "Test #2", $param1, $param2, ... ] # ,... ); foreach my $test (@test_set) { run_test($test); } sub run_test { # $expected_tests += count_tests($test); ok(test1($test)) || diag("Test1 failed"); # ... } The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows: use Test::More; BEGIN { our @test_set = (); # Same set of tests as above my $expected_tests = 0; foreach my $test (@test_set) { $expected_tests += count_tests($test); } plan tests => $expected_tests; } our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :( foreach my $test (@test_set) { run_test($test); } # Same sub run_test {} # Same I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declarations - in BEGIN{} and after it.
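
    One detail that may simplify this: plan() does not have to run inside BEGIN at all - it only has to be called before the first assertion. Computing the count at run time removes the duplicated our declaration entirely. A sketch along those lines (count_tests and run_test are trivial placeholders here, standing in for the real ones):

        use strict;
        use warnings;
        use Test::More;                      # no plan yet

        my @test_set = (
            [ "Test #1", 'param1', 'param2' ],
            [ "Test #2", 'param1', 'param2' ],
        );

        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;       # must happen before the first ok()/is()

        run_test($_) for @test_set;

        sub count_tests { return 1 }                  # placeholder: one assertion per case
        sub run_test    { ok($_[0][0], $_[0][0]) }    # placeholder for the real checks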

    Read the article

  • Effective communication in a component-based system

    - by Tesserex
    Yes, this is another question about my game engine, which is coming along very nicely, with much thanks to you guys. So, if you watched the video (or didn't), the objects in the game are composed of various components for things like position, sprites, movement, collision, sounds, health, etc. I have several message types defined for "tell" type communication between entities and components, but this only goes so far. There are plenty of times when I just need to ask for something, for example an entity's position. There are dozens of lines in my code that look like this: SomeComponent comp = (SomeComponent)entity.GetComponent(typeof(SomeComponent)); if (comp != null) comp.GetSomething(); I know this is very ugly, and I know that casting smells of improper OO design. But as complex as things are, there doesn't seem to be a better way. I could of course "hard-code" my component types and just have SomeComponent comp = entity.GetSomeComponent(); but that seems like a cop-out, and a bad one. I literally JUST REALIZED, while writing this, after having my code this way for months with no solution, that a generic will help me. SomeComponent comp = entity.GetComponent<SomeComponent>(); Amazing how that works. Anyway, this is still only a semantic improvement. My questions remain. Is this actually that bad? What's a better alternative?
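
    For reference, the generic accessor is cheap to back with a dictionary keyed on the component's type, which also avoids a linear search per lookup. A sketch with assumed names - this is not the engine's actual code:

        using System;
        using System.Collections.Generic;

        public abstract class Component { }

        public class Entity
        {
            private readonly Dictionary<Type, Component> _components =
                new Dictionary<Type, Component>();

            public void AddComponent(Component component)
            {
                _components[component.GetType()] = component;
            }

            // Returns the entity's component of type T, or null if it has none.
            public T GetComponent<T>() where T : Component
            {
                Component found;
                _components.TryGetValue(typeof(T), out found);
                return (T)found;
            }
        }

    Note that a plain dictionary lookup only matches the exact registered type; if components are ever fetched by a base type, a fallback scan or registering each component under several keys would be needed.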

    Read the article

  • Best loose way to get objects with common base class

    - by Michael Teper
    I struggled to come up with a good title for this question, so suggestions are welcome. Let's say we have an abstract base class ActionBase that looks something like this: public abstract class ActionBase { public abstract string Name { get; } public abstract string Description { get; } // rest of declaration follows } And we have a bunch of different actions defined, like a MoveFileAction, WriteToRegistryAction, etc. These actions get attached to Worker objects: public class Worker { private IList<ActionBase> _actions = new List<ActionBase>(); public IList<ActionBase> Actions { get { return _actions; } } // worker stuff ... } So far, pretty straight-forward. Now, I'd like to have a UI for setting up Workers, assigning Actions, setting properties, and so on. In this UI, I want to present a list of all available actions, along with their properties, and for that I'd want to first gather up all the names and descriptions of available actions (plus the type) into a collection of the following type of item: public class ActionDescriptor { public string Name { get; } public string Description { get; } public Type Type { get; } } Certainly, I can use reflection to do this, but is there a better way? Having Name and Description be instance properties of ActionBase (as opposed to statics on derived classes) smells a bit, but there isn't an abstract static in C#. Thank you!
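
    If the instance-property smell is the main concern, one alternative (a sketch with an assumed attribute name, and assuming ActionDescriptor gains a matching constructor) is to hang the metadata on each derived class as an attribute and build the list by scanning the assembly, so no action instances are needed just to populate the UI:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        [AttributeUsage(AttributeTargets.Class, Inherited = false)]
        public sealed class ActionInfoAttribute : Attribute
        {
            public ActionInfoAttribute(string name, string description)
            {
                Name = name;
                Description = description;
            }

            public string Name { get; private set; }
            public string Description { get; private set; }
        }

        // Each concrete action declares its metadata once, e.g.:
        // [ActionInfo("Move File", "Moves a file to a new location")]
        // public class MoveFileAction : ActionBase { ... }

        public static class ActionCatalog
        {
            public static IEnumerable<ActionDescriptor> GetAvailableActions()
            {
                return from t in typeof(ActionBase).Assembly.GetTypes()
                       where typeof(ActionBase).IsAssignableFrom(t) && !t.IsAbstract
                       let info = (ActionInfoAttribute)Attribute.GetCustomAttribute(
                                      t, typeof(ActionInfoAttribute))
                       where info != null
                       select new ActionDescriptor(info.Name, info.Description, t);
            }
        }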

    Read the article

  • How to make your more experienced and authoritative teammates stop creating 'fast temporary solutions'?

    - by Roman
    I'm currently working on a small short-lived project. But despite its size it's complicated enough, with very unclear logic. That's why it was started by more experienced developers. They work on it from time to time because it's not their main project. They made some code drafts with numerous places which 'would be rewritten in the near future'. After that they added several other 'temporary pieces'. And then again... So, now the project is a mess of 'half-working' pieces of code with some hardcoded values, like file names or constants which 'will be replaced later with working parts'. The API is awful (nobody actually thinks about it). And it's really, really hard to do development now (for me it's the main and only project). I caught myself thinking that I spend about an hour every day just re-understanding all those tricky 'temporary' things and API weaknesses. And after that hour my brain melts. I can't just say, "guys, your code smells like a trash dump". What's the correct way?

    Read the article

  • Drupal 7: How can I create a key/value field (or field group, if that's even possible)?

    - by Su'
    Let's say I'm creating some app documentation. In creating a content type for functions, I have a text field for the name, a box for a general description, and a couple of other basic things. Now I need something for storing arguments to the function. Ideally, I'd like to input these as key-value pairs, or just two related fields, which can then be repeated as many times as needed for the given function. But I can't find any way to accomplish this. The closest I've gotten is an abandoned field multigroup module that says to wait for CCK3, which hasn't even produced an alpha yet as far as I can tell and whose project page makes no obvious mention of this multi-group functionality. I also checked the CCK issue queue and don't think I saw it in there, either. Is there a current viable way of doing this I'm not seeing? Viable includes "you're thinking of this the wrong way and should do X instead." I've considered using a "Long text and summary" field, but that smells hackish and I don't know if I'd be setting myself up for side-effects. I'm new to Drupal.

    Read the article

  • Updating the $PATH for running a command through SSH with an LDAP user account

    - by Guillaume Bodi
    Hi all, I am setting up a Mac OSX 1.6 server to host Git repositories. As such we need to push commits to the server through SSH. The server has only an admin account and uses a user list from a LDAP server. Now, since it is accessing the server through a non interactive shell, git operations are not able to complete since git executables are not in the default path. As the users are network users, they do not have a local home folder. So I cannot use a ~/.bashrc and the like solution. I browsed over several articles here and there but could not get it working in a nice and clean setup. Here are the infos on the methods I gathered so far: I could update the default PATH environment to include the git executables folder. However, I could not manage to do it successfully. Updating /etc/paths didn't change anything and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored. From the ssh manpage, I read that a BASH_ENV variable can be set to get an optional script to be executed. However I cannot figure how to set it system wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution. If someone has some info on how it is supposed to be done, please, by all means! I can fix this problem by creating a .bashrc with PATH correction in the system root (since all network users would start here as they do not have home). But it just feels wrong. Additionally, if we do create a home folder for an user, then the git command would fail again. I can install a third party application to set up hooks on the login and then run a script creating a home directory with the necessary path corrections. This smells like a backyard tinkering and duct tape solution. I can install a small script on the server and ForceCommand the sshd to this script on login. This script will then look for a command to execute ($SSH_ORIGINAL_COMMAND) and trigger a login shell to run this command, or just trigger a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164 The last one is the best method I found so far. Any suggestions on how to deal with this properly?
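
    For completeness, the last option needs surprisingly little code. A rough sketch of the wrapper plus the sshd_config line follows; the paths are assumptions for a stock Git install on OS X 10.6 and would need adjusting to the real layout:

        #!/bin/bash
        # /usr/local/bin/ssh-wrapper.sh - fix PATH for non-interactive git-over-SSH sessions
        export PATH="/usr/local/git/bin:$PATH"

        if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
            exec /bin/bash -l -c "$SSH_ORIGINAL_COMMAND"   # run the command the client asked for
        else
            exec /bin/bash -l                              # plain interactive login shell
        fi

        # In sshd_config (typically /etc/sshd_config on OS X 10.6):
        #   ForceCommand /usr/local/bin/ssh-wrapper.sh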

    Read the article

  • SQL Saturday #44 Huntington Beach Recap

    What a great day. It was long and tiring, but rewarding in so many ways. On Sunday morning, I was driving home and I decided to take the Pacific Coast Highway from Huntington Beach.  It was a great chance to exhale and just enjoy the sun and smells of the beach (I really love SoCal sometimes). And for future reference for all you speakers, the beach and ocean are only 5 minutes from the SQL Saturday location.  I just could help noticing also the shocking number of high priced cars on the road (4 Bentleys, 3 Ferraris, 1 Aston Martins, 3 Maserati, 1 Rolls Royce, and 2 Lamborghinis).  It made me think about this: Price of all those cars: $ 150,000+.  Impacting the ability of people to learn: Priceless.  We have positively impacted the education, knowledge, capabilities of not only our attendees, but also all of their companies and people they might help as well.  That is just staggering and something to be immensely proud of. To all of my fellow community leaders, I salute you. So lets talk about the event Overall We had over 220 people register for the event and had 180+ people attend the event. I was shooting for the magical 200 number, but I guess it just gives us more motivation to make it even bigger and better next time. We had a few snags along the way, but what event doesnt, but I think everything turned out great. I did not hear any negative comments and heard lots of positive comments along with people asking when the next one is going to be (More on that later). Location- Golden West College We could not have asked for a better partner for the event. Herb Cohen from Golden West College was the wizard behind the curtains. From the beginning, he was our advocate to the GWC Board and was instrumental in getting our event approved. The day off, Herb was a HUGE help getting any and all logistics that we needed taken care of. In the craziness of the early morning registration crush it was a big help knowing that he and Bret Stateham (Blog | Twitter) were taking care of testing projectors in all the rooms. Anything we needed he was there and was even proactive in getting some things that I had not even thought of (i.e. a dumpster for all of our garbage). I cannot thank Herb enough along with other members of the GWC staff including Minnie Higgins of the Career and Technical Education Division office, Jack Taylor, public safety, and Ron Pryor, Tech Services Support. And last, but not least, the Wireless on campus was absolutely FANTASTIC! Some lessons learned Unless you are a glutton for punishment, as I no doubt am, you most certainly want to give yourself more than six weeks to plan the event. I am lucky that I have a very understanding wife and had a wonderful set of co-coordinators helping me out. A big thanks goes out to Phil, Marlon (Blog | Twitter), Nitin (Twitter), Thomas (Blog | Twitter), Bret (Blog | Twitter), Ben, and Laurie. Thankfully, the sponsor and speaker community was hugely supportive and we were able to fill out the entire event with speakers and sponsors. I have to say that there is not a lot that I would change after this years event. There are obviously going to be some things that we can do better or differently next time, but overall I think it was a great event and I was more than happy with the response we received from the community. Sponsors We obviously could not have put together our event without our sponsors. So certainly have to show them some love. 
    Platinum Sponsors: Quest Software (http://www.quest.com), My Space (http://www.myspace.com/)
    Gold: Strategy Companion (http://www.strategycompanion.com)
    Silver: Fusion-IO (http://www.fusionio.com)
    Bronze: WestClinTech (http://westclintech.com), Professional Association For SQL Server (http://www.sqlpass.org), Attunity (http://www.attunity.com), Sharepoint 360 (http://www.sharepoint360.com)
    Some additional Thanks Andy Warren (Blog | Twitter) Always there to answer my questions and help out when I had some issues or questions with the website. The amount of work that he and everyone else put into SQL Saturday is amazing. What a great gift to the community! Einstein Bros. Bagels They were our Breakfast Vendor and arrived perfectly on time with yummy bagels, sweets and most importantly coffee. Luccis Deli (http://www.luccisdeli.com) Luccis was our Lunch Vendor. They were great to work with and the food was excellent. They worked with us to give us a great price. Heard lots of great comments about the lunches. Definitely not your ordinary box lunch. Moving Forward Unfortunately, the work does not end after the event. We have a few things to clear up such as surveys, sponsor stuff, getting presentations uploaded to the website, expense reimbursement, stuff like that. Hopefully, all that should be cleared up within the next couple of weeks. After that, as a group we are going to get together and decide what our next steps are. We definitely want to keep some of the momentum that we are building as a SQL Community and channel that into future SQL Saturdays and other types of community events. In the meantime, for additional training be sure to check out your local User Group and PASS: San Diego SQL Server Users Group (http://www.sdsqlug.org/home/index.cfm), Orange County SQL Server Users Group (http://www.sqloc.com/), L.A. SQL Server Users Group (http://www.sql.la/), SQL PASS (http://www.sqlpass.org/), and 24 Hours of PASS (http://www.sqlpass.org/24hours/2010/). So stay tuned, there will be more events to come in SoCal!

    Read the article

  • The illusion of Competence

    - by tony_lombardo
    Working as a contractor opened my eyes to the developer food chain.  Even though I had similar experiences earlier in my career, the challenges seemed much more vivid this time through.  I thought I’d share a couple of experiences with you, and the lessons that can be taken from them. Lesson 1: Beware of the “funnel” guy.  The funnel guy is the one who wants you to funnel all thoughts, ideas and code changes through him.  He may say it’s because he wants to avoid conflicts in source control, but the real reason is likely that he wants to hide your contributions.  Here’s an example.  When I finally got access to the code on one of my projects, I was told by the developer that I had to funnel all of my changes through him.  There were 4 of us coding on the project, but only 2 of us working on the UI.  The other 2 were working on a separate application, but part of the overall project.  So I figured, I’ll check it into SVN, he reviews and accepts then merges in.  Not even close.  I didn’t even have checkin rights to SVN, I had to email my changes to the developer so he could check those changes in.  Lesson 2: If you point out flaws in code to someone supposedly ‘higher’ than you in the developer chain, they’re going to get defensive.  My first task on this project was to review the code, familiarize myself with it.  So of course, that’s what I did.  And in familiarizing myself with it, I saw so many bad practices and code smells that I immediately started coming up with solutions to fix it.  Of course, when I reviewed these changes with the developer (guy who originally wrote the code), he smiled and nodded and said, we can’t make those changes now, it’s too destabilizing.  I recommended we create a new branch and start working on refactoring, but branching was a new concept for this guy and he was worried we would somehow break SVN. How about some concrete examples? I started out by recommending we remove NUnit dependency and tests from the application project, and create a separate Unit testing project.  This was met with a little bit of resistance because - “How do I access the private methods?”  As it turned out there weren’t really any private methods that weren’t exposed by public methods, so I quickly calmed this fear. Win 1 Loss 0 Next, I recommended that all of the File IO access be wrapped in Using clauses, or at least properly wrapped in try catch finally.  This recommendation was accepted.. but never implemented. Win 2  Loss 1 Next recommendation was to refactor the command pattern implementation.  The command pattern was implemented, but it wasn’t really necessary for the application.  More over, the fact that we had 100 different command classes, each with it’s own specific command parameters class, made maintenance a huge hassle.  The same code repeated over and over and over.  This recommendation was declined, the code was too fragile and this change would destabilize it.  I couldn’t disagree, though it was the commands themselves in many cases that were fragile. Win 2 Loss 2 Next recommendation was to aid performance (and responsiveness) of the application by using asynchronous service calls.  This on was accepted. Win 2 Loss 3 If you’re paying any attention, you’re wondering why the async service calls was scored as a loss.. Let me explain.  The service call was made using the async pattern.  Followed by a thread.sleep  <facepalm>. Now it’s easy to be harsh on this kind of code, especially if you’re an experienced developer.  But I understood how most of this happened.  
One junior guy, working as hard as he can to build his first real world application, with little or no guidance from anyone else.  He had his pattern book and theory of programming to help him, but no real world experience.  He didn’t know how difficult it would be to trace the crashes to the coding issues above, but he will one day.  The part that amazed me was the management position that “this guy should be a team lead, because he’s worked so hard”.  I’m all for rewarding hard work, but when you reward someone by promoting them past the point of their competence, you’re setting yourself and them up for failure.  And that’s lesson 3.  Just because you’ve got a hard worker, doesn’t mean he should be leading a development project.  If you’re a junior guy busting your ass, keep at it.  I encourage you to try new things, but most importantly to learn from your mistakes.  And correct your mistakes.  And if someone else looks at your code and shows you a laundry list of things that should be done differently, don’t take it personally – they’re really trying to help you.  And if you’re a senior guy, working with a junior guy, it’s your duty to point out the flaws in the code.  Even if it does make you the bad guy.  And while I’ve used “guy” above, I mean both men and women.  And in some cases mutant dinosaurs. 

    Read the article

  • When is my View too smart?

    - by Kyle Burns
    In this posting, I will discuss the motivation behind keeping View code as thin as possible when using patterns such as MVC, MVVM, and MVP.  Once the motivation is identified, I will examine some ways to determine whether a View contains logic that belongs in another part of the application.  While the concepts that I will discuss are applicable to most any pattern which favors a thin View, any concrete examples that I present will center on ASP.NET MVC. Design patterns that include a Model, a View, and other components such as a Controller, ViewModel, or Presenter are not new to application development.  These patterns have, in fact, been around since the early days of building applications with graphical interfaces.  The reason that these patterns emerged is simple – the code running closest to the user tends to be littered with logic and library calls that center around implementation details of showing and manipulating user interface widgets and when this type of code is interspersed with application domain logic it becomes difficult to understand and much more difficult to adequately test.  By removing domain logic from the View, we ensure that the View has a single responsibility of drawing the screen which, in turn, makes our application easier to understand and maintain. I was recently asked to take a look at an ASP.NET MVC View because the developer reviewing it thought that it possibly had too much going on in the view.  I looked at the .CSHTML file and the first thing that occurred to me was that it began with 40 lines of code declaring member variables and performing the necessary calculations to populate these variables, which were later either output directly to the page or used to control some conditional rendering action (such as adding a class name to an HTML element or not rendering another element at all).  This exhibited both of what I consider the primary heuristics (or code smells) indicating that the View is too smart: Member variables – in general, variables in View code are an indication that the Model to which the View is being bound is not sufficient for the needs of the View and that the View has had to augment that Model.  Notable exceptions to this guideline include variables used to hold information specifically related to rendering (such as a dynamically determined CSS class name or the depth within a recursive structure for indentation purposes) and variables which are used to facilitate looping through collections while binding. Arithmetic – as with member variables, the presence of arithmetic operators within View code are an indication that the Model servicing the View is insufficient for its needs.  For example, if the Model represents a line item in a sales order, it might seem perfectly natural to “normalize” the Model by storing the quantity and unit price in the Model and multiply these within the View to show the line total.  While this does seem natural, it introduces a business rule to the View code and makes it impossible to test that the rounding of the result meets the requirement of the business without executing the View.  Within View code, arithmetic should only be used for activities such as incrementing loop counters and calculating element widths. In addition to the two characteristics of a “Smart View” that I’ve discussed already, this View also exhibited another heuristic that commonly indicates to me the need to refactor a View and make it a bit less smart.  
That characteristic is the existence of Boolean logic that either does not work directly with properties of the Model or works with too many properties of the Model.  Consider the following code and consider how logic that does not work directly with properties of the Model is just another form of the “member variable” heuristic covered earlier: @if(DateTime.Now.Hour < 12) {     <div>Good Morning!</div> } else {     <div>Greetings</div> } This code performs business logic to determine whether it is morning.  A possible refactoring would be to add an IsMorning property to the Model, but in this particular case there is enough similarity between the branches that the entire branching structure could be collapsed by adding a Greeting property to the Model and using it similarly to the following: <div>@Model.Greeting</div> Now let’s look at some complex logic around multiple Model properties: @if (ModelPageNumber + Model.NumbersToDisplay == Model.PageCount         || (Model.PageCount != Model.CurrentPage             && !Model.DisplayValues.Contains(Model.PageCount))) {     <div>There's more to see!</div> } In this scenario, not only is the View code difficult to read (you shouldn’t have to play “human compiler” to determine the purpose of the code), but it also complex enough to be at risk for logical errors that cannot be detected without executing the View.  Conditional logic that requires more than a single logical operator should be looked at more closely to determine whether the condition should be evaluated elsewhere and exposed as a single property of the Model.  Moving the logic above outside of the View and exposing a new Model property would simplify the View code to: @if(Model.HasMoreToSee) {     <div>There’s more to see!</div> } In this posting I have briefly discussed some of the more prominent heuristics that indicate a need to push code from the View into other pieces of the application.  You should now be able to recognize these symptoms when building or maintaining Views (or the Models that support them) in your applications.

    Read the article

  • Cases of companies taking IP rights of your own personal projects developed outside company time

    - by GSS
    Hi, I have heard of cases where a developer working for a company is also making his own personal projects in his own time, using his own equipment yet the company he works for tries to claim ownership for the project. I really find this annoying, and bang out of order. It should also be illegal. I am in this position (work for a company and working on my own systems - from small class libraries used to practise what I learn in my exam revision to a large commercial-scale system). While I don't know if the company will try to take ownership, all I know is they say they do not want a conflict of interest. Fair enough, my system is developed in my own time using my own equipment. They also say that work time should be for work only, which it is. Funny thing that as work is so boring, easy and slow that I have plenty of free time, which I wish I could spend on something productive - said system. The problem is, my company does not take hiring technical talent seriously. This is my first job, I am a junior coder (but my status/position doesn't really reflect what I can do), but I am the only developer. Likewise with the guy who controls Windows Server. As the contract does not say anything about taking ownership, I would assume they would. They would try to milk my success (I've made a good impression so I am sure they would). How can this be allowed? Are there any examples of this happening to any fellow Stacker here? It really makes my blood boil. What I find funny is that my company hardly has the expertise and resources to even be able to successfully run a project of my size. What I do at work is an ASP.NET application consisting of five pages, and even then there are flaws in the project. If I told them that they would also have to take responsibility for flaws in the project, then they would think twice! It's exactly because of this I save the best code for myself and at work I write rubbish code full of code smells. The company don't really care about error handling, as long as the business functionality works (ie a scheduled email sends, but there is no error handling). They'd think twice when they see the embarassment and business cost of a YSOD...

    Read the article
