Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Myths about Coding Craftsmanship part 2

    - by tom
    Myth 3: The source of all bad code is inept developers and stupid people

    When you review code, is this what you assume? Shame on you. You are probably making assumptions in your code if you are already assuming so much. Bad code can be the result of any number of causes, including but not limited to: using dated techniques (like boxing when generics are available), not following standards ("look how he does the spacing between arguments!" or "did he really just name that variable 'bln_Hello_Cats'?"), being redundant, using properties, methods, or objects in a novel way (like switching on button.Text between "Hello World" and "Hello World " //clever use of space character... sigh), not following the SOLID principles, and hacking around assumptions made in earlier iterations / hacking in features that should be worked into the overall design.

    The first two issues, while annoying, are pretty easy to spot and can be fixed easily. If your coding team is made up of experienced professionals who are passionate about staying current, these shouldn't be happening. If you work with a variety of skills, backgrounds, and experience levels, there will be some of this stuff going on. If you have an opportunity to mentor such a developer who is receptive to constructive criticism, don't be a jerk; help them and the codebase will improve. A little patience can improve the codebase, your work environment, and even your perspective.

    The novelty and redundancy I have encountered has often been the use of creativity when language knowledge was perceived as unavailable or too time-consuming to acquire. When developers learn on the job you get a lot of this. Rather than going to MSDN, developers will use what they know. Depending on the constraints of their assignment, hacking together what they know may seem quite practical. This is not stupidity, though I often wonder how much time is actually "saved" by hacking. These issues are often harder to untangle, if we ever do. They can also grow out of control as we write hack after hack to make it work and get back to some development that is satisfying.

    Hacking upon an existing hack is what I call "feeding the monster". Code monsters are anti-patterns and hacks gone wild. The reason code monsters continue to get bigger is that they keep growing in scope, touching more and more of the application. This is not the result of dumb developers. It is probably the result of avoiding design, not taking the time to understand the problems, or failing to anticipate or communicate the vision of the product. If our developers don't understand the purpose of a feature or product, how do we expect potential customers to?

    Forethought and organization are often what is missing from bad code. Developers who do not use the SOLID principles should be encouraged to learn them and be given guidance on how to apply them. The time "saved" by giving hackers room to hack will have to be made up for, and then some: not as technical debt but as shoddy work that, if not replaced, will be struggled with again and again. Bad code is not (usually) the result of dumb developers; it is the result of trying to do too much without the proper resources, and of neglecting the right thing that needs doing in favor of the first thoughtless thing that comes into our heads. Object-oriented code is all about relationships between objects. Coders who believe their coworkers are all fools tend to write objects that are difficult to work with, not eager to explain themselves, and prone to performing erratically and irrationally.

    If you constantly find you are surrounded by idiots, you may want to ask yourself if you are being unreasonable, if you are being closed-minded, or if you have chosen the right profession. Opening your mind up to the idea that you probably work with rational, well-intentioned people will probably make you a better coder, and it might even make you less grumpy. If you are surrounded by jerks who do not engage in the exchange of ideas, and who do not care about their customers or the durability of the code you are building together, then I suggest you find a new place to work.

    Myth 4: Customers don't care about "beautiful" code

    Craftsmanship is customer-focused because it means that the job was done right and the product will withstand the abuse, modifications, and scrutiny of our customers. Users can appreciate a predictable timeline for a release, a product delivered on time and on budget, a feature set that does not interfere with the task(s) it is supporting, quick turnarounds on exception messages, self-healing issues, and fewer issues overall. All of these are hindered by skimping on craftsmanship, whether we are writing data access code or reusable code.

    What do you think? Does bad code come primarily from low-IQ individuals? Do customers care about beautiful code?

    Read the article

  • Is it good practice to use functions just to centralize common code?

    - by EpsilonVector
    I run across this problem a lot. For example, I currently write a read function and a write function, and they both check if buf is a NULL pointer and that the mode variable is within certain boundaries. This is code duplication. This can be solved by moving it into its own function. But should I? This will be a pretty anemic function (doesn't do much), rather localized (so not general purpose), and doesn't stand well on its own (can't figure out what you need it for unless you see where it is used). Another option is to use a macro, but I want to talk about functions in this post. So, should you use a function for something like this? What are the pros and cons?
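    To make it concrete, here is a minimal sketch of the kind of helper I mean. It is in Python for brevity (my actual code is C-like), and every name in it (check_args, VALID_MODES, the mode bounds) is made up:

        # Shared precondition checks pulled out of read() and write().
        # VALID_MODES is an assumed boundary; substitute your real limits.
        VALID_MODES = range(0, 3)

        def check_args(buf, mode):
            """The 'anemic' helper: it exists only to remove duplication."""
            if buf is None:
                raise ValueError("buf must not be None")
            if mode not in VALID_MODES:
                raise ValueError(f"mode {mode} is out of range")

        def read(buf, mode):
            check_args(buf, mode)
            ...  # actual read logic

        def write(buf, mode):
            check_args(buf, mode)
            ...  # actual write logic

    One argument in the helper's favor: when the validation rules change, they change in exactly one place, and the helper's name documents intent even if the function looks trivial on its own.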

    Read the article

  • How can I keep directories in sync

    - by Guillaume Boudreau
    I have a directory, dirA, that users can work in: they can create, modify, rename and delete files & sub-directories in dirA. I want to keep another directory, dirB, in sync with dirA. What I'd like is a discussion on finding a working algorithm that would achieve the above, with the limitations listed below.

    Requirements:
    1. Something asynchronous - I don't want to stop file operations in dirA while I work in dirB.
    2. I can't assume that I can just blindly rsync dirA to dirB on a regular interval - dirA could contain millions of files & directories, and terabytes of data. Completely walking the dirA tree could take hours.

    Those two requirements make this really difficult. Having it asynchronous means that when I start working on a specific file from dirA, it might have moved a lot since it appeared. And the second limitation means that I really need to watch dirA, and work on the atomic file operations that I notice.

    Current (broken) implementation:
    1. Log all file & directory operations in dirA.
    2. Using a separate process, read that log, and 'repeat' all the logged operations in dirB.

    Why it's broken:

        echo 1 > dirA/file1
        # Allow the 'log reader' process to create dirB/file1:
        #   log = "write dirA/file1"; action = cp dirA/file1 dirB/file1; result = OK
        echo 1 > dirA/file2
        mv dirA/file1 dirA/file3
        mv dirA/file2 dirA/file1
        rm dirA/file3
        # End result: file1 contains '1'
        # 'log reader' process starts working on the 4 above file operations:
        #   log = "write file2"; action = cp dirA/file2 dirB/file2; result = failed: there is no dirA/file2
        #   log = "rename file1 file3"; action = mv dirB/file1 dirB/file3; result = OK
        #   log = "rename file2 file1"; action = mv dirB/file2 dirB/file1; result = failed: there is no dirB/file2
        #   log = "delete file3"; action = rm dirB/file3; result = OK
        # End result in dirB: no more files!

    Another broken example:

        echo 1 > dirA/dir1/file1
        mv dirA/dir1 dirA/dir2
        # 'log reader' process starts working on the 2 above file operations:
        #   log = "write file1"; action = cp dirA/dir1/file1 dirB/dir1/file1; result = failed: there is no dirA/dir1/file1
        #   log = "rename dir1 dir2"; action = mv dirB/dir1 dirB/dir2; result = failed: there is no dirB/dir1
        # End result in dirB: nothing!
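    For the record, one direction I'm considering (all names illustrative): make each logged operation self-contained by capturing what it needs at log time, so the replay process never consults dirA's current state. A sketch in Python:

        import os

        journal = []  # ordered oplog, appended synchronously as dirA changes

        def log_write(dir_a, rel_path):
            # Snapshot the bytes at the moment of the write, so replay
            # never depends on dirA's later (possibly renamed) state.
            with open(os.path.join(dir_a, rel_path), "rb") as f:
                journal.append(("write", rel_path, f.read()))

        def log_rename(old_rel, new_rel):
            journal.append(("rename", old_rel, new_rel))

        def log_delete(rel_path):
            journal.append(("delete", rel_path))

        def replay(dir_b):
            # Apply strictly in log order; every entry is self-contained.
            for op in journal:
                if op[0] == "write":
                    _, rel, data = op
                    dest = os.path.join(dir_b, rel)
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    with open(dest, "wb") as f:
                        f.write(data)
                elif op[0] == "rename":
                    _, old, new = op
                    os.rename(os.path.join(dir_b, old), os.path.join(dir_b, new))
                elif op[0] == "delete":
                    os.remove(os.path.join(dir_b, op[1]))
            journal.clear()

    Snapshotting whole files obviously won't fly at terabyte scale; in practice you would journal a content hash or a copy-on-write reference instead. But the ordering property is the point: replayed this way, both examples above end with dirB matching dirA.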

    Read the article

  • Problem connecting to Magento Connect

    - by amir
    I'm using Magento 1.4.0, and when I try to get to Magento Connect and download a plugin the page says: Error: Please check for sufficient write file permissions. Your Magento folder does not have sufficient write permissions, which this web based downloader requires. If you wish to proceed downloading Magento packages online, please set all Magento folders to have writable permission for the web server user (example: apache) and press the "Refresh" button to try again. Does anyone know how I can fix this problem? Thanks. Update: the plugin I'm trying to use is MagentoPycho light box, so I unpacked the folder into app/code/local but it still doesn't show in the admin area.

    Read the article

  • How do I mount my External HDD with filesystem type errors?

    - by Snuggie
    I am a relatively new Ubuntu user and I am having some difficulty mounting my external 2TB HDD. When I first installed Linux my external HDD was working just fine; however, it has stopped working and I have a lot of important files on there that I need. Before, my HDD would automatically mount, no worries. Now it doesn't automatically mount, and when I try to manually mount it I keep running into filesystem type errors that I can't seem to get past. Below is a step-by-step record of how I am trying to mount my HDD, along with the errors I am receiving. If anybody has any idea what I am doing wrong or how to correct the issue I would greatly appreciate it.

    Step 1) Ensure the computer recognizes my external HDD.

        pj@PJ:~$ dmesg
        ...
        [ 5790.367910] scsi 7:0:0:0: Direct-Access WD My Passport 0748 1022 PQ: 0 ANSI: 6
        [ 5790.368278] scsi 7:0:0:1: Enclosure WD SES Device 1022 PQ: 0 ANSI: 6
        [ 5790.370122] sd 7:0:0:0: Attached scsi generic sg2 type 0
        [ 5790.370310] ses 7:0:0:1: Attached Enclosure device
        [ 5790.370462] ses 7:0:0:1: Attached scsi generic sg3 type 13
        [ 5792.971601] sd 7:0:0:0: [sdb] 3906963456 512-byte logical blocks: (2.00 TB/1.81 TiB)
        [ 5792.972148] sd 7:0:0:0: [sdb] Write Protect is off
        [ 5792.972162] sd 7:0:0:0: [sdb] Mode Sense: 47 00 10 08
        [ 5792.972591] sd 7:0:0:0: [sdb] No Caching mode page found
        [ 5792.972605] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [ 5792.975235] sd 7:0:0:0: [sdb] No Caching mode page found
        [ 5792.975249] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [ 5792.987504] sdb: sdb1
        [ 5792.988900] sd 7:0:0:0: [sdb] No Caching mode page found
        [ 5792.988911] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [ 5792.988920] sd 7:0:0:0: [sdb] Attached SCSI disk

    Step 2) Check if it mounted properly (it did not).

        pj@PJ:~$ df -ah
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             682G  3.9G  644G   1% /
        proc                     0     0     0   -  /proc
        sysfs                    0     0     0   -  /sys
        none                     0     0     0   -  /sys/fs/fuse/connections
        none                     0     0     0   -  /sys/kernel/debug
        none                     0     0     0   -  /sys/kernel/security
        udev                  2.9G  4.0K  2.9G   1% /dev
        devpts                   0     0     0   -  /dev/pts
        tmpfs                 1.2G  928K  1.2G   1% /run
        none                  5.0M     0  5.0M   0% /run/lock
        none                  2.9G  156K  2.9G   1% /run/shm
        gvfs-fuse-daemon         0     0     0   -  /home/pj/.gvfs

    Step 3) Try mounting manually using NTFS and VFAT (both as sdb and sdb1).

        pj@PJ:~$ sudo mount /dev/sdb /media/Passport/
        NTFS signature is missing.
        Failed to mount '/dev/sdb': Invalid argument
        The device '/dev/sdb' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

        pj@PJ:~$ sudo mount /dev/sdb1 /media/Passport/
        NTFS signature is missing.
        Failed to mount '/dev/sdb1': Invalid argument
        The device '/dev/sdb1' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

        pj@PJ:~$ sudo mount -t ntfs /dev/sdb /media/Passport/
        NTFS signature is missing.
        Failed to mount '/dev/sdb': Invalid argument
        The device '/dev/sdb' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

        pj@PJ:~$ sudo mount -t vfat /dev/sdb /media/Passport/
        mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

        pj@PJ:~$ sudo mount -t ntfs /dev/sdb1 /media/Passport/
        NTFS signature is missing.
        Failed to mount '/dev/sdb1': Invalid argument
        The device '/dev/sdb1' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

        pj@PJ:~$ sudo mount -t vfat /dev/sdb1 /media/Passport/
        mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    Read the article

  • Is client side JavaScript capable of ~replicating the Node.JS module loading system?

    - by jt0dd
    I like the Node.JS style of JavaScript, where I can write all of my functionality into smaller files and then require those neatly from within my code. I'm even thinking about trying to write a framework to mimic that behavior in client-side JS. My goal would be to implement the module loading system as accurately as possible - see the Module docs. For require(), I can use things detailed in answers to this question, most notably jQuery's $.getScript(). It seems to me that other aspects of the module loading system should be possible as well. So I'm asking more experienced programmers here first, before I waste my time: is there something that I'm missing that's going to cause such an attempt to fail miserably, or can this be successfully done?

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs: one that contains diffuse, normals, instance ID and depth, in that order, and one that I use to store the SSAO result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render everything I mentioned to the first FBO, then bind the depth and normals textures for reading for the SSAO pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred SSAO value to the alpha component of the normals texture.

    Here is where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage outputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture, which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets reset after running the blur shader, which is weird, because if I use a float texture and write instanceID / nInstances to it, the texture doesn't get reset after the blur shader has run.

    Here is how I prepare my first FBO:

        bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){
            //Create FBO
            glGenFramebuffers(1, &m_fbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);

            //Create gbuffer and Depth Buffer Textures
            glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]);
            glGenTextures(1, &m_depthTexture);

            //prepare gbuffer
            for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){
                glBindTexture(GL_TEXTURE_2D, m_textures[i]);
                if(i == GBUFF_TEXTURE_TYPE_NORMAL)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL);
                else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
                else if(i == GBUFF_TEXTURE_TYPE_ID)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL);
                else{
                    std::cout << "Error in FBO initialization" << std::endl;
                    return false;
                }
                glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
            }

            //prepare depth buffer
            glBindTexture(GL_TEXTURE_2D, m_depthTexture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
            glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

            GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2};
            glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers);

            GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            if(Status != GL_FRAMEBUFFER_COMPLETE){
                std::cout << "FB error, status 0x" << std::hex << Status << std::endl;
                return false;
            }

            //Restore default framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return true;
        }

    where I use an enum defined as:

        enum GBUFF_TEXTURE_TYPE{
            GBUFF_TEXTURE_TYPE_DIFFUSE,
            GBUFF_TEXTURE_TYPE_NORMAL,
            GBUFF_TEXTURE_TYPE_ID,
            GBUFF_NUM_TEXTURES
        };

    Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow get reset? I.e., I'm using a resize function which resizes the textures of the FBO; should I perhaps call glFramebufferTexture2D again too?

    EDIT: Here is the shader in question:

        #version 330 core

        uniform sampler2D aoSampler;
        uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y
        uniform bool use_blur;

        noperspective in vec2 TexCoord;
        layout(location = 0) out vec4 out_AO;

        void main(void){
            if(use_blur){
                float result = 0.0;
                for(int i = -1; i < 2; i++){
                    for(int j = -1; j < 2; j++){
                        // -0.004 because the texture seems to be a bit displaced
                        vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                        result += texture(aoSampler, TexCoord + offset).r;
                    }
                }
                out_AO = vec4(vec3(0.0), result / 9);
            }
            else
                out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r);
        }

    Read the article

  • A command-line clipboard copy and paste utility?

    - by Peter.O
    In Windows I used command-line clipboard copy-and-paste utilities: pclip.exe and gclip.exe. These were UnixUtils ports for Windows (but they only handled plain text). There were a couple of other native Windows utils which could write/extract any format. I've looked for something similar in Synaptic Package Manager, but I can't find anything. Is there something there that I've missed? ... or maybe this is available in bash scripting? The type of utility I'd like would be able to read/write via std-in/std-out or file-in/file-out, and handle Unicode/Rich-text/Picture/etc. clipboard formats... Late Edit: NB: I'm not after a clipboard manager.
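    For reference, xclip (and xsel) in the Ubuntu repositories cover the plain-text case. A small hypothetical Python wrapper that mimics the pclip/gclip split, assuming xclip is installed:

        import subprocess, sys

        def copy(text):
            # gclip-style: push piped-in text into the X clipboard selection
            subprocess.run(["xclip", "-selection", "clipboard"],
                           input=text.encode(), check=True)

        def paste():
            # pclip-style: print the clipboard selection to stdout
            out = subprocess.run(["xclip", "-selection", "clipboard", "-o"],
                                 capture_output=True, check=True)
            return out.stdout.decode()

        if __name__ == "__main__":
            if sys.stdin.isatty():
                sys.stdout.write(paste())  # nothing piped in: act as paste
            else:
                copy(sys.stdin.read())     # piped input: act as copy

    Rich-text and image clipboard formats are a different story; xclip accepts a -t MIME-type argument for other selection targets, but application support varies.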

    Read the article

  • Assign keys to commands in Terminal?

    - by NES
    Is there a solution for assigning special key combinations to words in terminal use? For example, the less command is very useful and I use it a lot to pipe the output of another process through it. The idea would be to set up special key combinations, active only in terminal use, that are assigned to write different commands. So pressing Ctrl+L in the terminal window could write | less, or Ctrl+G could stand for | grep. Note: I just mean adding the letters to the command line, not executing the command. It's similar to what tab completion does, but more specific.

    Read the article

  • Different ways of solving problems in code.

    - by Erin
    I now program in C# for a living, but before that I programmed in Python for 5 years. I have found that I write C# very differently than most examples I see on the web. Rather than writing things like:

        foreach (string bar in foo)
        {
            // bar has something done to it here
        }

    I write code that looks like this:

        foo.ForEach( c => c.someActionhere() );

    Or:

        var result = foo.Select( c =>
        {
            // Some code here to transform the item.
        }).ToList();

    I think my using code like the above came from my love of map and reduce in Python - while not exactly the same thing, the concepts are close. Now it's time for my question. What concepts do you take and move with you from language to language that allow you to solve a problem in a way that is not the normal accepted solution in that language?
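    For context, the Python habits I'm carrying over look roughly like this (an illustrative snippet):

        foo = ["alpha", "beta", "gamma"]

        # Transform every item: the moral equivalent of .Select(...).ToList()
        result = [item.upper() for item in foo]

        # The same thing with map(), closer to the functional style mentioned
        result2 = list(map(str.upper, foo))

        # Per-item side effects (the .ForEach analogue) usually stay a loop
        for item in foo:
            print(item)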

    Read the article

  • ASP.NET vNext Blog Post Series

    - by Soe Tun
    Originally posted on: http://geekswithblogs.net/stun/archive/2014/06/04/asp.net-vnext-blog-post-series.aspx

    ASP.NET vNext Blog Post Series

    ASP.NET vNext was announced at TechEd 2014, and I have been playing around with it a bit. ASP.NET vNext is an exciting and revolutionary change for the Microsoft .NET development platform. ASP.NET vNext is now open-source, and available on GitHub at this location: https://github.com/aspnet/Home. I want to start a blog post series on ASP.NET vNext, and share my experience as I learn more about it.

    Keeping it simple

    Each blog post in the series will be short and simple so I can write them in a short amount of time, and keep each focused on one (at most two) topic(s) per post. My goal is to make the information easy to absorb, as there is a ton of great new stuff to cover. Many other people in the community have blogged about the key new features of ASP.NET vNext. I will link to those blog posts in my next post.

    MVC 6 POCO Controller

    Today, I want to start this blog post series with a teaser code snippet for those developers familiar with ASP.NET MVC. The Getting Started with ASP.NET MVC 6 article from the ASP.NET website shows how to write a lightweight POCO (plain-old CLR object) MVC Controller class in the upcoming ASP.NET MVC 6. However, it doesn't show us how to use the IActionResultHelper interface to render a View. This is how I wrote my POCO MVC Controller, based on the https://github.com/aspnet/Home/blob/master/samples/HelloMvc/Controllers/HomeController.cs sample from GitHub. Note that this may not be the best way to write it, but this is good enough for now.

        using Microsoft.AspNet.Mvc;
        using Microsoft.AspNet.Mvc.ModelBinding;
        using MvcSample.Web.Models;

        namespace MvcSample.Web
        {
            public class HomeController
            {
                IActionResultHelper html;
                IModelMetadataProvider mmp;

                public HomeController(IActionResultHelper h, IModelMetadataProvider mmp)
                {
                    this.html = h;
                    this.mmp = mmp;
                }

                public IActionResult Index()
                {
                    var viewData = new ViewDataDictionary<User>(mmp) { Model = User() };
                    return html.View("Index", viewData);
                }

                public User User()
                {
                    return new User { Name = "My name", Address = "My address" };
                }
            }
        }

    Please feel free to give me feedback, as this will greatly help me organize the blog posts in this series and plan ahead. Thanks for reading!

    Read the article

  • Is anyone doing "real" TDD with Visual-C++, and if yes, how do they do it?

    - by Martin
    Test Driven Development implies writing the test before the code and following a certain cycle:

    1. Write Test
    2. Check Test (run)
    3. Write Production Code
    4. Check Test (run)
    5. Clean up Production Code
    6. Check Test (run)

    As far as I'm concerned, this is only possible if your development solution allows you to very quickly switch between the production and test code, and to execute the test for a certain production code part extremely quickly. Now, while there exist lots of unit testing frameworks for C++ (I'm using Boost.Test atm.), it seems that there doesn't really exist any decent (for native C++) Visual Studio (plugin) solution that makes the TDD cycle bearable regardless of the framework used. "Bearable" means that it's a one-click action to run a test for a certain cpp file without having to manually set up a separate testing project, etc. "Bearable" also means that a simple test starts (linking!) and runs very quickly.

    So, what tools (plugins) and techniques are out there that make the TDD cycle possible for native C++ development with Visual Studio? Note: I'm fine with free or "commercial" tools. Please: No framework recommendations. (Unless the framework has a dedicated Visual Studio plugin and you want to recommend the plugin.)

    Edit Note: The answers so far have provided links on how to integrate a unit testing framework into Visual Studio. The resources more or less describe how to get the UT framework to compile and get your first tests running. This is not what this question is about. I'm of the opinion that to really work productively, having the unit tests in a manually maintained(!), separate vcproj from your production classes will add so much overhead that TDD "isn't possible". As far as I am aware, you do not add extra "projects" to a Java or C# solution to enable unit tests and TDD, and for a good reason. This should be possible with C++ given the right tools, but it seems (this question is about) that there are very few tools for TDD/C++/VS.

    Googling around, I've found one tool, VisualAssert, that seems to aim in the right direction. However, afaik, it doesn't seem to be in widespread use (compared to CppUnit, Boost.Test etc.).

    Edit: I would like to add a comment to the context for this question. I think it does a good summary of outlining (part of) the problem: (comment by Billy ONeal) Visual Studio does not use "build scripts" that are reasonably editable by the user. One project produces one binary. Moreover, Java has the property that Java never builds a complete binary -- the binary you build is just a ZIP of the class files. Therefore it's possible to compile separately then JAR together manually (using e.g. 7z). C++ and C# both actually link their binaries, so generally speaking you can't write a script like that. The closest you can get is to compile everything separately and then do two linkings (one for production, one for testing).

    Read the article

  • The Solution

    - by Patrick Liekhus
    So I recently attended a class about time management, as well as read the book "The Seven Habits of Highly Effective People" by Stephen Covey. Both have been instrumental in helping me get my priorities aligned as well as keeping me focused.

    The reason I bring this up is that it gave me a great idea for a small application with which to create a great technical stack solution that would be easy to demo and explain. Therefore, the project from this point forward will be the Liekhus.TimeTracker application, which will bring some of the time management skills that I have acquired into a technical implementation. The idea is rather simple, but leverages some of the basic principles of Covey along with some of the worksheets that I garnered from class. The basics are as such: 1) a plan is a must have and 2) write it down! A plan not written down is just an idea. How many times have you had an idea that didn't materialize? Exactly. Hence why I am writing it all down now!

    The worksheet consists of a few simple columns that I will outline below, as well as some modifications that I made according to the Covey habits. The worksheet looks like the following:

        Status    | Issue | Area | CQ      | Notes
        P / F / L |       |      | 1 2 3 4 |
        P / F / L |       |      | 1 2 3 4 |
        ...

    The idea is really simple and straightforward; you write down all your tasks and keep track of them along the way. The status stands for (P)ending, (F)inished or (L)ater. You write a quick title for the issue and select the CQ (Covey Quadrant) in which the issue occurs. The notes section is for things that happen while you are working through the issue. And last, but not least, is the Area column that I added as a way to identify the Role or Area of your life that this task falls within, based upon Covey's teachings.

    The second part of this application is a simple phone log that allows you to track your phone conversations throughout the day. All of this is currently done on a sheet of paper, but being involved in technology, I want it to have bells and whistles. Therefore, this is my simple idea for a project that will allow me to test my theories about coding and implementations. Stay tuned, as the next session will be fleshing out the concept and coming up with user stories to begin the SCRUM process. Thanks
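    As a first pass at the data side, the worksheet row above translates almost directly into a small record type. A hypothetical sketch in Python (the eventual application may well be built on another stack; the field names simply mirror the columns):

        from dataclasses import dataclass, field
        from enum import Enum
        from typing import List

        class Status(Enum):
            PENDING = "P"
            FINISHED = "F"
            LATER = "L"

        @dataclass
        class WorksheetRow:
            issue: str                        # quick title for the issue
            area: str                         # role/area of life, per Covey
            quadrant: int                     # Covey Quadrant, 1-4
            status: Status = Status.PENDING
            notes: List[str] = field(default_factory=list)

        # Example usage
        row = WorksheetRow(issue="Draft user stories", area="Work", quadrant=2)
        row.notes.append("Needed before the first SCRUM sprint")
        row.status = Status.FINISHED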

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I'm finding the new client-side development world a bit of a grind. I love learning new technologies, but I can't help feeling we've regressed and lost our old RAD advantage as we move the heavy lifting to the client.

    For my latest project, I'm using Telerik's KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to 'write' this code is by copying a working snippet from someone else and pasting it into my HTML page. For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX - and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn't seem to understand that well either. IntelliSense, my old syntax saviour, doesn't seem to have kept up with this cobweb of code either. Code completion? Not seeing it.

    As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience, where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag-and-drop objects that traditionally provided 70 percent of the mark-up and configuration? Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code?

    To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It'll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If .NET compilers can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don't see why there can't be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

  • What tool do you use to organise your tasks?

    - by Gearóid
    Hi, I've been a web developer for four years now and I've yet to come across a nice piece of software that allows me to manage my day-to-day tasks well. In theory, I should be able to just pull up a text file and write:

        write test scripts
        check code into svn
        remember to go home

    But obviously this isn't very usable. I've tried stuff like Ta-da List but it feels quite limited. JIRA is great for bug tracking, but what if I have to remember to go to the bank at 2pm? Is there any piece of software out there that helps organise programmers? I'm really interested in hearing what you guys have to say on this one. Thanks.

    Read the article

  • Spreadsheet or writing an application?

    - by Lenny222
    When would you keep simple to medium-complex personal calculations in a spreadsheet (Excel etc.) and when would you write a small program or script for them? For example, when you want to calculate what size of mortgage you can afford to buy a house. I could create a spreadsheet and have a nice tabular representation. On the other hand, if I wrote a small script in a nice language (in my case Haskell), I'd have the security of a nice type system, preventing typos etc. What are the pros and cons in your opinion?
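    For a concrete sense of the script option: the mortgage question reduces to the standard annuity formula P = M * (1 - (1 + r)^-n) / r, where M is the monthly payment, r the monthly rate, and n the number of payments. A small sketch in Python (the question mentions Haskell; the rates and figures here are made up):

        def affordable_principal(monthly_payment, annual_rate, years):
            """Largest loan a given monthly payment can service."""
            r = annual_rate / 12.0
            n = years * 12
            if r == 0:
                return monthly_payment * n  # zero-interest edge case
            return monthly_payment * (1 - (1 + r) ** -n) / r

        # e.g. 1200/month at 4% over 25 years (illustrative numbers)
        print(round(affordable_principal(1200, 0.04, 25)))  # ~227000

    A spreadsheet gives you the same answer with its built-in PV function; the script gives you named arguments, tests, and version control, which is really the trade-off in question.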

    Read the article

  • Is Cygwin or Windows Command Prompt preferable for getting a consistent terminal experience for development?

    - by Paul Hazen
    The question: Which is better, installing Cygwin or one of its cousins on all my Windows machines to have a consistent terminal experience across all my development machines, or becoming well trained in the skill of mentally switching from Linux terminal to Windows command prompt?

    Systems I use:
    OSX Lion on a Macbook Air
    Windows 8 on a desktop
    Windows 7 on the same desktop
    Fedora 16 on the same desktop

    What I'm trying to accomplish: configure an entirely consistent (or consistent enough) terminal experience across all my machines. "Enough" in this context is clearly subjective; please be clear in your answer about why the configuration you suggest is consistent enough.

    One more thing to keep in mind: while I do write a lot of code intended to run on Windows (actually code that runs on Windows Phone, which necessitates a Windows machine), I also write a lot of Java code, and prefer to do so in vim. I test a local repo in Java on my Windows machine, and push to another test machine running Ubuntu later in the development stage. When I push to the Ubuntu machine, I'm exclusively in the terminal, since I'm accessing it via SSH.

    Summary, with more accurate question: Is there a good way to accomplish what I'm trying to do, or is it better to get accustomed to remembering different commands based on the system I'm on? Which (if either) is considered "best practice" by the development community? Alternatively, for a consistent development experience, would it be better to write all my code SSHed into another machine, and move things to Windows for compile/build only when I needed to? That seems like too much work... but could be a solution.

    Update: While there are insightful responses below, I have yet to hear an answer that talks about why any given solution is superior. Cygwin/GnuWin32 is certainly a way to accomplish a similar experience on all platforms, but since I'm just learning all things command line, I don't want to set myself up to do a lot of relearning/unlearning in the future. Cygwin/GnuWin32 has its peculiarities, I would imagine, and being aware of how that setup works on Windows is a learning curve. Additionally, using Cygwin/GnuWin32 robs me of learning the benefits of PowerShell. As a newcomer to working in a command line, which path should I choose to minimize having to relearn/unlearn things in the future? Or, as my first paragraph poses: is it better to use Cygwin, or to become well trained in the skill of mentally switching from Linux terminal to Windows command prompt?

    Read the article

  • Java dev learning Python: what concepts do I need to wrap my head around?

    - by LRE
    I've run through a few tutorials and written some small projects. I'm right in the middle of a small project now, in fact. All is going well enough, thanks in no small part to Uncle Google (who usually points me to Stackoverflow ;-). Several times in the last few days I've found myself wondering "what am I missing?" - I feel that I'm still thinking in Java as I write in Python. This question over at Stackoverflow is full of tips about what resources to read up on for learning Python, but I still feel that I'm a Java dev with a dictionary (pun unintended) to translate into Python. What I really want to do is refactor my head to be able to write Pythonic Python instead of Java disguised as Python (not that I want to lose my Java skills). So, the crux of my question is: what concepts does a Java dev really need to learn to think Pythonic? This includes anything that needs to be un-learnt. ps: I consider language syntax to not be particularly relevant to this question.
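    To make the gap concrete, here is the kind of rewrite I mean (an illustrative snippet of my own, not from any tutorial):

        names = ["ada", "grace", "alan"]

        # Java disguised as Python: manual indexing and accumulator management
        shouted = []
        for i in range(len(names)):
            shouted.append(names[i].upper())

        # Pythonic: iterate values directly, or use a comprehension
        shouted = [name.upper() for name in names]

        # Other staples worth internalizing: enumerate, tuple unpacking, EAFP
        for index, name in enumerate(names):
            print(index, name)

        try:                         # EAFP: easier to ask forgiveness...
            first = names[0]
        except IndexError:           # ...than permission (vs. up-front checks)
            first = None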

    Read the article

  • What ethical problems realistically arise in programming?

    - by Fishtoaster
    When I co-oped during college, I had to fill out an evaluation of the co-op afterwards. One metric I always had to rate was how much the company required me to "Make ethical decisions related to your profession." This always seemed kind of silly: I mean, my first co-op was writing Java apps to manage industrial radios. There wasn't much moral ambiguity going on. Anyway, I'm wondering what sort of ethical dilemmas one might actually encounter in software development. Edit: It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter. - Nathaniel Borenstein

    Read the article

  • Naming your unit tests

    - by kerry
    When you create a test for your class, what kind of naming convention do you use for the tests? How thorough are your tests? I have lately switched from the conventional camel case test names to lower case letters with underscores. I have found this increases the readability and causes me to write better tests.

    A simple utility class:

        public class ArrayUtils {
            public static <T> T[] gimmeASlice(T[] anArray, Integer start, Integer end) {
                // implementation (feeling lazy today)
            }
        }

    I have seen some people who would write a test like this:

        public class ArrayUtilsTest {
            @Test
            public void testGimmeASliceMethod() {
                // do some tests
            }
        }

    A more thorough and readable test would be:

        public class ArrayUtilsTest {
            @Test
            public void gimmeASlice_returns_appropriate_slice() {
                // ...
            }

            @Test
            public void gimmeASlice_throws_NullPointerException_when_passed_null() {
                // ...
            }

            @Test
            public void gimmeASlice_returns_end_of_array_when_slice_is_partly_out_of_bounds() {
                // ...
            }

            @Test
            public void gimmeASlice_returns_empty_array_when_slice_is_completely_out_of_bounds() {
                // ...
            }
        }

    Looking at this test, you have no doubt what the method is supposed to do. And, when one fails, you will know exactly what the issue is.

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)

    - by hinkmond
    And now here's the Java code that you'll need to read your ghost sensor on your Raspberry Pi. The general idea is that you are using Java code to access the GPIO pin on your Raspberry Pi where the ghost sensor (JFET transistor) detects minute changes in the electromagnetic field near the Raspberry Pi, and will change the GPIO pin to high (+3 volts) when something is detected; otherwise there is no value (ground). Here's that Java code:

        try {
            /*** Init GPIO port(s) for input ***/

            // Open file handles to GPIO port unexport and export controls
            FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
            FileWriter exportFile = new FileWriter("/sys/class/gpio/export");

            for (String gpioChannel : GpioChannels) {
                System.out.println(gpioChannel);

                // Reset the port
                File exportFileCheck = new File("/sys/class/gpio/gpio" + gpioChannel);
                if (exportFileCheck.exists()) {
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();
                }

                // Set the port for use
                exportFile.write(gpioChannel);
                exportFile.flush();

                // Open file handle to input/output direction control of port
                FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");

                // Set port for input
                directionFile.write(GPIO_IN);
            }

            /*** Read data from each GPIO port ***/
            RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
            int sleepPeriod = 10;
            final int MAXBUF = 256;
            byte[] inBytes = new byte[MAXBUF];
            String inLine;
            int zeroCounter = 0;

            // Get current timestamp with Calendar()
            Calendar cal;
            DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS");
            String dateStr;

            // Open RandomAccessFile handle to each GPIO port
            for (int channum = 0; channum < GpioChannels.length; channum++) {
                // ...

    And then we just load up our Java SE Embedded app, place each Raspberry Pi with a ghost sensor attached in strategic locations around our Santa Clara office (which apparently is very haunted by ghosts from the Agnews Insane Asylum 1906 earthquake), and watch our analytics for any ghosts. Easy peazy.

    See the previous posts for the full series on the steps to this cool demo:
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1)
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2)
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)
    Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)
    Hinkmond

    Read the article

  • Are there any good guides for making mods for Minecraft?

    - by Pureferret
    I've been coding in Java for 5 months at work now, and having past experience with programming in other languages, modifying existing code at uni, etc., I feel like I want to get started on (read: continue learning to program by) modding Minecraft. I know what I need, but not exactly how to get it. I once saw some good guides on the Minecraft forum, but they all explained how to write in Java, how different classes in the code work, etc. I'm more interested in how you decompile the code, write your own code separate from the main 'trunk' of Minecraft, and then package it to install with a tool like 'Magic Loader'. My issue with these guides is that they always relied on being in Windows, but I'm primarily a Linux user, and the guides on the forums only seemed to assume you were on a Windows box. So is there a good 'walkthrough' for modding Minecraft? Especially one that assumes, or at least allows for, the fact that you are on Linux?

    Read the article
