Search Results

Search found 30014 results on 1201 pages for 'go yourself'.

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts to database-specific SQL queries. Once things start getting complicated, it is often preferable to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it is important to use the full power of your database, which cannot be achieved with the Django ORM alone.

    My question: until I use a database-specific feature, why should I be tied to one database? For instance: we have a query with multiple joins, and we decide to write a raw SQL query. That makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some fake SQL language which can be translated into any database's SQL dialect; even Django's ORM could be built on top of it. Then, if you step outside the ORM but avoid database-specific features, you can still remain database independent.

    I asked the same question of Jacob Kaplan-Moss (in person). He advised me to stay with the database that I like and use its full power, with which I agree. But my point was not that we should always be database independent; my point is that we should be database independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?
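
    For what it's worth, layers of exactly this kind exist outside Django: SQLAlchemy Core, for example, compiles one database-neutral expression tree into whichever SQL dialect the engine speaks. A minimal sketch (not Django code; the table, rows, and engine URL are all illustrative):

      from sqlalchemy import (Column, Integer, MetaData, String, Table,
                              create_engine, select)

      # One expression tree, compiled per backend; swapping the URL for
      # "postgresql://..." changes nothing below this line.
      engine = create_engine("sqlite:///:memory:")

      metadata = MetaData()
      users = Table("users", metadata,
                    Column("id", Integer, primary_key=True),
                    Column("name", String(50)))
      metadata.create_all(engine)

      with engine.begin() as conn:
          conn.execute(users.insert(),
                       [{"id": 7, "name": "Ada"}, {"id": 3, "name": "Bob"}])
          # A join-capable query with no raw SQL string anywhere:
          for row in conn.execute(select(users.c.name).where(users.c.id > 5)):
              print(row.name)                                   # -> Ada

    That is roughly the middle ground the question asks for: below the ORM, but above any single dialect.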

    Read the article

  • Add a Real-Time Earth Wallpaper App to Ubuntu with xplanetFX

    - by Asian Angel
    Are you tired of the same old wallpaper on your Ubuntu desktop? Now you can go from blah to literally spacious, real-time styled views of Earth with the xplanetFX Wallpaper App for Linux. You can conveniently access the “file type” downloads, screenshots, and jump-to links all on the front page. For our example we downloaded the .deb setup file on our system. The setup file will need to download three additional files to complete the setup process. After those are downloaded, all dependencies will have been met and you can complete the installation. Once that is done you can find xplanetFX in the Accessories section of your Ubuntu menu.

    This is what the main control window looks like when you start xplanetFX for the first time. You should take a few moments to look through the various tabs and tweak the settings for items like location, screen resolution, timing, auto-start, etc. When you are done, click on Execute, and within a few moments your desktop will have a fresh new look! Note: it took ~30 seconds for the display to activate on our system. Have fun with xplanetFX!

    xplanetFX Homepage [via OMG! Ubuntu!]

    Read the article

  • What are some arguments AGAINST using EntityFramework?

    - by Rachel
    The application I am currently building has been using stored procedures and hand-crafted class models to represent database objects. Some people have suggested using Entity Framework, and I am considering switching to it since I am not that far into the project. My problem is, I feel the people arguing for EF are only telling me the good side of things, not the bad side. :)

    My main concerns are:

    - We want client-side validation using DataAnnotations, and it sounds like I have to create the client-side models anyway, so I am not sure that EF would save that much coding time.
    - We would like to keep the classes as small as possible when going over the network, and I have read that using EF often includes extra data that is not needed.
    - We have a complex database layer which crosses multiple databases, and I am not sure EF can handle this. We have one common database with things like Users, StatusCodes, Types, etc., and multiple instances of our main databases for different instances of the application. SELECT queries can and will query across all instances of the databases, but users can only modify objects that are in the database they are currently working on, and they can switch databases without reloading the application.
    - Object models are very complex, and there are often quite a few joins involved.

    Arguments for EF are:

    - Concurrency: I wouldn't have to code in checks to see if the record was updated before each save.
    - Code generation: EF can generate partial class models and POCOs for me. However, I am not positive this would really save me that much time, since I think we would still need to create the client-side models for validation and some custom parsing methods.
    - Speed of development, since we wouldn't need to create the CRUD stored procedures for every database object.

    Our current architecture consists of a WCF service which handles database calls via parameterized stored procedures, POCO objects that go to/from the WCF service and the WPF client, and the WPF client itself, which transforms POCOs into class models for the purpose of validation and data binding.

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It's data that comes from a multitude of sources: not only structured data, but unstructured data as well. The sheer volume is mind-boggling. A few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records, and online transaction records. Embedded in big data is tremendous value, and being able to manipulate, load, transform, and analyze it is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation, in turn giving companies detailed information on product performance, customer preferences, and inventory. Furthermore, by being able to store and create more data in digital form, "big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011)

    Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator (ODI), is a critical component of Oracle's Big Data strategy. ODI provides automation, bulk loading, validation, and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario come from pre-built Knowledge Modules that drive processing in Hadoop, letting you use the graphical UI to load and unload data from Hadoop, perform data validations, and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can not only simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlate it with your enterprise data - a correlation that may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance.

    To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest whitepapers, webinars, downloads, and more - or visit our website at www.oracle.com/bigdata

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    I've been struggling with this on an architectural level. I have an object which can be commented on; let's call it a Post. Every Post has a unique ID. Now I want to comment on that Post, and I can use the ID as a foreign key: each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "top level" comments.

    When I comment on a comment, however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since IDs are assigned sequentially, I can no longer simply use ItemID to differentiate where in the tree the comment is assigned: both a Post and a PostComment might have an ID of '5', so my foreign key relationship is invalid. This seems like it could go on infinitely, with PostCommentCommentComments, etc.

    What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like, to know which collection to attach the ID to? That strikes me as the best solution I've seen so far, but now I feel like I need to make recursive database calls, which start to get expensive. Meaning: I get a Post, and get all PostComments where ItemID == Post.ID && IsPostComment == true. Then I take that as a collection, gather all the IDs of the PostComments, and do another search where ItemID == PostComment[all].ID && IsPostComment == false, then repeat infinitely. This means I make a call for every layer, and if I'm calling 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
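
    One common answer (a sketch, not from the post; table and column names are illustrative) is a single comments table whose rows carry the post they belong to plus a nullable parent_id pointing at another comment. All comments for a post then come back in one query, and the tree is assembled in memory instead of with one database call per layer:

      # Rows as fetched by ONE query, e.g.:
      #   SELECT id, parent_id, body FROM comments WHERE post_id = ?
      # parent_id is None for top-level comments, otherwise another comment's id.

      def build_tree(rows):
          nodes = {cid: {"id": cid, "body": body, "children": []}
                   for cid, parent_id, body in rows}
          roots = []
          for cid, parent_id, body in rows:
              if parent_id is None:
                  roots.append(nodes[cid])                         # top-level comment
              else:
                  nodes[parent_id]["children"].append(nodes[cid])  # a reply
          return roots

      rows = [(1, None, "first!"), (2, 1, "reply to 1"),
              (3, 2, "reply to a reply"), (4, None, "another thread")]
      print(build_tree(rows))

    Because comment IDs then only ever reference other comments, the Post/PostComment ID collision disappears, and the nesting depth no longer dictates the number of queries.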

    Read the article

  • How to recover data from a failing hard drive?

    - by intuited
    An external 3½" HDD seems to be in danger of failing: it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible.

    There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach.

    I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly (e.g. pausing every x MB/GB) would be better than just running the operation full tilt, for example to avoid any overheating issues?

    For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently (orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all). The filesystem of the partition is ext3.
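
    The strategy described (one fast pass, then retries over whatever was missed) is essentially what GNU ddrescue automates with its map file, and that is the usual tool for this job. As a rough illustration of the shape of that approach only, a toy Python sketch (device and image paths are hypothetical placeholders):

      import os

      BLOCK = 1 << 20                                # read in 1 MiB chunks
      SRC, DST = "/dev/sdX1", "/mnt/safe/disk.img"   # hypothetical paths

      def copy_pass(offsets=None):
          """First pass (offsets=None): copy every chunk, skipping read errors.
          Later passes: retry only the offsets that failed previously."""
          src = os.open(SRC, os.O_RDONLY)
          dst = os.open(DST, os.O_WRONLY | os.O_CREAT)
          size = os.lseek(src, 0, os.SEEK_END)
          todo = offsets if offsets is not None else range(0, size, BLOCK)
          failed = []
          for off in todo:
              os.lseek(src, off, os.SEEK_SET)
              try:
                  chunk = os.read(src, BLOCK)        # raises OSError on a bad sector
                  os.lseek(dst, off, os.SEEK_SET)
                  os.write(dst, chunk)
              except OSError:
                  failed.append(off)                 # note the gap; retry next pass
          os.close(src)
          os.close(dst)
          return failed

      missed = copy_pass()           # grab everything readable first
      missed = copy_pass(missed)     # then go back for the gaps

    ddrescue does this with far more care (smaller retry reads, a persistent map so runs can resume), so the sketch only shows why the two-pass order matters: easy data first, stress on the bad spots last.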

    Read the article

  • Black Screen after Resume from Sleep (Kubuntu)

    - by user20271
    I know there are a lot of other posts like this, but I have been looking for hours and I still haven't found any solution. I recently installed Kubuntu Linux alongside my Windows 7; sleep on Win7 works fine and resumes like normal. When I am loaded into Kubuntu and I put my laptop to sleep, it goes into sleep as normal. When I go to resume from sleep, the screen stays solid black: it doesn't light up, no blinking cursor or anything. The Wi-Fi light is off (orange) and I cannot turn it on. The Caps Lock and Num Lock lights on the keyboard blink slowly, and I hear something on the inside of the computer start to spin.

    I am not very experienced with Kubuntu/Linux, but I do know a bunch of computer terminology; I am still far from an expert, though. I have about 300 GB designated to my Win7 stuff, and another partition with about 100 GB for my Kubuntu Linux. My computer's specs are as follows:

    - Windows 7 64-bit
    - The most recent version of Kubuntu (downloaded a few days ago and updated yesterday)
    - AMD Athlon dual-core processor
    - 4 GB of RAM
    - HP G61 laptop

    Read the article

  • Can't boot Ubuntu 12.04 from external Hard Drive using Mac

    - by Catgirl the Crazy
    Recently, I upgraded the RAM and hard drive on my Early 2008 MacBook to improve performance. Rather than throw away the old hard drive, I bought an enclosure for it to turn it into an external hard drive, and, since all the data was migrated to my new drive, I decided to install Ubuntu on it for funsies (note: I am a near-total Ubuntu n00b).

    My first attempt to install Ubuntu didn't work (it gave me errors about not being able to find the BIOS or something), but my second attempt finished successfully (I can't remember what, if anything, I did differently). However, when I plug the external drive into my MacBook, it gives me a message saying it can't read the disk. Moreover, when I go into the Startup Manager (i.e. what you get when you turn on the MacBook while holding the Option key), the external drive is not one of the available startup disks. I thought this might be because I have an older MacBook, so I tried booting it with my mom's Late 2011 MacBook, and got the same results. Then I tried booting it through my dad's Dell laptop that runs Windows 7, and that time it worked. This is really counter-intuitive to me: since the hard drive originally came from a MacBook, you'd think it would be less compatible with the Windows laptop than with the MacBook.

    In case it helps, here's a link to a picture of how I set up the partition table while doing the install (not shown there is the fact that I checked the "Format?" box next to the /boot partition, since it gave me a warning when I tried to continue the installation without doing so). Anyone have any clue at all? If it helps, the hard drive I'm using is a 120GB 5400-rpm Serial ATA hard disk drive.

    Read the article

  • Duplicating someone's content legitimately & writing HTML to support that

    - by Codecraft
    I want to add content from other blogs to my own (with the authors' permission) to help build additional relevant content and support articles I've found useful that others have written. I'm looking into how to do this responsibly - i.e., by giving the original content author a boost and not competing against them for search traffic, which should go to their site.

    In order to keep my duplicate content out of search, and to hint to the search engines where the original content is to be found, I've implemented:

      <head>
        <meta name='robots' content='noindex, follow'>
        <link rel='canonical' href='http://www.originalblog.com/original-post.html' />
      </head>

    Additionally, to boost the original article and to let readers know where it came from, I'll be adding something like this:

      <div>
        Article originally written by
        <a href='http://www.authorswebsite.com'>Authors Name</a>
        and reproduced with permission.<br/>
        <a href='http://www.originalblog.com/original-post.html' target='new'>
          Read the original article here.
        </a>
      </div>

    All that remains is a way to 'officially' credit the original author in the HTML for the search spiders to see. Can anyone tell me a way to do this, possibly using rel="author" (as far as I can see that's only good for my own original content)? Or perhaps it doesn't matter, given that the reproduced pages will be kept out of search engines? Also, have I overlooked anything in the approach?

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it - previous to SSDT you would need to define a DEFAULT constraint, however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment - that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file.

    The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:

      ALTER TABLE [dbo].[T1]
          ADD [C1] INT NOT NULL,
              CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
      ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool!

    On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column - I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, Smart Defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column.

    @Jamiet

    Read the article

  • Distribute Sort Sample Service

    - by kaleidoscope
    How does it work? Using the front end of the service, a user can specify a size in MB for the input data set to sort. The algorithm then runs as four task types:

    · CreateAndSplit: generates the input data and stores it as 10 blobs in the utility storage. The URLs to these blobs are packaged as Separate work items and written to the queue.

    · Separate: reads the blobs with the random numbers created in the CreateAndSplit task and places the random numbers into buckets. The interval of the numbers that go into one bucket is chosen so that the expected amount of data (assuming a uniform distribution of the numbers in the original data set) is around 100 kB. Each bucket is represented as a blob container in utility storage. Whenever there are 10 blobs in one bucket (i.e., the placement in this bucket is complete because we had 10 original splits), the Separate task will generate a new Sort task and write it to the queue.

    · Sort: merges all blobs in a single bucket and sorts them using a standard sort algorithm. The result is stored as a blob in utility storage.

    · Concat: merges the results of all Sort tasks into a single blob. This blob can be downloaded as a text file using this Web page. As the result is presented in text format, the size of the file is likely to be larger than the specified input.

    Anish
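
    The pipeline is a distributed bucket sort: because each bucket covers a disjoint interval, the sorted buckets can simply be concatenated. A toy, single-process Python sketch of the same shape (sizes shrunk for illustration; the real service does each stage as a queued task over blobs):

      import random

      MAX_VAL, NUM_BUCKETS = 1_000_000, 100

      data = [random.randrange(MAX_VAL) for _ in range(10_000)]   # CreateAndSplit...
      splits = [data[i::10] for i in range(10)]                   # ...as 10 "blobs"

      buckets = [[] for _ in range(NUM_BUCKETS)]
      for split in splits:                                        # Separate
          for n in split:
              buckets[n * NUM_BUCKETS // MAX_VAL].append(n)       # interval -> bucket

      sorted_buckets = [sorted(b) for b in buckets]               # Sort (one per bucket)
      result = [n for b in sorted_buckets for n in b]             # Concat
      assert result == sorted(data)                               # globally sorted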

    Read the article

  • Why can't I run Compiz?

    - by jasoncruz98
    I installed Ubuntu 11.10 on my laptop. The main reason I switched to Ubuntu is that I wanted to use Compiz. The first thing I did was go to Additional Drivers and install the ATI/AMD proprietary FGLRX graphics driver. There was also another one available, the ATI/AMD proprietary FGLRX graphics driver (post-release updates), but I didn't install that one, because it basically meant the same thing to me as the one I had already installed.

    Next, I went to the ubuntuguide.org Oneiric wiki (http://ubuntuguide.org/wiki/Ubuntu:Oneiric#Compiz_Fusion), followed the instructions there, and ran this command in a terminal:

      sudo apt-get install compiz compizconfig-settings-manager compiz-fusion-plugins-main compiz-fusion-plugins-extra emerald librsvg2-common

    But the terminal said that the package "emerald" could not be found, so I ran this command instead:

      sudo apt-get install compiz compizconfig-settings-manager compiz-fusion-plugins-main compiz-fusion-plugins-extra

    After that, I installed Fusion Icon by running:

      sudo apt-get install fusion-icon

    I restarted my computer, searched for CompizConfig Settings Manager, clicked on it, and activated Wobbly Windows. I logged off and logged back on again, but there was no wobbly windows effect. So I tried clicking on Fusion Icon, but it never started. Can someone please tell me what I did wrong here? Everyone seems to be able to run Compiz except me. I really need to get Compiz started, or else I think I'm going to uninstall Ubuntu.

    Read the article

  • How can dev teams prevent slow performance in consumer apps?

    - by Crashworks
    When I previously asked what's responsible for slow software, a few of the answers I received suggested it was a social and management problem:

    "This isn't a technical problem, it's a marketing and management problem. ... Ultimately, the product managers are responsible to write the specs for what the user is supposed to get. Lots of things can go wrong: The product manager fails to put button response in the spec ... The QA folks do a mediocre job of testing against the spec ... if the product management and QA staff are all asleep at the wheel, we programmers can't make up for that." - Bob Murphy

    "People work on good-size apps. As they work, performance problems creep in, just like bugs. The difference is - bugs are 'bad' - they cry out 'find me, and fix me'. Performance problems just sit there and get worse. Programmers often think 'Well, my code wouldn't have a performance problem. Rather, management needs to buy me a newer/bigger/faster machine.' The fact is, if developers periodically just hunt for performance problems (which is actually very easy) they could simply clean them out." - Mike Dunlavey

    So, if this is a social problem, what social mechanisms can an organization put into place to avoid shipping slow software to its customers?

    Read the article

  • Debugging/Logging Techniques for End Users

    - by James Burgess
    I searched a bit but didn't find anything particularly pertinent to my problem - so please excuse me if I missed something!

    A few months back I inherited the source to a fairly popular indie game project and have been working, along with another developer, on the code base. We recently made our first release since taking over development, but we're a little stuck: a few users are experiencing slowdowns/lagging in the current version, as compared to the previous version, and we are not able to reproduce these issues in any of our various development environments (debug, release, different OSes, different machines, etc.).

    What I'd like to know is how we can go about implementing some form of logging/debugging mechanism in the game that users can enable, sending the reports to us for examination. We're not able to distribute debug binaries using the MSVS 2010 runtimes, due to the licensing - and wouldn't want to, for a variety of reasons. We'd really like to get to the bottom of this issue, even if just to find out it's nothing to do with our code base and everything to do with their system configuration. At the moment we just have no leads, and the community isn't a very technically savvy one, so we're unable to rely on 'expert' bug reports or investigations.

    I've seen debug logging mechanisms used in other applications and games for everything from logging simple errors to crash dumps. We're really at a loss at this stage as to how to address these issues, having been over every commit to the repository from the previous to the current version and not finding any real issues.
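
    One common shape for this kind of opt-in diagnostics (a sketch of the idea only; the game itself would do this in C++, and every name here is illustrative) is a logger that is silent by default, switched on by an environment variable or a line in a settings file, and writing timing breadcrumbs to a file the user can attach to a report:

      import logging
      import os

      # Opt-in debug log: users set GAME_DEBUG=1 (or flip a line in a settings
      # file) and send us game_debug.log with their report. Off by default, so
      # normal players pay no I/O cost.
      def init_logging():
          logger = logging.getLogger("game")
          if os.environ.get("GAME_DEBUG") == "1":
              handler = logging.FileHandler("game_debug.log")
              handler.setFormatter(logging.Formatter(
                  "%(asctime)s %(levelname)s %(name)s: %(message)s"))
              logger.addHandler(handler)
              logger.setLevel(logging.DEBUG)
          else:
              logger.addHandler(logging.NullHandler())   # swallow everything
          return logger

      log = init_logging()
      log.debug("frame time %.1f ms", 16.7)   # cheap breadcrumbs for slowdown hunts

    Logging per-frame times, asset-load durations, and basic system info at startup tends to be exactly the data needed to separate "our code got slower" from "their machine is the problem".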

    Read the article

  • CRM On Demand Performance Tips - Live Web Session on April 20, 2010

    - by Cheryl
    The CRM On Demand Customer Care specialists have another live Web session coming up - this one is about performance: issues, tips, and considerations. This is part of their Web series, where they pick topics that they hear a lot of questions or concerns about from customers and run live (and free) one-hour Web sessions about them. Here are the details for this event:

    Event title: CRM On Demand Performance. Brandon (Hank) Henrie will present some of the top CRM On Demand performance questions and issues that customers raise, and some tips and tricks that you can use to avoid them. He will point out good resources that can help, and tips for logging performance-related service requests when all else fails.

    Date: April 20, 2010
    Time: 10:00 am (UTC-07:00 Arizona)

    How to join:
    1. Dial 1-866-682-4770 to access the conference line.
    2. Enter the conference code, 6241996, and press #.
    3. Follow the instructions to record your name and press #.
    4. Enter the meeting passcode, 1212, and press #.
    5. Follow the instructions below to join the web portion of the conference.

    The Web Conference: go to the Oracle Web Conference site, https://strtc.oracle.com

    Prior to the event: click the New User button, then run the New User Test. (If you have difficulties installing the web conference software, try downloading it from the test status window and installing it manually.)

    To join the event:
    1. Enter the conference information in the Join Conference box: Conference ID 6566623 and your name.
    2. Click the Join Conference button.

    Watch for announcements of future sessions on different topics. And let us know what you think!

    Read the article

  • Knowledge Transfer without a Plan

    - by Kanini
    Hello... We are doing work for a particular client, managing their CRM implementation. (The CRM itself is a product which has been heavily customized to suit the client's needs.) Now they want us to manage the Oracle batch jobs/ETL as well, and for this they are ready to provide us with knowledge transfer. (The Oracle batch jobs/ETL are currently managed in-house by the client.)

    After much persuasion, I got one of the project leads (designation-wise) to email the client asking for a KT plan. (The project lead kept saying that they had never had KT plans before; I offered to draft a template, and even that was rejected!)

    Email from us to them: "Can you please share with us the KT plan?"

    Response from them: "Not sure what is expected from my side? The KT is planned for tomorrow from 11 am onwards, where functional knowledge of the existing ETL data migration package will be shared."

    How do you handle such a client? Most likely what is going to happen is this: the person giving the KT will say "I have given complete knowledge transfer", and we will go back and say "No, this was not covered. For this they provided an overview alone and left it at that!", and so on. My project lead also did not respond to that email; he just said that the meeting is scheduled to happen at 11 AM (basically repeating whatever the email said) and left for the day! What could I possibly do?

    PS: "Look for another job" is a very helpful answer, but I am not looking for it. :-)

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel and wants to outsource the creation of scripts to do what he now does by hand.

    However, my friend is extremely non-technical and is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose and repeats examples several times, and I'm not sure he properly describes corner cases.

    The first project he outsourced was a failure. I think he over-described some details but under-described corner cases; that, and/or the coder he hired didn't think through the corner cases and ask appropriate questions - I'm not sure. I got on IM with him, and it took me half an hour to dig out a description that should have taken five minutes or less to convey. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed.

    He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it; remove these rows under these conditions. So the challenge is different from spec'ing a web app.

    Read the article

  • Does a team of developers need a manager?

    - by Amadiere
    Background: I'm currently part of a team of four: one manager, one senior developer, and two developers. We do a range of bespoke in-house systems/projects (e.g. 6-8 weeks each) for an organisation of around 3,500 staff, as well as all the maintenance and support required for the systems that were created before. There are not enough of us to do all the work that is potentially coming our way - we're understaffed. Management acknowledge this, but budget constraints limit our ability to recruit additional members to the team (even if we would make the salary back in savings).

    The change: this leaves us where we are now. Our manager is due to leave his role for pastures new, leaving a vacancy in the team. Management are using this opportunity to restructure our team, which would see the team manager role replaced by another developer and another senior developer. Their logic is that we need more developers, so here's a way of funding it (one of the roles is partially funded from another vacant post). The team would have no direct line manager, and the roles and responsibilities would be divided up between the seniors and the (relatively new to post) service manager - a non-technical role with little to no development knowledge or experience, whose focus is shared amongst a number of other teams and individuals - who would be our next actual manager up the food chain.

    I guess the final question is: is it possible to run a development team without a manager? Have you had experience of this? What things could go wrong, or could be of benefit to us? I'd ideally like to "see the light" and the benefits of doing things this way, or come up with some points to argue against it.

    Read the article

  • unit level testing, agile, and refactoring

    - by dsollen
    I'm working on a very agile development system: a small number of people, with me doing the vast majority of the programming myself. I've gotten to the testing phase and find myself writing mostly functional-level tests, which I should in theory be leaving to our tester (in practice I don't entirely... trust our tester to detect and identify defects well enough to leave him the sole writer of functional tests).

    In theory what I should be writing is unit-level tests. However, I'm not sure it's worth the expense. Unit testing takes more time than functional testing, since I have to set up mocks and plug into smaller units that weren't designed to run in isolation. More importantly, I refactor and redesign heavily. Part of this is due to my inheriting code that needed heavy redesign and is still being cleaned up, but even once I've finished removing the parts that need work, I'm sure that in the act of expanding the code I'll still do a decent amount of refactoring and redesign. It feels as if I will break my unit tests, forcing wasted time refactoring them as well, since unit tests, by definition, are coupled so closely to the code structure.

    So: is it worth all the wasted time when functional tests, which will never break when I refactor/redesign, should find most defects? Do unit tests really provide that much extra defect detection over thorough functional tests? And how does one create good unit tests that work with very quick and agile code that is modified rapidly?

    PS: I would be fine/happy with links to anything one considers an excellent resource on how to 'do' unit testing in a highly changing environment.

    Edit: to clarify, I am doing a bit of very unofficial TDD; I just seem to be writing tests at what would be considered a functional level rather than unit level. I think part of this is because I own nearly all of the project, so I don't feel I need to limit the scope as much; and part of it is that it's daunting to think of trying to go back and retroactively add the unit tests needed to cover enough code that I could feel comfortable testing only a unit, without the full functionality, and trust that the unit still works with the rest of the units.
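
    For what it's worth, one way to keep unit tests from shattering on every redesign (an illustration, not from the post; the function and test are invented for the example) is to pin them to a unit's stable public behavior rather than its internal structure, so the internals below can be rewritten freely without touching the test:

      import unittest

      # Unit under test: the implementation details (loop vs. comprehension,
      # helper functions, data structures) can be refactored at will; only the
      # observable contract is tested below.
      def normalize_tags(raw):
          """Lowercase, strip, and de-duplicate a list of tag strings."""
          seen, out = set(), []
          for tag in raw:
              t = tag.strip().lower()
              if t and t not in seen:
                  seen.add(t)
                  out.append(t)
          return out

      class NormalizeTagsTest(unittest.TestCase):
          def test_strips_lowercases_and_dedupes(self):
              self.assertEqual(normalize_tags([" Python", "python ", "", "AGILE"]),
                               ["python", "agile"])

      if __name__ == "__main__":
          unittest.main()

    Tests written this way sit between classic structure-coupled unit tests and full functional tests: small and fast, but anchored to contracts that survive refactoring.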

    Read the article

  • Are You Using Facebook with an Encrypted Session Yet?

    - by The Geek
    If you’re geeky and keep up with all the tech news, you probably already know that Facebook added an SSL feature, but for everybody else: you can make your Facebook profile more secure by turning this option on, and here’s how to do it. All we’re going to do is head into the Facebook profile settings and then check a box that forces the use of SSL encryption whenever possible. Easy.

    Read the article

  • Are programmers a bunch of heartless robots who are lacking of empathy? [closed]

    - by Graviton
    OK, the provocative title got your attention. My experience as a programmer, and in dealing with my fellow programmers, is that a programmer is usually someone so consumed by his programming work, so absorbed in his algorithmic constructions, that he has little passion or time left for anything else - which includes empathy for other people, and love and care for the people whom he loves or should love (such as his spouse, parents, kids, colleagues, etc.).

    The better a person is in terms of his programming powers, the more deficient he tends to be in terms of love and care, because both honing programming skills and loving those around you take time, and one has only so much time to allocate among so many different things. Also, programming (especially interesting programming work, like writing an AI to predict future search trends) is a highly consuming job; it doesn't just consume you from 9 to 5, it consumes practically every second of your waking hours, because a good programmer can't just magically switch off his thinking hat after the office lights go off. (If you can, then I don't really think you are a passionate programmer, and the prerequisite of a good programmer is passion.)

    So a good programmer is necessarily someone who can't love as much as others do, because the very nature of the programming job prevents him from loving others as much as he wants to. Do you concur with my observation and reasoning?

    Read the article

  • Fun with upgrading and BCP

    - by DavidWimbush
    I just had trouble using BCP out via xp_cmdshell. Probably serves me right, but that's a different issue. I got a strange error message, 'Unable to resolve column level collations', which turned out to be a bit misleading. I wasted some time comparing the collations of the server, the database, and all the columns in the query. I got so desperate that I even read the Books Online article. Still no joy, but then I tried the interweb.

    It turns out that calling bcp without qualifying it with a path causes Windows to search the folders listed in the Path environment variable - in that order - and execute the first version of BCP it can find. But when you do an in-place version upgrade, the new paths are added to the end of the Path variable, so you don't get the latest version of BCP by default.

    To check which version you're getting, execute bcp -v at the command line. The version number will correspond to SQL Server version numbering (e.g. 10.50.n = 2008 R2). To examine and/or edit the Path variable, right-click on My Computer, select Properties, go to the Advanced tab, and click on the Environment Variables button. If you change the variable you'll have to restart the SQL Server service before it takes effect.
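
    A quick, generic way to see which executable wins the Path race (a sketch, not from the post; it works the same way on Windows, where this resolution rule is what bit the story above):

      import os
      import shutil

      # PATH is searched left to right and the first match wins - which is why
      # an upgrade that appends its folders to the END still runs the old bcp.
      print(shutil.which("bcp"))           # full path of the bcp that would run
                                           # (None if bcp isn't on PATH at all)
      for folder in os.environ["PATH"].split(os.pathsep):
          print(folder)                    # inspect the search order yourself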

    Read the article

  • Websockets through Stunnel is giving random bytes.

    - by user16682
    I have several servers set up for a web application that I am developing. One is a load-balancing server, and I'm running a PHP WebSockets server on this balancer. The website I am developing uses SSL, so my WebSockets run through a wss URI connecting directly to the balancer, where I have set up stunnel4 to decrypt all traffic at a certain port and re-route it to my PHP WebSockets server. This works fine when it comes to connecting to the server; that's not the problem.

    The problem occurs when I try to send data to the server. Occasionally when I try this, stunnel does not appear to be decrypting the data properly: I get garbage characters in my terminal running the server, what appear to be completely random bytes. This can go on for several consecutive messages before abruptly working again. It is very erratic and unpredictable. Sometimes I refresh the page and all the messages work perfectly. Sometimes the WebSocket connects and I have to wait 5-10 seconds before any messages I send are interpreted properly by the server. Other times I can't send any messages at all, because they all end up as garbage.

    I believe this is a stunnel problem, but I am not certain. Does anybody have any experience with this? I would like a more predictable server if I can get it. Some more information: this occurs extensively in Chrome, not quite as much in Firefox, and never in Safari. The PHP server I am using is phpws (http://code.google.com/p/phpws/). On the WebSocket connection, this server would sometimes detect that the header was only 1 byte in size (the first byte of the WebSockets header), and I had to modify the server to flush the buffer every time this occurred so that it would reliably connect.
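
    That 1-byte "header" symptom is the classic sign of a server that treats each socket read as one complete frame: TCP (and stunnel sitting in front of it) may deliver a frame in arbitrary pieces, so leftover or partial bytes get misparsed as random garbage. A minimal Python sketch of the buffering a frame parser needs (illustrative only; phpws is PHP, so the real fix would live there):

      def read_frame_header(sock, buf=b""):
          """Accumulate bytes until the two mandatory WebSocket header bytes
          (RFC 6455 style framing) have arrived, however TCP chunks them."""
          while len(buf) < 2:
              chunk = sock.recv(4096)
              if not chunk:
                  raise ConnectionError("peer closed mid-frame")
              buf += chunk
          fin = bool(buf[0] & 0x80)          # final-fragment flag
          opcode = buf[0] & 0x0F             # text/binary/close/ping/pong
          payload_len = buf[1] & 0x7F        # 126/127 mean extended length follows
          return fin, opcode, payload_len, buf[2:]   # leftovers stay buffered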

    Read the article

  • [MINI HOW-TO] Repair Missing External Hard Drive Database Error in WHS

    - by Mysticgeek
    If you’re using external hard drives with your Windows Home Server, they might get unplugged and create an error. Here we look at running the Repair Wizard to quickly fix the issue. If an external drive that is included in your drive pool becomes unplugged or loses power, you might see the following error under Home Network Health when opening the WHS Console. To fix the issue verify the drive has power and is plugged in correctly and click Repair. The wizard launches and you’ll need to agree that you may lose data during the repair of the backup database. In this example it was a simple problem where an external drive became unplugged from the server…so you can close out of the wizard. If you look under Server Storage you can see the drive is missing…To fix the issue verify the drive has power and is plugged in correctly. WHS will add the drive back into the pool and when finished you’ll see it listed as healthy and good to go. Using external drives that are part of your storage pool may not be the best way to have your home built WHS setup, but if you do, expect occasional errors such as this. Similar Articles Productive Geek Tips Find Your Missing USB Drive on Windows XPFixing "BOOTMGR is missing" Error While Trying to Boot Windows VistaSpeed up External USB Hard Drives in Windows VistaRebit Backup Software [Review]Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips HippoRemote Pro 2.2 Xobni Plus for Outlook All My Movies 5.9 CloudBerry Online Backup 1.5 for Windows Home Server Know if Someone Accessed Your Facebook Account Shop for Music with Windows Media Player 12 Access Free Documentaries at BBC Documentaries Rent Cameras In Bulk At CameraRenter Download Songs From MySpace Steve Jobs’ iPhone 4 Keynote Video

    Read the article

  • Create Shortened goo.gl URLs in Your Favorite Browser

    - by Asian Angel
    Now that the new goo.gl URL shortening service has been active for a bit you may be wanting to add it to your favorite non Internet Explorer/Firefox browser. See how easy it is to enjoy that goo.gl URL shortening goodness with the “goo.gl with prompt for copying Bookmarklet”. goo.gl with Prompt for Copying Bookmarklet In Action To add the bookmarklet to your browser simply drag it to your “Bookmarks Toolbar” and get ready to start creating those shortened URLs. For our example we chose one of the wonderful malware removal articles here at the site. All that you will need to do is click on the bookmarklet to create your shortened URL. The wonderful thing about this bookmarklet is the small pop-up window that provides an easy way for you to copy the shortened URL. You can see the “Context Menu” for the small window here…definitely nice. Once you have copied your new URL you can close the window by clicking on “OK” or pressing “Enter”. Now you are ready to use your new shortened goo.gl URL wherever you like or need to. Conclusion If you have been wanting to add goo.gl URL shortening power to your favorite browser then this is the perfect bookmarklet to have. Links Add the goo.gl with Prompt for Copying Bookmarklet to Your Favorite Browser Similar Articles Productive Geek Tips See Where Shortened URLs “Link To” in Your Favorite BrowserVerify the Destinations of Shortened URLs the Easy WayA Quick Look at URL Shortening Services & ExtensionsCreate Shortened goo.gl URLs in Google Chrome the Easy WayText-Only URL Quick-Fix for Firefox TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Open Multiple Links At One Go NachoFoto Searches Images in Real-time Office 2010 Product Guides Google Maps Place marks – Pizza, Guns or Strip Clubs Monitor Applications With Kiwi LocPDF is a Visual PDF Search Tool

    Read the article
