Search Results

Search found 3184 results on 128 pages for 'beginning'.


  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process takes a very long time to run, with lots of disk IO. Here's my CPU usage graph: the cleanup runs are the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09- and 39-minute marks. At 15:00 I removed the 39-minute entry from cron, so a cleanup job twice the size now runs half as often (you can see the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations tell the same story: at the peak, with about 14,000 sessions active, the cleanup runs for a full 25 minutes, apparently using 100% of one CPU core and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second, so why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual "fuser" step (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
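
    As an illustration (not part of the original question), here is a minimal Python sketch of what that cron job does, minus the fuser check. The 24-minute default and the use of mtime instead of find's -cmin test are assumptions/approximations, not taken from the poster's setup.

```python
"""Rough Python equivalent of the session purge above, without the fuser check.

Assumptions: sessions live in /var/lib/php5 and the lifetime is in minutes
(24 is what the stock maxlifetime helper usually reports); mtime is used as
an approximation of find's -cmin test.
"""
import os
import time

SESSION_DIR = "/var/lib/php5"   # default Debian/Ubuntu session path
MAX_AGE_MINUTES = 24            # assumption: output of /usr/lib/php5/maxlifetime

def purge_stale_sessions(directory: str, max_age_minutes: int) -> int:
    cutoff = time.time() - max_age_minutes * 60
    removed = 0
    with os.scandir(directory) as entries:
        for entry in entries:
            # plain files directly inside the directory, like -mindepth 1 -maxdepth 1 -type f
            if entry.is_file(follow_symlinks=False) and entry.stat().st_mtime < cutoff:
                os.unlink(entry.path)
                removed += 1
    return removed

if __name__ == "__main__":
    print(f"removed {purge_stale_sessions(SESSION_DIR, MAX_AGE_MINUTES)} stale session files")
```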

    Read the article

  • Apache on CentOS 5.9 VM serves my optimized images corrupted (but my Mac doesn't)

    - by Robert K
    I'm using a Vagrant VM to mirror the client's environment as closely as I can. As part of our build process we do no optimization of assets early on; that comes as we're ready to take a site live. Needless to say, this issue is beginning to worry me as we need to take the site live very soon. I use ImageOptim to automate optimization of image assets, which runs a whole series of tools (Zopfli, PNGOUT, OptiPNG, AdvPNG, PNGCrush). I always set the optimizations to their maximum setting. After optimization, my PNGs start looking like this: What's weird is, if I serve the same file through my Mac's copy of Apache, not through Vagrant, the image loads fine. In fact, the only time it's ever corrupt like this is when the image is served from the Vagrant VM and its install of Drupal. All optimized JPEGs display only the first ~20% of the image. And PNGs, depending on the image, may show either a portion or the "progressive"-style corruption below. The browser itself makes no difference, the same browser will serve an uncorrupted image from my Mac's Apache instance and a corrupt image from the VM. When I disable all PNG optimizations except PNGCrush, and the removal of the PNG metadata, the image is served corrupted. I'm optimizing JPEG images with JPEGmini. The server is running CentOS 5.9, Apache 2.2.3-85, PHP 5.3.3, and Drupal 7. As best as I can tell the error lies somewhere within the VM, either with Apache or with (perhaps) the network stack. Seems like the tools that optimize the compression of the PNGs and JPEGs are what trigger this error. I've already determined that the .htaccess file isn't interfering with how the images load. What should I try to troubleshoot this?
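
    As a diagnostic aside (not from the original post): one way to confirm whether the bytes change between disk and the wire is to hash both copies. The URL and path below are hypothetical placeholders. If the served copy turns out to be truncated or stale, Apache's sendfile handling over VirtualBox shared folders is a commonly reported culprit (EnableSendfile Off is the usual workaround), but verify with a comparison like this first.

```python
"""Check whether the bytes Apache serves match the optimized file on disk.

The URL and the path are placeholders for whichever image shows the
corruption; this only tells you *where* the bytes change, not why.
"""
import hashlib
import urllib.request

IMAGE_URL = "http://my-vagrant-vm.local/sites/default/files/logo.png"   # hypothetical
LOCAL_PATH = "/var/www/sites/default/files/logo.png"                    # hypothetical

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

with urllib.request.urlopen(IMAGE_URL) as response:
    served = response.read()

with open(LOCAL_PATH, "rb") as f:
    on_disk = f.read()

print("served :", len(served), sha256(served))
print("on disk:", len(on_disk), sha256(on_disk))
print("identical" if served == on_disk else "the server is altering or truncating the file")
```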

    Read the article

  • How do I reformat/reinstall an OS on a hard drive after it's been wiped by eban?

    - by Aggrevated
    I am trying to install XP SP2 Home on a Dell Inspiron 531s. Yes, it came with Vista, but I happen to own a legal copy of XP. I bought the hard drive off eBay (used). I changed the BIOS settings to boot from CD, but when I start up the computer it goes straight to the hard drive, which reports that it has been cleaned using eban XX.XXX.XXX. My problem is that I can't get the PC to boot from the XP disc. I have tried putting the disc into another PC and it works, and with the wiped hard drive back in, the DVD+RW drive itself works. I am really stumped and am beginning to think that the hard drive is unusable. I only have SATA cables to hook up one hard drive at a time. Any ideas, suggestions, or confirmation that I can't use this hard drive would be appreciated. Yes, I am probably a 'noob', but I have installed XP and Vista many times without any problems. I don't need to be put down about my level of experience; I just need answers. Thanks, everyone.

    Read the article

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, in order (among other things) to bill clients based on usage. The problem we're running into is that there are a few circumstances where our support team will simulate a login to production to try to reproduce reported issues with a client's configuration. When they log in, an entry is made in our analytics logs saying that that specific account has logged in, which we use to calculate billing. A few ideas we had to solve this:
    1) We log IP addresses as well as machine keys for each PC that logs in, so we could filter out known IP addresses and/or machine keys belonging to support (see the sketch below). The drawback is that we have to maintain the list of keys/addresses manually.
    2) If support (or anyone else internal) runs our application in debug mode (as opposed to release), it does not report analytics. This is fine, as long as support or anyone else remembers to switch to debug mode.
    3) Include some sort of registry key or similar setting that must be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set it.
    All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS: Of course we shouldn't be testing in production, but there are a few one-off issues with customer setups that we can't reproduce without logging in as them in production. This is the only time we do so, and it is the case I'm talking about in this question.)
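
    A minimal sketch of option 1 above (filtering out known support IPs and machine keys). The event shape and the hard-coded lists are assumptions for illustration only; in practice the exclusion lists would come from configuration.

```python
"""Sketch of idea 1: drop analytics events that come from known support machines."""
from dataclasses import dataclass

SUPPORT_IPS = {"10.0.0.15", "10.0.0.16"}          # hypothetical office addresses
SUPPORT_MACHINE_KEYS = {"SUPPORT-LAPTOP-01"}      # hypothetical machine keys

@dataclass
class LoginEvent:
    account_id: str
    ip_address: str
    machine_key: str

def is_billable(event: LoginEvent) -> bool:
    """False for anything that looks like an internal support login."""
    return (event.ip_address not in SUPPORT_IPS
            and event.machine_key not in SUPPORT_MACHINE_KEYS)

events = [
    LoginEvent("client-42", "203.0.113.7", "CLIENT-PC"),
    LoginEvent("client-42", "10.0.0.15", "SUPPORT-LAPTOP-01"),
]
billable = [e for e in events if is_billable(e)]
print(f"{len(billable)} of {len(events)} events count toward billing")
```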

    Read the article

  • Compiz & Linux compositing: how does it fit into the X architecture?

    - by Latanius
    Not a really "how to solve stuff" question, but... I was wondering how the modern X architecture works, with compiz & all. What I know about it: in the beginning, there was the X server, clients connected (presumably on TCP), and then sent messages to the server to instruct it to show windows etc. because this didn't work (at all? or just fast enough?) for OpenGL & 3D acceleration, additional APIs were created for direct rendering (DRI? and, in addition to the X server, what things did the X clients talk to to render stuff and through what interfaces?) and, finally, enter Compiz: X clients end up (somehow) rendering to OpenGL textures, which is then put together to form a fancy-looking screen with translucent windows, and rendered to the screen. What I'm especially interested in is what components does the system have and how do they connect to each other? Like... if there is a box labelled "compiz" in the system... is it inside the X server? If it's not, how do the rendered images from the apps end up in it? And where does it render to? Is that another X server? Or DRI? Of course, I'd be equally happy if pointed to some docs capable of clearing up the confusion described above (conditional on they being significantly shorter than book-sized entities).

    Read the article

  • Toshiba Satellite C665 Rebooting from Standby

    - by Coodu
    I am currently working on a C665 with a strange issue. When the panel is closed, the notebook puts itself to sleep in the usual way, and the power LED changes to the pulsing pattern to indicate that the device is asleep. However, when the panel is opened to resume using the notebook, the system restarts itself instead, beginning from the Toshiba logo and proceeding to boot back into Windows 7. I should also note that each time this occurs, the "Windows Startup Recovery" prompt appears, indicating that the system was not shut down correctly. Some things I have tried: updated to the latest Toshiba BIOS; returned the BIOS settings to their defaults; swapped the memory for a known good module and tested it in both memory slots; confirmed that the power settings are set to sleep/wake when the power button is pressed; ran a quick HDD fitness test from a Parted Magic USB stick; checked for BSOD logs using BlueScreenView (none found); ran sfc (no violations found). Any ideas? I have a strong feeling the system is restarting itself; in the Event Viewer there is a "Kernel Power" error, but it simply says "The system was not shut down correctly." Perhaps a bad driver? I'm not sure. Any advice would be great.

    Read the article

  • Using Plesk for webhosting on Ubuntu - Security risk or reasonably safe?

    - by user66952
    Sorry for this newb question; I'm pretty clueless about Plesk and only have limited Debian (without Plesk) experience. If the question is too dumb, telling me how to ask a smarter one, or what kind of info I should read first to improve it, would be appreciated as well. I want to offer a program for download on my website, hosted on an Ubuntu 8.04.4 VPS using Plesk 9.3.0 for web hosting. I have limited SSH access to the server to key-only authentication.
    1. When setting up the web hosting, Plesk created an FTP login and user. Is that a potential security risk that could bypass the key-only access?
    2. I think Plesk itself (even without the FTP user account), through its web interface, could be a risk. Is that correct, or are my concerns exaggerated?
    3. Would you say this setup makes a difference if I'm just using it for the next two weeks and then change servers to a system where I know more about security? In other words, is one less likely to get hacked within the first two weeks of having a new site up and running than in weeks 14 and 15 (perhaps due to appearing in fewer search results in the beginning, or for whatever reason...)?
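
    Not an answer to the trust question, but a quick way to see which of the services mentioned are actually reachable from outside. The host name and the port list are assumptions (21 for the Plesk-created FTP account, 22 for SSH, 8443 for the Plesk panel); a closed port here is no guarantee of safety, it only shows what is reachable from where you run the check.

```python
"""Tiny reachability check for the services a Plesk box typically exposes."""
import socket

HOST = "example-vps.example.com"    # hypothetical server name
PORTS = {21: "FTP", 22: "SSH", 8443: "Plesk panel"}

for port, label in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        state = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(f"{label:12} {port:>5}  {state}")
```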

    Read the article

  • Is Google tracking our web history even when we do not visit Google or its affiliate websites?

    - by Anoyon-12
    I have recently updated to Google Chrome 29.0.1547.76, and when I click the new tab button there is a new homepage now. OK. My current settings forbid third-party cookies from being set, and I clear all of my browsing data every time I close the browser, or whenever I've been browsing too long. There is a help dialog that appears the first time you open the new tab page. What I did was clear all my browsing data (from the beginning of time) and then go to the new tab again; the help dialog appeared. I did the same a third time and the same thing happened. So for the fourth time I cleared all my browsing data, clicked to open a new tab, and then navigated to chrome://settings/cookies, and there were cookies set again. So does this mean Google is tracking our web history just for using Chrome? I know it's not illegal, because these cookies only appear when you click the new tab in Google Chrome 29.0.1547.76; maybe that was the reason Google redesigned the entire new-tab page. Through this, Google is forcing us to allow them to track us. I don't want to set another page as my new-tab page; I just want the old one. Google has a long history of invading privacy without users' consent. There was that Safari incident, I am sure you people remember. So can anyone tell me about this issue? I may be wrong, so please explain.
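
    For anyone who wants to check the same thing outside the browser UI: Chrome keeps its cookies in a SQLite database, and a short script can list which hosts have set them. The profile path below is the usual Linux location and is an assumption; close Chrome first (the database is locked while the browser runs), or work on a copy of the file.

```python
"""List which hosts have cookies in Chrome's profile, read from the Cookies SQLite DB."""
import sqlite3
from pathlib import Path

COOKIES_DB = Path.home() / ".config" / "google-chrome" / "Default" / "Cookies"  # assumed path

conn = sqlite3.connect(str(COOKIES_DB))
try:
    rows = conn.execute(
        "SELECT host_key, COUNT(*) FROM cookies GROUP BY host_key ORDER BY 2 DESC"
    ).fetchall()
finally:
    conn.close()

for host, count in rows:
    print(f"{count:4}  {host}")
```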

    Read the article

  • Contact Form + jQuery validationengine

    - by BigMad
    I created this contact form using jQuery fadeLabel and validationEngine to beautify it. In index.php / index.html (I have not yet figured out which of the two versions to use), the scripts are:

        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
        <script src="/js/backtop.js"></script>
        <script src="/js/fadeLabel.js"></script>
        <script>
          $(document).ready(function () {
            $('form .fadeLabel').fadeLabel();
          });
        </script>
        <script src="/js/validationEngine-it.js"></script>
        <script src="/js/validationEngine.js"></script>
        <script>
          $(document).ready(function(){
            $("#form_box").validationEngine({
              ajaxSubmit: true,
              ajaxSubmitFile: "contact.php",
              ajaxSubmitMessage: "Thank you, We will contact you soon !",
              success : false,
              failure : function() {}
            })
          });
        </script>
        <script src="/js/contactform.js"></script>

    However, this is the relevant part of the form's markup:

        <p id="form_success" class="success hide"><strong>Grazie!</strong> Il tuo messaggio è stato inviato con successo.</p>
        <form id="form_box">
          <fieldset>
            <p><label for="name">Nome*</label><input type="text" id="name" name="name" class="validate[required] fadeLabel" value=""/></p>
            <p><label for="email">E-mail*</label><input type="email" id="email" name="email" class="validate[required,custom[email]] fadeLabel" value=""/></p>
            <p><label for="website">Sito web</label><input type="url" id="website" name="website" class="fadeLabel" value=""/></p>
            <p><label for="message">Messaggio*</label><textarea id="message" name="message" class="validate[required] fadeLabel" cols="30" rows="10" value=""></textarea></p>
          </fieldset>
          <p id="form_submit" class="submit"><button class="send">Invia</button> *Campi obbligatori</p>
          <p id="form_send" class="send hide">Invio in corso&hellip;</p>
          <p id="form_error" class="submit error hide"><button class="send">Invia</button> Si prega di correggere l'errore e re-inviarlo.</p>
        </form>

    This is contact.php, which receives the data and sends two emails (one to me with the data, and a thank-you to whoever contacted me):

        <?php
        //include variables
        $my_email = "[email protected]";
        $my_site = "adrianogenovese.com";
        session_start();
        $name = $_POST['name'];
        $email = $_POST['email'];
        $website = $_POST['website'];
        $message = $_POST['message'];
        $ip = $_SERVER['REMOTE_ADDR'];
        //beginning of the email to me
        $to = $my_email;
        $sbj = "Richiesta Informazioni - $my_site";
        $msg = "
        <html>
        ... //the body of the email to me
        ...
        </html>
        ";
        $from = $email;
        $headers = 'MIME-Version: 1.0' . "\n";
        $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\n";
        $headers .= "From: $from";
        mail($to,$sbj,$msg,$headers); //email sent to me
        //beginning of the email to the recipient
        $toClient = $email;
        $msgClient = "
        <html>
        ... //the body of the email to the recipient
        ...
        </html>
        ";
        $fromClient = $my_email;
        $sbjClient = "Grazie $name per aver contattato $my_site ";
        $headersClient = 'MIME-Version: 1.0' . "\r\n";
        $headersClient .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
        $headersClient .= "From: $fromClient";
        mail($toClient,$sbjClient,$msgClient,$headersClient); //mail sent to the client
        //order confirmation email
        //Reset error
        session_destroy();
        exit;
        ?>

    And this is the contact form script, contactform.js:

        $(document).ready(function() {
          $(".send").click(function(){
            $("#form_send").removeClass('hide');
            $("#form_submit").addClass('hide');
            $("#form_error").addClass('hide');
            var name = $("#name").val();
            var email = $("#email").val();
            var website = $("#website").val();
            var message = $("#message").val();
            if (name == "" || email == "" ) {
              $("#form_send").addClass('hide');
              $("#form_error").removeClass('hide');
            } else {
              $.ajax({
                type: "POST",
                url: "contatti/contact.php",
                data: "name=" + name + "&email=" + email + "&message=" + message + "&website=" + website,
                dataType: "html",
                success: function(msg) {
                  $("#form_send").addClass('hide').delay(3000).fadeOut(3000);
                  $("#form_success").removeClass('hide');
                  $("#form_box").addClass('hide').slideUp(2000).fadeOut();
                },
                error: function() {
                  alert("An unexpected error occurred...");
                }
              });
            }
          }); //end form
        }); //end Dom

    The jQuery itself seems to work very well. I wanted to make sure that the page is neither reloaded by the form nor sent to another page (the only thing that works for now), but instead I am running into the following problems:
    - I always get the alert from contactform.js.
    - It does not send any mail at all, neither to me nor to the recipient.
    - I cannot get delay().fadeOut()/fadeIn() and slideUp().fadeOut() to work properly, so that the "sending" message ($("#form_send").addClass('hide')) is shown for about 3 seconds before anything else happens, then the form slides up and disappears over a couple of seconds ($("#form_box").addClass('hide')), leaving just the success message displayed ($("#form_success").removeClass('hide')).
    - The form data also appears in the address bar (e.g. ../index.php?name=test&email=example%40mail.com&website=&message=helloworld).

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr> <introduction> I am programming something I have never done before and there might already be tools out there to do the exact thing I am programming but at this point I am having too much fun to care so I am still going to do it from scratch, even if there is a tool for this. So anyways, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also keep track of different submissions in a version control system. The simplest solution I could think of is just implement a solution that keeps track of the text body and the diff patch history of different versions of the text body as objects in Ruby and then serialize it, preferably in human readable form (so I'll most likely use YAML for this) for editing if needed due to corruption by a software bug or a mistake is made by an admin doing some version editing. So at first I just tried to dive in head first into this feature to find that the problem of generating a diff patch is more difficult that I thought to do efficiently. So I did some research and came across some ideas. Some I have implemented already and some I have not. However, it all pretty much revolves around the longest common subsequence problem, as you would already know if you have already done anything with diff or diff-like features, and optimization the function that solves it. Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line like in most longest common subsequence algorithms I have seen examples of, I increment when I have a non-matching line so as to calculate edit distance instead of longest common subsequence. Although as far as I can tell between the two approaches, they are essentially two sides of the same coin so either could be used to derive an answer. It then back-traces through the comparison matrix and notes when there was an incrementation and in which adjacent cell (West, Northwest, or North) to determine that line's diff entry and assumes all other lines to be unchanged. Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started getting worried about needing to optimize at least enough so if a spammer that somehow knew how I implemented the version control system and knew my worst case scenario entry still wouldn't be able to hit the server that bad. After some searching and reading of research papers and articles through the internet, I've come across several that seem decent but all seem to have pros and cons and I am having a hard time deciding how well in this situation that the pros and cons balance out. So are the ones listed here worth it? I have listed them with known pros and cons. 
</introduction> <optimizations> Chop the compared sequences into multiple chucks of subsequences by splitting where lines are unchanged, and then truncating each section of unchanged lines at the beginning and end of each section. Then solve the edit distance of each subsequence. Pro: Changes the time increase as the changed area gets bigger from a quadratic increase to something more similar to a linear increase. Con: Figuring out where to split already seems like you have to solve edit distance except now you don't care how it is changed. Would be fine if this was solvable by a process closer to solving hamming distance but a single insertion would throw this off. Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness. Then solve the edit distance comparing the hash integers instead of the sequence elements themselves. Pro: The operation of comparing two integers is faster than the operation of comparing two strings, so a slight performance gain is received after every comparison, which can be a lot overall. Con: Using a cryptographic hash function takes time to convert all the sequence elements and may end up costing more time to do the conversion that you gain back from the integer comparisons. You could use the built in hash function for a string but that will not guarantee uniqueness. Use lazy evaluation to only calculate the three center-most diagonals of the comparison matrix and then only calculate additional diagonals as needed. And then also use this approach to possibly remove the need on some comparisons to compare all three adjacent cells as desribed here. Pro: Can turn an algorithm that always takes O(n * m) time and make it so only worst case scenario is that time, best case becomes practically linear, and average case is somewhere between the two. Con: It is an algorithm I've only seen implemented in functional programming languages and I am having a difficult time comprehending how to convert this into Ruby based on how it is described at the site linked to above. Make a C module and do the hard work at the native level in C and just make a Ruby wrapper for it so Ruby can make all the calls to it that it needs. Pro: I have to imagine that evaluating something like this in could be a LOT faster. Con: I have no idea how Rails handles apps with ruby code that has C extensions and it hurts the portability of the app. This is an optimization for after the solving of edit distance, but idea is to store additional combined diffs with the ones produced by each version to make a delta-tree data structure with the most recently made diff as the root node of the tree so getting to any version takes worst case time of O(log n) instead of O(n). Pro: Would make going back to an old version a lot faster. Con: It would mean every new commit, the delta-tree would get a new root node that will cost time to reorganize the delta-tree for an operation that will be carried out a lot more often than going back a version, not to mention the unlikelihood it will be an old version. </optimizations> So are these things worth the effort?
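
    For reference (not from the original question), here is a compact Python sketch of the baseline approach described above - trim the common prefix and suffix, then fill a dynamic-programming table for edit distance - without any of the proposed optimizations applied.

```python
"""Baseline from the question: strip the common prefix/suffix of the two line
sequences, then compute edit distance on what is left with a DP table."""

def trim_common_ends(a, b):
    start = 0
    while start < len(a) and start < len(b) and a[start] == b[start]:
        start += 1
    end = 0
    while (end < len(a) - start and end < len(b) - start
           and a[-1 - end] == b[-1 - end]):
        end += 1
    return a[start:len(a) - end], b[start:len(b) - end]

def edit_distance(a, b):
    a, b = trim_common_ends(a, b)
    # classic (len(a)+1) x (len(b)+1) table, kept one row at a time
    prev = list(range(len(b) + 1))
    for i, line_a in enumerate(a, 1):
        curr = [i] + [0] * len(b)
        for j, line_b in enumerate(b, 1):
            cost = 0 if line_a == line_b else 1
            curr[j] = min(prev[j] + 1,        # delete
                          curr[j - 1] + 1,    # insert
                          prev[j - 1] + cost) # match / substitute
        prev = curr
    return prev[-1]

old = "a b c d e".split()
new = "a b x d e".split()
print(edit_distance(old, new))   # 1
```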

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
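
    As a cross-language aside (not part of the original post), the row-based "Defined-Input/Expected-Output" idiom described above maps almost directly onto pytest's parametrize. The add() function below is a hypothetical Python stand-in for the C# Calculator component, not the author's code.

```python
"""The Defined-Input/Expected-Output idiom from the post, expressed with
pytest parameters instead of Gallio/MbUnit Row attributes."""
import pytest

def add(operand1: float, operand2: float) -> float:
    """Stand-in for ICalculator.Add(): rejects negative operands, then adds."""
    if operand1 < 0 or operand2 < 0:
        raise ValueError("Value must not be negative.")
    return operand1 + operand2

@pytest.mark.parametrize("operand1, operand2, expected", [
    (1, 1, 2),
    (0, 999999999, 999999999),
    (0, 0, 0),
])
def test_add(operand1, operand2, expected):
    assert add(operand1, operand2) == expected

@pytest.mark.parametrize("operand1, operand2", [(-0.5, 2), (295, -123)])
def test_add_raises_on_negative_operands(operand1, operand2):
    with pytest.raises(ValueError):
        add(operand1, operand2)
```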

    Read the article

  • MySQL – Learning MySQL Online in 6 Hours – MySQL Fundamentals in 320 Minutes

    - by Pinal Dave
    MySQL is one of the most popular database systems, and I have recently been working with it a lot. Data has no barriers, and every database has its own place. I have been working with MySQL for quite a while and, just as with SQL Server, I often find lots of people asking me if I have a tutorial that can teach them MySQL from the beginning. Here is the good news: I have written two different courses on MySQL Fundamentals, which are available online. The reason for writing two different courses was to keep the learning simple. The two courses are closely connected, but they are designed so that you can watch either one independently and learn without dependencies. However, if you ask me, I suggest that you watch the MySQL Fundamentals Part 1 course followed by the MySQL Fundamentals Part 2 course. Let us quickly explore the outline of the MySQL courses.
    MySQL Fundamentals – Part 1 (157 minutes): MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open-source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL as well as how to write basic data retrieval and data modification queries. Introduction (00:02:12), Installations and GUI Tools (00:13:51), Fundamentals of RDBMS and Database Designs (00:16:13), Introduction to MySQL Workbench (00:31:51), Data Retrieval Techniques (01:11:13), Data Modification Techniques (00:20:41), Summary and Resources (00:01:31).
    MySQL Fundamentals – Part 2 (163 minutes): MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open-source web application software stack. In this course, which is part 2 of the Fundamentals of MySQL series, we explore more advanced topics such as stored procedures and user-defined functions, subqueries and joins, views, and events and triggers. Introduction (00:02:09), Joins, Unions and Subqueries (01:03:56), MySQL Functions (00:36:55), MySQL Views (00:19:19), Stored Procedures and Stored Functions (00:25:23), Triggers and Events (00:13:41), Summary and Resources (00:02:18).
    Note: if you click the links above and do not see the play button for the course, you will have to log in to the system to watch it. I would like to throw down a challenge: can you watch both courses in a single day? If yes, once you are done watching, your Pluralsight profile page (here is mine: http://pluralsight.com/training/users/pinal-dave) will show the corresponding badges. If you have already watched MySQL Fundamentals Part 1, you can qualify by just watching MySQL Fundamentals Part 2. Just send me the link to your profile and I will publish your name on this blog. For the first five people who send me email at Pinal at sqlauthority.com, I might have something cool as a giveaway as well. Watch the teaser of the MySQL course. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
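
    As a hedged illustration of the "Data Retrieval Techniques" topic (not part of the course outline), here is a first SELECT from Python using the mysql-connector-python driver; the credentials and the world.city sample table are assumptions, so substitute your own schema.

```python
"""A first SELECT against MySQL from Python.
Requires: pip install mysql-connector-python
Assumes a local server with the 'world' sample database loaded."""
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1",
    user="demo",          # hypothetical credentials
    password="demo",
    database="world",
)
try:
    cursor = conn.cursor()
    cursor.execute("SELECT Name, Population FROM city ORDER BY Population DESC LIMIT 5")
    for name, population in cursor:
        print(f"{name:20} {population:>10,}")
finally:
    conn.close()
```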

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Add Keyboard Input Language to Ubuntu

    - by Matthew Guay
Want to type in multiple languages in Ubuntu? Here we’ll show you how you can easily add and switch between multiple keyboard layouts in Ubuntu.

Add a Keyboard Language
To add a keyboard language, open the System menu, select Preferences, and then select Keyboard. In the Keyboard Preferences dialog, select the Layouts tab, and click Add. You can select a country and then choose a language and keyboard variant. Note that some countries, such as the United States, may show several languages. Once you’ve made your selection, you can preview it on the sample keyboard displayed below the menu. Alternately, on the second tab, select a language and then choose a variant. Click Add when you’ve made your selection. Now you’ll notice that there are two languages listed in the Keyboard Preferences, and they’re both ready to use immediately. You can add more if you wish, or close the dialog.

Switch Between Languages
When you have multiple input languages installed, you’ll notice a new icon in your system tray on the top right. It will show the abbreviation of the country and/or language name that is currently selected. Click the icon to change the language. Right-click the icon to view available languages (listed under Groups), open the Keyboard Preferences dialog again, or show the current layout. If you select Show Current Layout you’ll see a window with the keyboard preview we saw previously when setting the keyboard layout. You can even print this layout preview out to help you remember a layout if you wish.

Change Keyboard Shortcuts to Switch Languages
By default, you can switch input languages in Ubuntu from the keyboard by pressing both Alt keys together. Many users are already used to the default Alt+Shift combination to switch input languages in Windows, and we can add that in Ubuntu. Open the Keyboard Preferences dialog, select the Layouts tab, and click Options. Click the plus sign beside Key(s) to change layout, and select Alt+Shift. Click Close, and you can now use this familiar shortcut to switch input languages. The layout options dialog offers many more neat keyboard shortcuts and options. One especially neat option is the ability to use a keyboard LED to show when you’re using the alternate keyboard layout. We selected the ScrollLock light since it’s hardly used today, and now it lights up when we’re using our other input language.

Conclusion
Whether you regularly type in multiple languages or only need to enter an occasional character from an alternate keyboard layout, Ubuntu’s keyboard settings make it easy to make your keyboard work the way you want. And since you can preview and print a keyboard layout, you can even remember an alternate keyboard’s layout if it’s not printed on your keyboard. Windows users, you’re not left behind, either. Check out our tutorial on how to Add keyboard languages to XP, Vista, and Windows 7.
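If you prefer working from a terminal, or want to script layout changes, a similar result can usually be achieved with the setxkbmap utility. Below is a small, hedged Python sketch that wraps it with subprocess; the layout codes ("us", "de", "dvorak") are just examples, and this assumes an X session where setxkbmap is available, which is separate from the GUI steps described above.

```python
# Sketch: query and switch keyboard layouts from a script using setxkbmap.
# Assumes an X11 session with the setxkbmap tool installed (standard on Ubuntu desktops).
import subprocess

def current_layout() -> str:
    """Return the active layout reported by `setxkbmap -query`."""
    out = subprocess.run(["setxkbmap", "-query"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith("layout:"):
            return line.split(":", 1)[1].strip()
    return ""

def set_layout(layout: str, variant: str = "") -> None:
    """Activate a layout, e.g. set_layout('de') or set_layout('us', 'dvorak')."""
    cmd = ["setxkbmap", layout]
    if variant:
        cmd += ["-variant", variant]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    print("Current layout:", current_layout())
    set_layout("us")  # example layout code; replace with the one you added in Keyboard Preferences
```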

    Read the article

  • SQLAuthority News – Tips for Traveling to Nepal

    - by pinaldave
If you are a regular reader of this blog, you might know that I travel more than 20 days out of 30 in a month. There are times when I don’t have a chance to go home for an entire month and my family has to travel to different cities just to meet me. During my recent visit, one of my acquaintances suggested that I blog about my travel experiences as well, since this can be helpful to others who are traveling to the same country or city. I have previously written about my experience with all the airlines in India. Today I am writing a few tips about traveling to the beautiful country of Nepal. Kathmandu, the capital of Nepal, is very scenic and there are lots of historical places to see and visit. I was fortunate enough to stop over at the Pashupatinath Temple, Bhaktapur, Vasantpur and the temple of the Kumari Goddess. I also visited the casinos there, but even though I had stayed in Las Vegas for three and a half years before, I was not keen on them, so I left the casinos just like I did in Las Vegas. I also traveled to the famous Thamel area by car. Here are my quick tips for anyone who is planning to visit Nepal. They are not categorized, just written in the order they came to my mind. Please note that if you are an Indian, you will get special privileges everywhere in Nepal, beginning right from the Indian airports. Use the expression “Namaste!” if you want to greet any Indian or Nepali. Indian Nationals do not need a visa or passport to enter Nepal. In fact, Indian Nationals can just walk into Nepal without any passport, but they should carry a valid Indian ID. There is no use for a passport since it will not be stamped at any immigration port, whether in India or Nepal. Indian currency is widely accepted everywhere. However, please bring only Rs. 100 bills/notes, as Rs. 500 and Rs. 1000 notes are not accepted (the casinos, however, will accept larger bills). India’s national language, Hindi, is widely spoken and understood everywhere; I did not find a single person who had trouble speaking it. The Nepali language uses the Devnagari script, which is similar to Hindi. Here you will find food from almost every country, and the taste of Nepali food is authentic and very delicious. It is very safe to travel and move around in Kathmandu (despite what the media suggests). However, it will really help if you have a friend who speaks Nepali; you can negotiate a few deals and cut prices to almost 1/5 of the original quoted price. If you are from Gujarat, India, you will find that Nepali shares many common words with your language. Temples are everywhere, so do not miss visiting a few of them. Pashupatinath is a must. Only followers of the Hindu religion (from Nepal and India only) are allowed in most of the holy places. Cameras are allowed everywhere except in the holy places. Now it is your turn to share your opinions or any suggestions. I think Nepal is a great country as there are lots of places to visit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • Convert YouTube Videos to MP3 with YouTube Downloader

    - by DigitalGeekery
Are you looking for a way to take the music videos you watch on YouTube and convert them to MP3? Today we take a look at an easy way to convert those YouTube videos to MP3 for free with YouTube Downloader. The YouTube Downloader functions in two steps. First, it downloads the video from YouTube in MP4 format, and then it allows you to convert that MP4 file to MP3. Note: It also supports conversion to some other formats such as AVI video, MOV, iPhone, PSP, 3GP, and WMV.

Installation and usage
Download and install YouTube Downloader (see download link below). Open YouTube Downloader by clicking on the desktop icon. Find a YouTube video you’d like to convert to MP3 and copy the URL. Paste the URL into the “Enter video URL” text box in YouTube Downloader. When you hover your mouse over the text box, the text box will auto-fill with the URL from your clipboard. Select the “Download video from YouTube” radio button and click “Ok.” Choose a folder location to download your YouTube video to and click “Save.” The video is downloaded in MP4 format. Now wait while the video is downloaded to your hard drive. Select the “Convert video (previously downloaded) from file” radio button. Click the (…) button to the right of the “Select video file” text box to browse for and select the MP4 file you just downloaded. Then select “MPEG Audio Layer (MP3)” from the “Convert to” drop down list. Select “OK” to begin the conversion. Choose the conversion quality by moving the slider to the right or left. The options are: Low (96kbps bit rate), Medium (128kbps bit rate), Optimal (192kbps bit rate), and High (256kbps bit rate). Here you can select the output volume as well. Click “OK” when finished. If there is a portion of the beginning or end of the video that you wish to cut out of the MP3, select the “Cut video” check box and choose a Start and End time. Click “OK” when finished. Note: The start and end times represent the portion of the audio you wish to keep; everything before and after these times will be cut. The conversion process will begin and should only take a few moments. Times will vary depending on the size of the video you’re converting. Conversion was successful! The MP3 you converted will be in the same directory you downloaded the video to. Now you’re ready to listen to your MP3 or import it to your Zune, iTunes, or music library. You may also want to delete the MP4 files after the conversion if you will no longer need them.

Conclusion
YouTube Downloader features a very simple interface that’s user friendly and easy to use. It comes in handy when you watch videos that look horrible but the sound quality is good, or if you just need to hear the audio of something posted and don’t need the video. It also allows you to download from Google Video, MySpace, and others.
Download YouTube Downloader
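As an alternative to the GUI tool covered above (not the article’s method, just a different route), the same download-then-extract-audio workflow can be scripted. The sketch below uses the yt-dlp Python package together with FFmpeg, both of which must be installed separately; the URL is a placeholder.

```python
# Sketch: download a YouTube video and extract MP3 audio with yt-dlp + FFmpeg.
# Assumes `pip install yt-dlp` and an ffmpeg binary on the PATH.
import yt_dlp

options = {
    "format": "bestaudio/best",          # grab the best audio-only stream available
    "outtmpl": "%(title)s.%(ext)s",      # name the output file after the video title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",     # convert the download to audio with ffmpeg
        "preferredcodec": "mp3",
        "preferredquality": "192",       # roughly the "Optimal" 192kbps setting mentioned above
    }],
}

with yt_dlp.YoutubeDL(options) as downloader:
    downloader.download(["https://www.youtube.com/watch?v=EXAMPLE_ID"])  # placeholder URL
```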

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
At Oracle we often do telephone interviews with candidates at different stages of the process, because we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during a telephone interview. To help you prepare even better for a telephone interview we would like to introduce you to the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. It is important to keep in mind that you shouldn’t memorise what’s on the sheet or check it off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:
• Divide a piece of paper in two by drawing a line. On one side of the paper, write a list of requirements as mentioned in the job description. On the other side, list your qualities that fulfill the requirements of the employer. This will help you in answering questions about why you are the best candidate for the job and how you fit the role.
• Do research on the company, the industry sector and the competitors, so you will get a feeling for the company’s business and can ask more in-depth questions.
• Be prepared for the most used introduction question: “Tell me a bit about yourself”. Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.
• Write down a minimum of 5 good examples to answer behavioral interview questions (“Tell me about a time when...” or “Give me an example of a time...”). These questions are used by interviewers to see how you deal with situations similar to those you might encounter in the job. Interviewers use this type of question because past behaviour is scientifically proven to be the best predictor of future behaviour.
• List five questions to ask the interviewer about the job, the company and the industry to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com.
• Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.
• Ask for permission from the people you plan to use as a reference. Also make sure you have your CV at hand and an overview of your grades.
Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB /boot ext4 8GB swap 3TB / ext4 But I've also tried the following, just in case it matters: 8GB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue> Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit ubuntu v12.04 on both of my 250GB in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas? 
==========
motherboard == gigabyte 990FXA-UD7
CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
RAM == 8GB of DDR3 in 2 sticks (matched pair)
HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS)
HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years)
HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
GPU == nvidia GTX-285
??? == no overclocking or other funky business
USB == external seagate 2TB HDD for making backups
DVD == one bluray burner (SATA)
DVD == one DVD burner (SATA)
64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.
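This is not from the original poster, but one common cause of the “unknown filesystem” grub rescue prompt on drives larger than 2TB is booting a GPT-partitioned disk in legacy BIOS mode without a small bios_grub partition (or, when booting in EFI mode, without a proper EFI system partition). The sketch below is a hypothetical pre-partitioning step using parted, driven from Python purely for illustration; the device name and sizes are assumptions, and running it would destroy all data on the target disk.

```python
# Hypothetical sketch: prepare a 3TB disk with GPT plus a BIOS boot partition
# before running the Ubuntu installer in legacy (non-EFI) mode.
# WARNING: this wipes /dev/sdX -- the device name is a placeholder.
import subprocess

DISK = "/dev/sdX"  # placeholder; double-check with `sudo parted -l` before running

commands = [
    ["parted", "-s", DISK, "mklabel", "gpt"],                       # a 3TB disk needs GPT, not MBR
    ["parted", "-s", DISK, "mkpart", "grub", "1MiB", "3MiB"],       # tiny partition for GRUB's core image
    ["parted", "-s", DISK, "set", "1", "bios_grub", "on"],
    ["parted", "-s", DISK, "mkpart", "boot", "ext4", "3MiB", "8GiB"],
    ["parted", "-s", DISK, "mkpart", "swap", "linux-swap", "8GiB", "16GiB"],
    ["parted", "-s", DISK, "mkpart", "root", "ext4", "16GiB", "100%"],
]

for cmd in commands:
    subprocess.run(["sudo"] + cmd, check=True)  # then pick "use existing partitions" in the installer
```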

    Read the article

  • Apply Skins to Add Some Flair to Windows Media Player 12

    - by DigitalGeekery
Tired of the same look and feel of Windows Media Player in Windows 7? We’ll show you how to inject new life into your media experience by applying skins in WMP 12.

Adding Skins
In Library view, click on View from the Menu and select Skin Chooser. By default, WMP 12 comes with only a couple of modest skins. When you select a skin from the left pane, a preview will be displayed to the right. To apply one of the skins, simply select it from the pane on the left and click Apply Skin. You can also switch to the currently selected skin in the Skin Chooser by selecting Skin from the View menu, or by pressing Ctrl + 2. Media Player will open in Now Playing mode. Click on the Switch to Library button at the top left to return to Library view. Ok, so the included skins are a little boring. You can find additional skins by selecting Tools > Download > Skins, or by clicking on More Skins from within the Skin Chooser. You will be taken to the Microsoft website where you can choose from dozens of skins to download and install. Select a skin you’d like to try and click the link to download. If prompted with a warning message about files containing scripts that access your library, click Yes. Note: These warning boxes may look a bit different depending on your browser. We are using Chrome for this example. Click on View Now. Your new skin will be on display. To get back to Library mode, find and click the Return to Full Mode button. Some skins may launch video in a separate window. If you want to delete one of the skins, select it from the list within the Skin Chooser and click the red “X.” You can also press the delete key on your keyboard. Then click Yes to confirm.

Conclusion
Using skins is a quick and easy way to add some style to Windows Media Player, and switching back and forth between skins is a breeze. Regardless of your interests, you are sure to find a skin that fits your tastes. You may find WMP skins on other sites, but sticking with Microsoft’s website will ensure maximum compatibility. Skins for Windows Media Player

    Read the article

  • Build Explorer version 1.1 for Visual Studio Team Explorer is released

    - by terje
Our free extension to Visual Studio, the folder-based Build Explorer Version 1.1, has now been released and uploaded to the Visual Studio Gallery and Codeplex. We have collected a few changes and bug fixes, as follows:

Changes:
Queue Default Builds can now be optionally fully enabled, fully disabled or enabled just for leaf nodes (=disabled for folders). If you have a large number of builds it was pretty scary to be able to launch all of them with just one click. However, it is nice to avoid having the dialog box up when you want to just run off a single build. That’s the reasoning behind the 3rd choice here.
Auto fill-in of the builds at start up and refresh. This was a request that came up a lot, and which was also irritating to us. When the Team Project is opened, the Build Explorer will start by itself and fill up its tree, so you don’t need to click the node anymore. There was also quite a bit of flashing when the tree filled up; this has been reduced to just a single top-level fill before it collapses the node. The speed of the buildup of the tree has also been increased.
The “All Build Definitions” node is now shown on top of the list.
A login box appeared in certain cross-domain situations. This was a fix for the TF30063 authentication problem we had in the beginning. Hopefully the new code has that fixed properly so that both the login box and the TF30063 are gone forever. Our testing so far seems to indicate it works. If anyone gets a real problem here there are two workarounds: 1) Turn off the auto refresh to reduce the issue. If this doesn’t fix it, then 2) please reinstall the former version (go to the Codeplex download site if you don’t have it anymore), write a comment to this blog post with a description of what happens, and I will send a temporary fix asap.

Bug fixes:
The folder name matching was case sensitive, so “Application.CI” and “application.CI” created two different folders.
View all builds was not shown for leaf nodes, and View builds didn’t work in all cases. There were some inconsistencies here which have been fixed.
Partly fixed: The context menu to queue a new build for disabled builds should be removed, but that was a difficult one and is still on the list; the command will not do anything for a disabled build.
Using Queue Default Builds on a folder that had some disabled builds below it made an error box appear and ruined the whole experience.

As a result of these fixes some new options have been introduced, as shown below. The first two settings, the Separator symbol and the options for how to handle queuing of default builds, are set per Team Project and are stored in TFS source control under the BuildProcessTemplates folder, with the name Inmeta.VisualStudio.BuildExplorer.Settings.xml. The next two settings need some explanation. They handle the behavior of the auto update of the build folders. These are stored in the local registry per user, at the key HKEY_CURRENT_USER/Software\Inmeta\BuildExplorer. If the first option, Use Timed Refresh at Startup, is turned off, you will need to click the node as was done in Version 1.0. The second option is a timed value, the time between when the Build Explorer node is created and when the scanning of the build folders starts. It is assumed that this is enough, and the tests so far indicate this. If you have very many builds and you see that the explorer doesn’t get them all, try to increase this value, and of course, notify me of your case, either here or on the Visual Studio Gallery site.
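The exact value names under that registry key are not documented in the post, so the following is only a hypothetical sketch of how one might list the per-user Build Explorer settings from Python on the machine where Team Explorer runs; check regedit for the real value names before changing anything.

```python
# Hypothetical sketch: list the Build Explorer per-user settings stored in the registry.
# Run on Windows; the key path comes from the post, but value names are not documented.
import winreg

KEY_PATH = r"Software\Inmeta\BuildExplorer"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    index = 0
    while True:
        try:
            name, value, value_type = winreg.EnumValue(key, index)
            print(f"{name} = {value!r} (type {value_type})")
            index += 1
        except OSError:
            break  # no more values under this key
```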

    Read the article

  • SQL SERVER – Transcript of Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics and I received lots of requests to share some insight into the course. Here is a 200-second interview of Vinod Kumar that I took right after completing the course. We have a few free codes to watch the course; please post your comment at http://facebook.com/SQLAuth and we will send the code to a few of the first ones. Many people said they would like to read the transcript of the video, so I have generated it here.
Pinal: Vinod, we recently released this course, SQL Server Indexing. It is about performance tuning. So tell me – how do indexes help performance?
Vinod: I think what happens in the industry when it comes to performance is that developers and DBAs look at indexes first. So that’s the first step for any performance tuning exercise; indexing is one of the most critical aspects and it is important to learn it the right way.
Pinal: Correct. So what you mean to say is that if you know indexing you can pretty much tune any server and query.
Vinod: So I might contradict my first statement now. Indexing is usually a stepping stone but it does not lead you to the end. But it’s good to start with indexing and there are lots of nuances to indexing that you need to understand, like how SQL uses indexing and how performance can improve because of the strategies that you have made.
Pinal: But now I’m confused. First you said indexes are good, and then you said that indexes can degrade your performance. So what is this course about? I mean how does this course really make an impact?
Vinod: Ok – so from the course perspective, what we are trying to do is give you a capsule which gives you a good start. Every journey needs a beginning, you need that first step. This course is that first step in understanding. This is the most basic, fundamental course that we have tried to attack. This is the fundamentals of indexing, some of the key things that you must know about indexing. Some of the basics of indexing are lesser known and so I think this course is geared towards each and every one of you out there who wants to understand a little bit more about indexing.
Pinal: So what I understand is that if I enroll in this course I will have a minimum understanding about indexing when dealing with performance tuning. Right?
Vinod: Exactly. In this course we have tried to give you a nice summary. We are talking about clustered indexing, non-clustered indexing, too many indexes, too few indexes, over indexing, under indexing, duplicate indexing, and columnstore indexing with SQL Server 2012. There’s lots to learn.
Pinal: You can see the URL [http://bit.ly/sql-index] of the course on the screen. Go ahead, attend, and let us know what you think about it. Thank you.
Vinod: Thank you.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Understanding EDI 997.

    - by VishnuTiwariBlog
Hi Guys, this is for EDI starters. Below are the complete details of the EDI 997 segments and elements.

997 Functional Acknowledgment Transaction Layout (position, segment/element ID, name, description, example, and M/O = Mandatory/Optional):

010 ST – Transaction Set Header (M). To indicate the start of a transaction set and to assign a control number. Example: ST*997*382823~
- ST01 (M): Code uniquely identifying a Transaction Set.
- ST02 (M): Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set.

020 AK1 – Functional Group Response Header (M). To start acknowledgment of a functional group. Example: AK1*QM*2459823
- AK101: Code identifying a group of application related transaction sets (IN Invoice Information (810), SH Ship Notice/Manifest (856)).
- AK102: Assigned number originated and maintained by the sender.

030 AK2 – Transaction Set Response Header (M). To start acknowledgment of a single transaction set. Example: AK2*856*001
- AK201 (M): Code uniquely identifying a Transaction Set (810 Invoice, 856 Ship Notice/Manifest).
- AK202 (M): Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set.

040 AK3 – Data Segment Note (O). To report errors in a data segment and identify the location of the data segment. Example: AK3*TD3*9
- AK301 Segment ID Code: Code defining the segment ID of the data segment in error (see Appendix A - Number 77).
- AK302 Segment Position in Transaction Set: The numerical count position of this data segment from the start of the transaction set; the transaction set header is count position 1.

050 AK4 – Data Element Note (O). To report errors in a data element or composite data structure and identify the location of the data element. Example: AK4*2**2
- AK401 Position in Segment: Code indicating the relative position of a simple data element, or the relative position of a composite data structure combined with the relative position of the component data element within the composite data structure, in error; the count starts with 1 for the simple data element or composite data structure immediately following the segment ID.
- AK402 Element Position in Segment: Used to indicate the relative position of a simple data element, or the relative position of a composite data structure with the relative position of the component within the composite data structure, in error; in the data segment the count starts with 1 for the simple data element or composite data structure immediately following the segment ID.
- AK403 Data Element Syntax Error Code: Code indicating the error found after syntax edits of a data element: 1 Mandatory Data Element Missing, 2 Conditional Required Data Element Missing, 3 Too Many Data Elements, 4 Data Element Too Short, 5 Data Element Too Long, 6 Invalid Character in Data Element, 7 Invalid Code Value, 8 Invalid Date, 9 Invalid Time, 10 Exclusion Condition Violated.
- AK404 Copy of Bad Data Element: A copy of the data element in error.

060 AK5 – Transaction Set Response Trailer (M). To acknowledge acceptance or rejection and report errors in a transaction set. Examples: AK5*A~ and AK5*R*5~
- AK501 Transaction Set Acknowledgment Code: Code indicating accept or reject condition based on the syntax editing of the transaction set: A Accepted, E Accepted But Errors Were Noted, R Rejected.
- AK502 Transaction Set Syntax Error Code: Code indicating error found based on the syntax editing of a transaction set: 1 Transaction Set Not Supported, 2 Transaction Set Trailer Missing, 3 Transaction Set Control Number in Header and Trailer Do Not Match, 4 Number of Included Segments Does Not Match Actual Count, 5 One or More Segments in Error, 6 Missing or Invalid Transaction Set Identifier, 7 Missing or Invalid Transaction Set Control Number.

070 AK9 – Functional Group Response Trailer (M). To acknowledge acceptance or rejection of a functional group and report the number of included transaction sets from the original trailer, the accepted sets, and the received sets in this functional group. Examples: AK9*A*1*1*1~ and AK9*R*1*1*0~
- AK901 Functional Group Acknowledge Code: Code indicating accept or reject condition based on the syntax editing of the functional group: A Accepted, E Accepted But Errors Were Noted, R Rejected.
- AK902 Number of Transaction Sets Included: Total number of transaction sets included in the functional group or interchange (transmission) group terminated by the trailer containing this data element.
- AK903 Number of Received Transaction Sets: Number of Transaction Sets received.
- AK904 Number of Accepted Transaction Sets: Number of accepted Transaction Sets in a Functional Group.
- AK905 Functional Group Syntax Error Code: Code indicating error found based on the syntax editing of the functional group header and/or trailer: 1 Functional Group Not Supported, 2 Functional Group Version Not Supported, 3 Functional Group Trailer Missing, 4 Group Control Number in the Functional Group Header and Trailer Do Not Agree, 5 Number of Included Transaction Sets Does Not Match Actual Count, 6 Group Control Number Violates Syntax.

080 SE – Transaction Set Trailer (M). To indicate the end of the transaction set and provide the count of the transmitted segments (including the beginning (ST) and ending (SE) segments). Example: SE*9*223~
- SE01 Number of Included Segments: Total number of segments included in a transaction set including ST and SE segments.
- SE02 Transaction Set Control Number: Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set.
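To make the layout above concrete, here is a small, hedged Python sketch (not from the original post) that parses a raw 997 string using the conventions shown in the examples — '~' as the segment terminator and '*' as the element separator — and pulls out the AK5 and AK9 status codes. Real interchanges also carry ISA/GS envelopes and may use different delimiters, which this sketch ignores.

```python
# Sketch: read the acceptance status out of a bare 997 acknowledgment.
# Assumes '~' segment terminators and '*' element separators, as in the examples above.
SAMPLE_997 = (
    "ST*997*382823~"
    "AK1*SH*2459823~"
    "AK2*856*001~"
    "AK5*A~"
    "AK9*A*1*1*1~"
    "SE*6*382823~"
)

def parse_segments(edi: str):
    """Split an X12 body into segments, each a list of elements."""
    return [seg.split("*") for seg in edi.split("~") if seg.strip()]

ack_codes = {"A": "Accepted", "E": "Accepted But Errors Were Noted", "R": "Rejected"}

for elements in parse_segments(SAMPLE_997):
    seg_id = elements[0]
    if seg_id == "AK2":                      # which transaction set is being acknowledged
        print(f"Acknowledging transaction set {elements[1]}, control number {elements[2]}")
    elif seg_id == "AK5":                    # per-transaction-set accept/reject (AK501)
        print("Transaction set status:", ack_codes.get(elements[1], elements[1]))
    elif seg_id == "AK9":                    # functional-group accept/reject (AK901) and counts
        print("Functional group status:", ack_codes.get(elements[1], elements[1]),
              f"(included={elements[2]}, received={elements[3]}, accepted={elements[4]})")
```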

    Read the article

  • 11 Types of Developers

    - by Lee Brandt
Jack Dawson
Jack Dawson is the homeless drifter in Titanic. At one point in the movie he says, “I figure life’s a gift, and I don’t intend on wasting it.” He is happy to wander wherever life takes him. He works himself from place to place, making just enough money to make it to his next adventure. The “Jack Dawson” developer clings on to any new technology as the ‘next big thing’, and will find ways to shoe-horn it in to places where it is not a fit. He is very appealing to the other developers because they want to try the newest techniques and tools too. He will only stay until the new technology either bores him or becomes problematic. Jack will also be hard to find once the technology has been implemented, because he will be on to the next shiny thing. However, having a Jack Dawson on your team can be beneficial. Jack can be a great ally when attempting to convince a stodgy, corporate entity to upgrade. Jack usually has an encyclopedic recall of all the new features of the technology upgrade and is more than happy to interject them in any conversation.

Tom Smykowski
Tom is the neurotic employee in Office Space, and is deathly afraid of being fired. He will do only what is necessary to keep the status quo. He believes as long as nothing changes, his job is safe. He will scoff at anything new and be the naysayer during any change initiative. Tom can be useful in off-setting Jack Dawson. Jack will constantly be pushing for change and Tom will constantly be fighting it. When you see that Jack is getting kind of bored with a new technology and Tom has finally stopped wetting himself at the mere mention of it, then it is probably the sweet spot to begin implementing that new technology (providing it is the right tool for the job).

Ray Consella
Ray is the guy who built the Field of Dreams. He took a risk. Sometimes he screwed it up, but he knew he didn’t want to end up regretting not attempting it. He constantly doubted himself, but he knew he had to keep going. Granted, he was doing what the voices in his head were telling him to do, but my point is he was driven to do something that most people considered crazy. These are the innovators. These are the Bill Gates and Steve Jobs of the world. They take risks, they fail, they learn and they get better. Obviously, this kind of person thrives in start-ups and smaller companies, but that is due to their natural aversion to bureaucracy. They want to see their ideas put into motion quickly, and withdrawn quickly if it doesn’t work. Short feedback cycles are essential to Ray. He wants to know if his idea is working or not. He wants to modify or reverse his idea if it is not working or makes things worse. These are the agilistas. May I always be one.

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >