Search Results

Search found 69645 results on 2786 pages for 'real time systems'.


  • Rails timezone differences between Time and DateTime

    - by kjs3
    I have the timezone set: config.time_zone = 'Mountain Time (US & Canada)'. Creating a Christmas event from the console:

        c = Event.new(:title => "Christmas")

    Using Time:

        c.start = Time.new(2012, 12, 25)
        #=> 2012-12-25 00:00:00 -0700   (has correct offset)
        c.end = Time.new(2012, 12, 25).end_of_day
        #=> 2012-12-25 23:59:59 -0700   (same deal)

    Using DateTime:

        c.start = DateTime.new(2012, 12, 25)
        #=> Tue, 25 Dec 2012 00:00:00 +0000   (no offset)
        c.end = DateTime.new(2012, 12, 25).end_of_day
        #=> Tue, 25 Dec 2012 23:59:59 +0000   (same)

    I've carelessly been using DateTime on the assumption that input would be interpreted in config.time_zone, but there's no conversion when this gets saved to the db; it's stored just the same as the return values above (formatted for the db). Using Time is really no big deal, but do I need to manually offset any time I'm using DateTime and want it to be in the correct zone?
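    For what it's worth, a bare DateTime knows nothing about config.time_zone (it defaults to UTC), while ActiveSupport's Time.zone is the Rails-aware entry point. A minimal sketch, assuming a Rails console with the config above and c being the Event from the question:

        # Time.zone reflects config.time_zone, so values built through it
        # carry the -07:00 Mountain offset:
        c.start = Time.zone.local(2012, 12, 25)
        c.end   = Time.zone.local(2012, 12, 25).end_of_day

        # in_time_zone converts the instant, not the wall-clock time: a UTC
        # midnight DateTime becomes 5:00 pm the previous day in Mountain time.
        DateTime.new(2012, 12, 25).in_time_zone

        # To reinterpret a wall-clock time as Mountain time, build it through
        # Time.zone instead of DateTime.new:
        Time.zone.parse('2012-12-25 00:00:00')

    So rather than manually offsetting, the usual approach is to construct and parse through Time.zone and let ActiveSupport::TimeWithZone handle the conversion on save.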

    Read the article

  • Making the user change the time in Android

    - by Casebash
    Android doesn't appear to provide a way for a user application to change the system time. What I would like to do instead is to get the user to change the time. It is easy to open up the Date & Time settings:

        startActivity(new Intent(android.provider.Settings.ACTION_DATE_SETTINGS));

    What I would like to know is: Is it possible to link directly to the set time option? Is it possible to check that the user set the time correctly? I am aware of the TIME_CHANGED broadcast message, but I can't find any documentation on it.
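    As far as I know there is no public intent that deep-links to the set-time field itself, but detecting the change is possible: the broadcast behind TIME_CHANGED is Intent.ACTION_TIME_CHANGED (action string "android.intent.action.TIME_SET"). A hedged sketch, assuming an Activity (the class and method names here are only illustrative):

        import android.app.Activity;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.content.IntentFilter;
        import android.provider.Settings;

        public class SetTimeActivity extends Activity {

            // Fires whenever the system clock is changed.
            private final BroadcastReceiver timeChanged = new BroadcastReceiver() {
                @Override
                public void onReceive(Context context, Intent intent) {
                    // The user (or something else) changed the clock. Whether it
                    // is now "correct" has to be decided by the app itself, e.g.
                    // by comparing System.currentTimeMillis() to a trusted source.
                }
            };

            void askUserToSetTime() {
                // Listen for the change, then send the user to the settings screen.
                registerReceiver(timeChanged, new IntentFilter(Intent.ACTION_TIME_CHANGED));
                startActivity(new Intent(Settings.ACTION_DATE_SETTINGS));
            }

            @Override
            protected void onDestroy() {
                unregisterReceiver(timeChanged); // only valid if registered above
                super.onDestroy();
            }
        }

    Note that the broadcast only says the clock changed, not that it is now right; verifying correctness still needs an app-side comparison against a reference time.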

    Read the article

  • Have set Expiration time: Still getting "Query string present but no explicit expiration time"

    - by oligofren
    I have one local Apache instance running with mod_cache (+ disk & mem) enabled, and it seems to cache content from my appserver fine. My app server sets Expires and Last-Modified headers. Yet, when deploying on a production server with the same modules enabled, I am getting the following error in my logs:

        blablabla not cached. Reason: Query string present but no explicit expiration time

    Any clues on why Apache is not caching content? The only difference is the Apache version; locally I am running 2.2. This is from my config:

        CacheRoot "/var/cache/apache2/"
        CacheEnable disk /

    This is example output:

        < HTTP/1.1 200 OK
        < Date: Mon, 19 Nov 2012 16:09:13 GMT
        < Server: Sun GlassFish Enterprise Server v2.1.1
        < X-Powered-By: Servlet/2.5
        < Expires: Tue Nov 20 05:00:00 CET 2012
        < Last-Modified: Mon Nov 19 17:09:13 CET 2012
        < Cache-Control: no-transform
        < Content-Type: application/x-javascript
        < Transfer-Encoding: chunked
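    One thing that stands out in that output is the Expires value: HTTP requires RFC 1123 dates ("Tue, 20 Nov 2012 04:00:00 GMT"), and "Tue Nov 20 05:00:00 CET 2012" is not one, so mod_cache may well treat the response as having no valid expiration; with a query string present and no usable Cache-Control max-age either, it then refuses to cache. Besides fixing the app server's date format, a hedged workaround is to let Apache stamp its own freshness lifetime via mod_expires:

        # Sketch only; directives are standard mod_cache/mod_expires ones,
        # the one-hour lifetime is just an example value.
        CacheRoot   "/var/cache/apache2/"
        CacheEnable disk /

        # Give responses a correctly formatted expiration so the
        # query-string check in mod_cache has something explicit to use:
        ExpiresActive On
        ExpiresByType application/x-javascript "access plus 1 hour"

    Comparing the exact mod_cache versions on the two servers would confirm this, since the query-string handling has changed between releases.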

    Read the article

  • Issues with Nginx + Passenger production setup - loading time/request time delay

    - by Dani Cela
    Having a bit of an issue relating to request time. I have Nginx as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server running on its own VM, separate from my nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. The second I flood the web server with requests, I choke out the web server and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and the usage is not that high. Each server has more than enough memory; even CPU usage on the Rails server isn't more than 85% - albeit that's high, it's not maxing it out. Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean. EDIT: to see exactly what I mean relating to the database query, see http://207.245.4.215/products
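    Hard to say without logs, but these symptoms (acceptable single requests, collapse under concurrency without CPU/memory saturation) often point at the application-server pool rather than the proxy: with a small Passenger pool, concurrent slow database-bound requests queue behind one another. A hedged sketch of the relevant knobs in the nginx/Passenger config (directive names are Passenger's; the path and values are only examples to adapt):

        http {
            passenger_root /usr/local/lib/ruby/gems/passenger-x.y.z;  # placeholder path
            passenger_max_pool_size 6;   # how many app processes may run in parallel
            passenger_min_instances 2;   # keep processes warm instead of spawning on demand

            server {
                listen 80;
                server_name example.com;              # placeholder
                root /var/www/app/public;             # placeholder
                passenger_enabled on;
            }
        }

    If raising the pool size helps, the next step is usually to attack the 3-4 second query itself (indexes, eager loading) so each process is released sooner.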

    Read the article

  • What are the basic differences between operating systems working as servers? [closed]

    - by poz2k4444
    I'd like to know what the differences are between different operating systems working as servers, and which operating system is better for a specific task - Linux, Unix, Windows, Solaris, Novell, etcetera. It's not about a specific problem; it's just to know. For example, does Windows Server work better with SQL Server, or Ruby with Mac OS? It's just that I think some of us don't know what to choose when we have to use some specific technology.

    Read the article

  • HTML5 time tag not recognized by IE8 when cloning

    - by matsientst
    I have been having trouble getting IE to recognize the new time tag in this context. This all works great in FF. Here is the code:

        var origComment = $('.articleComment:first div');
        if (origComment.length > 0) {
            var commentHtml = origComment.clone(true);
            commentHtml.find('time').text('today');
            var html = '<article class="' + ((side == 'LEFT') ? '' : 'that') + '">' + commentHtml.html() + '</article>';
            $(html).insertAfter('.articleComment:last');
        }

    The HTML looks something like this:

        <article class="articleComment that">
            <div id="156" class="parent">
                <div class="byline">
                    <p>Posted <time pubdate="pubdate" datetime="2010-05-07T09:11:08">today</time> by<br/>
                        <a class="username" href="/u/matt">matt</a>
                    </p>
                    <p class="report"><a href="#">Report?</a></p>
                </div>
                <div class="comment">left</div>
            </div>
        </article>

    IE can find the time tag, but it returns a collection of 2 elements - I assume the beginning and ending tags. However, I cannot access it to modify it. I have tried val(), html() and text(). I also can't drop down to the actual HTMLElement: I can't get(0).innerHTML, but if I .get(0).tagName it actually is the time tag I've got. Any ideas? I hope this makes sense.
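    The two-element collection is the telltale sign: IE8's parser doesn't know the HTML5 time element, so it creates two empty placeholder nodes (the "open" and "close" tags) and the text ends up as a sibling, which is why text()/html() find nothing to change. The usual workaround, in the spirit of html5shiv, is to register the unknown elements before the document is parsed; a hedged sketch:

        <!-- In the <head>, before any <article>/<time> markup is parsed: -->
        <!--[if lt IE 9]>
        <script>
          // Teach IE8's parser about the HTML5 elements this page uses.
          // (The html5shiv script does this, and more, for the full element list.)
          document.createElement('article');
          document.createElement('time');
        </script>
        <![endif]-->

    After that, IE8 parses <time> as a single normal element, and the jQuery clone/find/text calls above behave as they do in Firefox.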

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green - it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in.

    To make this clear (at least I hope it becomes clear...) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself have contributed to it (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious...

    So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into the details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with three parts. First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
    At the C# micro-level, the best way that I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language; for C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline.

    So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article...

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here...

    The requirement

    As already said above, I start by writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

        namespace Calculator
        {
            /// <summary>
            /// Defines a very simple calculator component for demo purposes.
            /// </summary>
            public interface ICalculator
            {
                /// <summary>
                /// Gets the result of the last successful operation.
                /// </summary>
                /// <value>The last result.</value>
                /// <remarks>
                /// Will be <see langword="null" /> before the first successful operation.
                /// </remarks>
                double? LastResult { get; }

            } // interface ICalculator

        } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here. First, starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. Second, in my understanding the interface definition as a whole is more of a readable requirement document and technical documentation than anything else, so this is at least as much about documentation as about coding; the documentation must completely describe the behavior of the documented element. Third, I normally use an IoC container or some sort of self-written provider-like model in my architecture.
    In either case, I need my components defined via service interfaces anyway. I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The 'Red' (pt. 1)

    First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

        namespace Calculator.Test
        {
            [TestFixture]
            public class CalculatorTest
            {
                private readonly ServiceContainer container = new ServiceContainer();

                [Test]
                public void CalculatorLastResultIsInitiallyNull()
                {
                    ICalculator calculator = container.GetService<ICalculator>();

                    Assert.IsNull(calculator.LastResult);
                }

            } // class CalculatorTest

        } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states.

    Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be...) A compiler error tells me: your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin's output tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing that's left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                public double? LastResult
                {
                    get
                    {
                        throw new NotImplementedException();
                    }
                }
            }
        }

    Back in the test fixture, we have to put our IoC container to work:

        [TestFixture]
        public class CalculatorTest
        {
            #region Fields

            private readonly ServiceContainer container = new ServiceContainer();

            #endregion // Fields

            #region Setup/TearDown

            [FixtureSetUp]
            public void FixtureSetUp()
            {
                container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
            }

            ...

    Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more...

    The 'Red' (pt. 2)

    Now the execution of the above test gives a different result: this time, the test outcome tells me that the method under test is called.
    And this is the point where Derick and I seem to have somewhat different views on the subject. Of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part...

    The 'Green'

    Making the test green is quite trivial. Just make LastResult an automatic property:

        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            public double? LastResult { get; private set; }
        }

    One more round...

    Now on to something slightly more demanding (cough...). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

        } // interface ICalculator

    A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking...)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

        } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

        [Test]
        [Row(-0.5, 2)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

    As you can see, I'm using a data-driven unit test method here, mainly for two reasons. First, because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose; I only have to add another Row attribute to the existing one. Second, from the test report, you can see that the argument values are explicitly printed out.
    This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development: at the moment the test result shows we're red again, because there is not yet an implementation... Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

        // in CalculatorTest:

        [Test]
        [Row(-0.5, 2)]
        [Row(295, -123)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

        // in Calculator:

        public double Add(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }
            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
            throw new NotImplementedException();
        }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is covered here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

        [Test]
        [Row(1, 1, 2)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    Again, I'm regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing. First, in the course of refining a method, it's very likely that additional test cases will come up. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software) that results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test. Second, remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirement has to be proven.)
    So your test method might look something like this in the end:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 999999999, 999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, double.MaxValue)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    And this will produce a clear HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review...

    The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new...

    And finally: Refactor (for the right reasons)

    To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

        // CalculatorTest.cs:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtract(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }

        ...

        // ICalculator.cs:

        /// <summary>
        /// Subtracts the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the subtraction.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Subtract(double operand1, double operand2);

        ...

        // Calculator.cs:

        public double Subtract(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            return (this.LastResult = operand1 - operand2).Value;
        }

    Obviously, the argument validation that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring.
    One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                #region ICalculator

                public double? LastResult { get; private set; }

                public double Add(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 + operand2).Value;
                }

                public double Subtract(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 - operand2).Value;
                }

                #endregion // ICalculator

                #region Implementation (Helper)

                private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
                {
                    if (operand1 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand1");
                    }

                    if (operand2 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand2");
                    }
                }

                #endregion // Implementation (Helper)

            } // class Calculator

        } // namespace Calculator

    But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

        STOP! Hold on a second... before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know... I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though... if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?

    Derick immediately answers his own question:

        So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc., then you should refactor it.

    I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have good overall system quality (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
    So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure...

    Conclusions

    The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

    - Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first...).
    - It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction...) The test code serves 'only' to make the production code work. But it's the number of delivered features that solely counts at the end of the day - no matter how much test code you wrote or how good it is.

    With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way...). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. The entire software development profession is very young, historically seen - only at the very beginning - and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary...)

    Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days...). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - to 'save' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have quite often seen and heard of that...). Production of high-quality products needs the use of high-quality tools. This is a platitude that every craftsman knows...

    The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development...

    But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
    if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have...

    Some last words

    First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion... I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be very "good enough" in my practical experience. But of course, I'm open to suggestions here...

    My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • ffmpeg - How to determine if -movflags faststart is enabled? PHP

    - by IIIOXIII
    While I am able to encode an mp4 file which I can play on my local Windows machine, I am having trouble encoding files to mp4 which are readable when streamed by Safari, etc. After a bit of reading, I believe my issue is that I must move the metadata from the end of the file to the beginning in order for the converted mp4 files to be streamable. To that end, I am trying to find out if the build of ffmpeg that I am currently using is able to use the -movflags faststart option through PHP - as my current outputted mp4 files are not working when streamed online. This is the way I am now echoing the -help, -formats, -codecs, but I am not seeing anything about -movflags faststart in any of the lists:

        exec($ffmpegPath." -help", $codecArr);
        for($ii=0;$ii<count($codecArr);$ii++){
            echo $codecArr[$ii].'</br>';
        }

    Is there a similar method of determining if -movflags faststart is available to my ffmpeg build? Any other way? Should it be listed with any of the previously suggested commands (-help/-formats)? Can someone who knows it is enabled in their version of ffmpeg check to see if it is listed under -help or -formats, etc.? TIA.

    EDIT: COMPLETE CONSOLE OUTPUT FOR BOTH THE CONVERSION COMMAND AND -MOVFLAGS COMMAND BELOW:

    COMMAND: ffmpeg_new -i C:\vidtests\Wildlife.wmv -s 640x480 C:\vidtests\Wildlife.mp4

    OUTPUT: ffmpeg version N-54207-ge59fb3f Copyright (c) 2000-2013 the FFmpeg developers built on Jun 25 2013 21:55:00 with gcc 4.7.3 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp e --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --ena ble-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-l ibopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libsp eex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo- amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs -- enable-libxvid --enable-zlib libavutil 52. 37.101 / 52. 37.101 libavcodec 55. 17.100 / 55. 17.100 libavformat 55. 10.100 / 55. 10.100 libavdevice 55. 2.100 / 55. 2.100 libavfilter 3. 77.101 / 3. 77.101 libswscale 2. 3.100 / 2. 3.100 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 3.100 / 52. 3.100 [asf @ 00000000002ed760] Stream #0: not enough frames to estimate rate; consider increasing probesize Guessed Channel Layout for Input Stream #0.0 : stereo Input #0, asf, from 'C:\vidtests\Wildlife.wmv' : Metadata: SfOriginalFPS : 299700 WMFSDKVersion : 11.0.6001.7000 WMFSDKNeeded : 0.0.0.0000 comment : Footage: Small World Productions, Inc; Tourism New Zealand | Producer: Gary F.
Spradling | Music: Steve Ball title : Wildlife in HD copyright : -¬ 2008 Microsoft Corporation IsVBR : 0 DeviceConformanceTemplate: AP@L3 Duration: 00:00:30.09, start: 0.000000, bitrate: 6977 kb/s Stream #0:0(eng): Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, stereo, fltp , 192 kb/s Stream #0:1(eng): Video: vc1 (Advanced) (WVC1 / 0x31435657), yuv420p, 1280x7 20, 5942 kb/s, 29.97 tbr, 1k tbn, 1k tbc [libx264 @ 00000000002e6980] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 [libx264 @ 00000000002e6980] profile High, level 3.0 [libx264 @ 00000000002e6980] 264 - core 133 r2334 a3ac64b - H.264/MPEG-4 AVC cod ec - Copyleft 2003-2013 - http://www.videolan.org/x264.html - options: cabac=1 r ef=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed _ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pski p=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 deci mate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_ adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=2 5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.6 0 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'C:\vidtests\Wildlife.mp4': Metadata: SfOriginalFPS : 299700 WMFSDKVersion : 11.0.6001.7000 WMFSDKNeeded : 0.0.0.0000 comment : Footage: Small World Productions, Inc; Tourism New Zealand | Producer: Gary F. Spradling | Music: Steve Ball title : Wildlife in HD copyright : -¬ 2008 Microsoft Corporation IsVBR : 0 DeviceConformanceTemplate: AP@L3 encoder : Lavf55.10.100 Stream #0:0(eng): Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 6 40x480, q=-1--1, 30k tbn, 29.97 tbc Stream #0:1(eng): Audio: aac (libvo_aacenc) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s Stream mapping: Stream #0:1 -> #0:0 (vc1 -> libx264) Stream #0:0 -> #0:1 (wmav2 -> libvo_aacenc) Press [q] to stop, [?] for help frame= 53 fps= 49 q=29.0 size= 0kB time=00:00:00.13 bitrate= 2.9kbits/ frame= 63 fps= 40 q=29.0 size= 0kB time=00:00:00.46 bitrate= 0.8kbits/ frame= 74 fps= 35 q=29.0 size= 0kB time=00:00:00.83 bitrate= 0.5kbits/ frame= 85 fps= 32 q=29.0 size= 0kB time=00:00:01.20 bitrate= 0.3kbits/ frame= 95 fps= 30 q=29.0 size= 0kB time=00:00:01.53 bitrate= 0.3kbits/ frame= 107 fps= 28 q=29.0 size= 0kB time=00:00:01.93 bitrate= 0.2kbits/ Queue input is backward in time [mp4 @ 00000000003ef800] Non-monotonous DTS in output stream 0:1; previous: 7616 , current: 7063; changing to 7617. This may result in incorrect timestamps in th e output file. 
frame= 118 fps= 28 q=29.0 size= 113kB time=00:00:02.30 bitrate= 402.6kbits/ frame= 129 fps= 26 q=29.0 size= 219kB time=00:00:02.66 bitrate= 670.7kbits/ frame= 141 fps= 26 q=29.0 size= 264kB time=00:00:03.06 bitrate= 704.2kbits/ frame= 152 fps= 25 q=29.0 size= 328kB time=00:00:03.43 bitrate= 782.2kbits/ frame= 163 fps= 25 q=29.0 size= 431kB time=00:00:03.80 bitrate= 928.1kbits/ frame= 174 fps= 24 q=29.0 size= 568kB time=00:00:04.17 bitrate=1116.3kbits/ frame= 190 fps= 25 q=29.0 size= 781kB time=00:00:04.70 bitrate=1359.9kbits/ frame= 204 fps= 25 q=29.0 size= 1006kB time=00:00:05.17 bitrate=1593.1kbits/ frame= 218 fps= 25 q=29.0 size= 1058kB time=00:00:05.63 bitrate=1536.8kbits/ frame= 229 fps= 25 q=29.0 size= 1093kB time=00:00:06.00 bitrate=1490.9kbits/ frame= 239 fps= 24 q=29.0 size= 1118kB time=00:00:06.33 bitrate=1444.4kbits/ frame= 251 fps= 24 q=29.0 size= 1150kB time=00:00:06.74 bitrate=1397.9kbits/ frame= 265 fps= 24 q=29.0 size= 1234kB time=00:00:07.20 bitrate=1402.3kbits/ frame= 278 fps= 25 q=29.0 size= 1332kB time=00:00:07.64 bitrate=1428.3kbits/ frame= 294 fps= 25 q=29.0 size= 1403kB time=00:00:08.17 bitrate=1405.7kbits/ frame= 308 fps= 25 q=29.0 size= 1547kB time=00:00:08.64 bitrate=1466.4kbits/ frame= 323 fps= 25 q=29.0 size= 1595kB time=00:00:09.14 bitrate=1429.5kbits/ frame= 337 fps= 25 q=29.0 size= 1702kB time=00:00:09.60 bitrate=1450.7kbits/ frame= 351 fps= 25 q=29.0 size= 1755kB time=00:00:10.07 bitrate=1427.1kbits/ frame= 365 fps= 25 q=29.0 size= 1820kB time=00:00:10.54 bitrate=1414.1kbits/ frame= 381 fps= 25 q=29.0 size= 1852kB time=00:00:11.07 bitrate=1369.6kbits/ frame= 396 fps= 26 q=29.0 size= 1893kB time=00:00:11.57 bitrate=1339.5kbits/ frame= 409 fps= 26 q=29.0 size= 1923kB time=00:00:12.01 bitrate=1311.8kbits/ frame= 421 fps= 25 q=29.0 size= 1967kB time=00:00:12.41 bitrate=1298.3kbits/ frame= 434 fps= 25 q=29.0 size= 1998kB time=00:00:12.84 bitrate=1274.0kbits/ frame= 445 fps= 25 q=29.0 size= 2018kB time=00:00:13.21 bitrate=1251.3kbits/ frame= 458 fps= 25 q=29.0 size= 2048kB time=00:00:13.64 bitrate=1229.5kbits/ frame= 471 fps= 25 q=29.0 size= 2067kB time=00:00:14.08 bitrate=1202.3kbits/ frame= 484 fps= 25 q=29.0 size= 2189kB time=00:00:14.51 bitrate=1235.5kbits/ frame= 497 fps= 25 q=29.0 size= 2260kB time=00:00:14.94 bitrate=1238.3kbits/ frame= 509 fps= 25 q=29.0 size= 2311kB time=00:00:15.34 bitrate=1233.3kbits/ frame= 523 fps= 25 q=29.0 size= 2429kB time=00:00:15.81 bitrate=1258.1kbits/ frame= 535 fps= 25 q=29.0 size= 2541kB time=00:00:16.21 bitrate=1283.5kbits/ frame= 548 fps= 25 q=29.0 size= 2718kB time=00:00:16.64 bitrate=1337.5kbits/ frame= 560 fps= 25 q=29.0 size= 2845kB time=00:00:17.05 bitrate=1367.1kbits/ frame= 571 fps= 25 q=29.0 size= 2965kB time=00:00:17.41 bitrate=1394.6kbits/ frame= 580 fps= 25 q=29.0 size= 3025kB time=00:00:17.71 bitrate=1398.7kbits/ frame= 588 fps= 25 q=29.0 size= 3098kB time=00:00:17.98 bitrate=1411.1kbits/ frame= 597 fps= 25 q=29.0 size= 3183kB time=00:00:18.28 bitrate=1426.1kbits/ frame= 606 fps= 24 q=29.0 size= 3279kB time=00:00:18.58 bitrate=1445.2kbits/ frame= 616 fps= 24 q=29.0 size= 3441kB time=00:00:18.91 bitrate=1489.9kbits/ frame= 626 fps= 24 q=29.0 size= 3650kB time=00:00:19.25 bitrate=1553.0kbits/ frame= 638 fps= 24 q=29.0 size= 3826kB time=00:00:19.65 bitrate=1594.7kbits/ frame= 649 fps= 24 q=29.0 size= 3950kB time=00:00:20.02 bitrate=1616.3kbits/ frame= 660 fps= 24 q=29.0 size= 4067kB time=00:00:20.38 bitrate=1634.1kbits/ frame= 669 fps= 24 q=29.0 size= 4121kB time=00:00:20.68 bitrate=1631.8kbits/ frame= 682 fps= 24 
q=29.0 size= 4274kB time=00:00:21.12 bitrate=1657.9kbits/ frame= 696 fps= 24 q=29.0 size= 4446kB time=00:00:21.58 bitrate=1687.1kbits/ frame= 709 fps= 24 q=29.0 size= 4590kB time=00:00:22.02 bitrate=1707.3kbits/ frame= 719 fps= 24 q=29.0 size= 4772kB time=00:00:22.35 bitrate=1748.5kbits/ frame= 732 fps= 24 q=29.0 size= 4852kB time=00:00:22.78 bitrate=1744.3kbits/ frame= 744 fps= 24 q=29.0 size= 4973kB time=00:00:23.18 bitrate=1756.9kbits/ frame= 756 fps= 24 q=29.0 size= 5099kB time=00:00:23.59 bitrate=1770.8kbits/ frame= 768 fps= 24 q=29.0 size= 5149kB time=00:00:23.99 bitrate=1758.4kbits/ frame= 780 fps= 24 q=29.0 size= 5227kB time=00:00:24.39 bitrate=1755.7kbits/ frame= 797 fps= 24 q=29.0 size= 5377kB time=00:00:24.95 bitrate=1765.0kbits/ frame= 813 fps= 24 q=29.0 size= 5507kB time=00:00:25.49 bitrate=1769.5kbits/ frame= 828 fps= 24 q=29.0 size= 5634kB time=00:00:25.99 bitrate=1775.5kbits/ frame= 843 fps= 24 q=29.0 size= 5701kB time=00:00:26.49 bitrate=1762.9kbits/ frame= 859 fps= 24 q=29.0 size= 5830kB time=00:00:27.02 bitrate=1767.0kbits/ frame= 872 fps= 24 q=29.0 size= 5926kB time=00:00:27.46 bitrate=1767.7kbits/ frame= 888 fps= 24 q=29.0 size= 6014kB time=00:00:27.99 bitrate=1759.7kbits/ frame= 900 fps= 24 q=29.0 size= 6332kB time=00:00:28.39 bitrate=1826.9kbits/ frame= 901 fps= 24 q=-1.0 Lsize= 6717kB time=00:00:30.10 bitrate=1828.0kbits /s video:6211kB audio:472kB subtitle:0 global headers:0kB muxing overhead 0.513217% [libx264 @ 00000000002e6980] frame I:8 Avg QP:21.77 size: 39744 [libx264 @ 00000000002e6980] frame P:433 Avg QP:25.69 size: 11490 [libx264 @ 00000000002e6980] frame B:460 Avg QP:29.25 size: 2319 [libx264 @ 00000000002e6980] consecutive B-frames: 5.4% 78.6% 2.7% 13.3% [libx264 @ 00000000002e6980] mb I I16..4: 21.8% 48.8% 29.5% [libx264 @ 00000000002e6980] mb P I16..4: 0.7% 4.0% 1.3% P16..4: 37.1% 22.2 % 15.5% 0.0% 0.0% skip:19.2% [libx264 @ 00000000002e6980] mb B I16..4: 0.1% 0.5% 0.2% B16..8: 43.5% 7.0 % 2.1% direct: 2.2% skip:44.5% L0:36.4% L1:52.7% BI:10.9% [libx264 @ 00000000002e6980] 8x8 transform intra:62.8% inter:56.2% [libx264 @ 00000000002e6980] coded y,uvDC,uvAC intra: 74.2% 78.8% 44.0% inter: 2 3.6% 14.5% 1.0% [libx264 @ 00000000002e6980] i16 v,h,dc,p: 48% 24% 9% 20% [libx264 @ 00000000002e6980] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 17% 15% 7% 8% 11% 8% 10% 8% [libx264 @ 00000000002e6980] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 17% 15% 7% 10% 11% 8% 7% 7% [libx264 @ 00000000002e6980] i8c dc,h,v,p: 53% 21% 18% 7% [libx264 @ 00000000002e6980] Weighted P-Frames: Y:0.7% UV:0.0% [libx264 @ 00000000002e6980] ref P L0: 62.4% 19.0% 12.0% 6.6% 0.0% [libx264 @ 00000000002e6980] ref B L0: 90.5% 8.9% 0.7% [libx264 @ 00000000002e6980] ref B L1: 97.9% 2.1% [libx264 @ 00000000002e6980] kb/s:1692.37 AND THE –MOVFLAGS COMMAND: C:\XSITE\SITE>ffmpeg_new -i C:\vidtests\Wildlife.mp4 -movflags faststart C:\vidtests\Wildlife_fs.mp4 AND THE –MOVFLAGS OUTPUT ffmpeg version N-54207-ge59fb3f Copyright (c) 2000-2013 the FFmpeg developers built on Jun 25 2013 21:55:00 with gcc 4.7.3 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp e --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --ena ble-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-l ibopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libsp eex --enable-libtheora 
--enable-libtwolame --enable-libvo-aacenc --enable-libvo- amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs -- enable-libxvid --enable-zlib libavutil 52. 37.101 / 52. 37.101 libavcodec 55. 17.100 / 55. 17.100 libavformat 55. 10.100 / 55. 10.100 libavdevice 55. 2.100 / 55. 2.100 libavfilter 3. 77.101 / 3. 77.101 libswscale 2. 3.100 / 2. 3.100 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 3.100 / 52. 3.100 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\vidtests\Wildlife.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 title : Wildlife in HD encoder : Lavf55.10.100 comment : Footage: Small World Productions, Inc; Tourism New Zealand | Producer: Gary F. Spradling | Music: Steve Ball copyright : -¬ 2008 Microsoft Corporation Duration: 00:00:30.13, start: 0.036281, bitrate: 1826 kb/s Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x480, 1692 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: handler_name : VideoHandler Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 12 8 kb/s Metadata: handler_name : SoundHandler [libx264 @ 0000000004360620] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 [libx264 @ 0000000004360620] profile High, level 3.0 [libx264 @ 0000000004360620] 264 - core 133 r2334 a3ac64b - H.264/MPEG-4 AVC cod ec - Copyleft 2003-2013 - http://www.videolan.org/x264.html - options: cabac=1 r ef=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed _ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pski p=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 deci mate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_ adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=2 5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.6 0 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'C:\vidtests\Wildlife_fs.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 title : Wildlife in HD copyright : -¬ 2008 Microsoft Corporation comment : Footage: Small World Productions, Inc; Tourism New Zealand | Producer: Gary F. Spradling | Music: Steve Ball encoder : Lavf55.10.100 Stream #0:0(eng): Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 6 40x480, q=-1--1, 30k tbn, 29.97 tbc Metadata: handler_name : VideoHandler Stream #0:1(eng): Audio: aac (libvo_aacenc) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s Metadata: handler_name : SoundHandler Stream mapping: Stream #0:0 -> #0:0 (h264 -> libx264) Stream #0:1 -> #0:1 (aac -> libvo_aacenc) Press [q] to stop, [?] 
for help frame= 52 fps=0.0 q=29.0 size= 29kB time=00:00:01.76 bitrate= 133.9kbits/ frame= 63 fps= 60 q=29.0 size= 104kB time=00:00:02.14 bitrate= 397.2kbits/ frame= 74 fps= 47 q=29.0 size= 176kB time=00:00:02.51 bitrate= 573.2kbits/ frame= 87 fps= 41 q=29.0 size= 265kB time=00:00:02.93 bitrate= 741.2kbits/ frame= 101 fps= 37 q=29.0 size= 358kB time=00:00:03.39 bitrate= 862.8kbits/ frame= 113 fps= 34 q=29.0 size= 437kB time=00:00:03.79 bitrate= 943.7kbits/ frame= 125 fps= 33 q=29.0 size= 520kB time=00:00:04.20 bitrate=1012.2kbits/ frame= 138 fps= 32 q=29.0 size= 606kB time=00:00:04.64 bitrate=1069.8kbits/ frame= 151 fps= 31 q=29.0 size= 696kB time=00:00:05.06 bitrate=1124.3kbits/ frame= 163 fps= 30 q=29.0 size= 780kB time=00:00:05.47 bitrate=1166.4kbits/ frame= 176 fps= 30 q=29.0 size= 919kB time=00:00:05.90 bitrate=1273.9kbits/ frame= 196 fps= 31 q=29.0 size= 994kB time=00:00:06.57 bitrate=1237.4kbits/ frame= 213 fps= 31 q=29.0 size= 1097kB time=00:00:07.13 bitrate=1258.8kbits/ frame= 225 fps= 30 q=29.0 size= 1204kB time=00:00:07.53 bitrate=1309.8kbits/ frame= 236 fps= 30 q=29.0 size= 1323kB time=00:00:07.91 bitrate=1369.4kbits/ frame= 249 fps= 29 q=29.0 size= 1451kB time=00:00:08.34 bitrate=1424.6kbits/ frame= 263 fps= 29 q=29.0 size= 1574kB time=00:00:08.82 bitrate=1461.3kbits/ frame= 278 fps= 29 q=29.0 size= 1610kB time=00:00:09.30 bitrate=1416.9kbits/ frame= 296 fps= 30 q=29.0 size= 1655kB time=00:00:09.91 bitrate=1368.0kbits/ frame= 313 fps= 30 q=29.0 size= 1697kB time=00:00:10.48 bitrate=1326.4kbits/ frame= 330 fps= 30 q=29.0 size= 1737kB time=00:00:11.05 bitrate=1286.5kbits/ frame= 345 fps= 30 q=29.0 size= 1776kB time=00:00:11.54 bitrate=1260.4kbits/ frame= 361 fps= 30 q=29.0 size= 1813kB time=00:00:12.07 bitrate=1230.3kbits/ frame= 377 fps= 30 q=29.0 size= 1847kB time=00:00:12.59 bitrate=1201.4kbits/ frame= 395 fps= 30 q=29.0 size= 1880kB time=00:00:13.22 bitrate=1165.0kbits/ frame= 410 fps= 30 q=29.0 size= 1993kB time=00:00:13.72 bitrate=1190.2kbits/ frame= 424 fps= 30 q=29.0 size= 2080kB time=00:00:14.18 bitrate=1201.4kbits/ frame= 439 fps= 30 q=29.0 size= 2166kB time=00:00:14.67 bitrate=1209.4kbits/ frame= 455 fps= 30 q=29.0 size= 2262kB time=00:00:15.21 bitrate=1217.5kbits/ frame= 469 fps= 30 q=29.0 size= 2341kB time=00:00:15.68 bitrate=1223.0kbits/ frame= 484 fps= 30 q=29.0 size= 2430kB time=00:00:16.19 bitrate=1229.1kbits/ frame= 500 fps= 30 q=29.0 size= 2523kB time=00:00:16.71 bitrate=1236.3kbits/ frame= 515 fps= 30 q=29.0 size= 2607kB time=00:00:17.21 bitrate=1240.4kbits/ frame= 531 fps= 30 q=29.0 size= 2681kB time=00:00:17.73 bitrate=1238.2kbits/ frame= 546 fps= 30 q=29.0 size= 2758kB time=00:00:18.24 bitrate=1238.2kbits/ frame= 561 fps= 30 q=29.0 size= 2824kB time=00:00:18.75 bitrate=1233.4kbits/ frame= 576 fps= 30 q=29.0 size= 2955kB time=00:00:19.25 bitrate=1256.8kbits/ frame= 586 fps= 29 q=29.0 size= 3061kB time=00:00:19.59 bitrate=1279.6kbits/ frame= 598 fps= 29 q=29.0 size= 3217kB time=00:00:19.99 bitrate=1318.4kbits/ frame= 610 fps= 29 q=29.0 size= 3354kB time=00:00:20.39 bitrate=1347.2kbits/ frame= 622 fps= 29 q=29.0 size= 3483kB time=00:00:20.78 bitrate=1372.6kbits/ frame= 634 fps= 29 q=29.0 size= 3593kB time=00:00:21.19 bitrate=1388.6kbits/ frame= 648 fps= 29 q=29.0 size= 3708kB time=00:00:21.66 bitrate=1402.3kbits/ frame= 661 fps= 29 q=29.0 size= 3811kB time=00:00:22.08 bitrate=1413.5kbits/ frame= 674 fps= 29 q=29.0 size= 3978kB time=00:00:22.53 bitrate=1446.3kbits/ frame= 690 fps= 29 q=29.0 size= 4133kB time=00:00:23.05 bitrate=1468.4kbits/ frame= 706 fps= 29 
q=29.0 size= 4263kB time=00:00:23.58 bitrate=1480.4kbits/ frame= 721 fps= 29 q=29.0 size= 4391kB time=00:00:24.08 bitrate=1493.8kbits/ frame= 735 fps= 29 q=29.0 size= 4524kB time=00:00:24.55 bitrate=1509.4kbits/ frame= 748 fps= 29 q=29.0 size= 4661kB time=00:00:24.98 bitrate=1528.2kbits/ frame= 763 fps= 29 q=29.0 size= 4835kB time=00:00:25.50 bitrate=1553.1kbits/ frame= 778 fps= 29 q=29.0 size= 4993kB time=00:00:25.99 bitrate=1573.6kbits/ frame= 795 fps= 29 q=29.0 size= 5149kB time=00:00:26.56 bitrate=1588.1kbits/ frame= 814 fps= 29 q=29.0 size= 5258kB time=00:00:27.18 bitrate=1584.4kbits/ frame= 833 fps= 29 q=29.0 size= 5368kB time=00:00:27.82 bitrate=1580.2kbits/ frame= 851 fps= 29 q=29.0 size= 5469kB time=00:00:28.43 bitrate=1575.9kbits/ frame= 870 fps= 29 q=29.0 size= 5567kB time=00:00:29.05 bitrate=1569.5kbits/ frame= 889 fps= 29 q=29.0 size= 5688kB time=00:00:29.70 bitrate=1568.4kbits/ Starting second pass: moving header on top of the file frame= 902 fps= 28 q=-1.0 Lsize= 6109kB time=00:00:30.14 bitrate=1659.8kbits /s dup=1 drop=0 video:5602kB audio:472kB subtitle:0 global headers:0kB muxing overhead 0.566600% [libx264 @ 0000000004360620] frame I:8 Avg QP:20.52 size: 39667 [libx264 @ 0000000004360620] frame P:419 Avg QP:25.06 size: 10524 [libx264 @ 0000000004360620] frame B:475 Avg QP:29.03 size: 2123 [libx264 @ 0000000004360620] consecutive B-frames: 3.2% 79.6% 0.3% 16.9% [libx264 @ 0000000004360620] mb I I16..4: 20.7% 52.3% 26.9% [libx264 @ 0000000004360620] mb P I16..4: 0.7% 4.2% 1.1% P16..4: 39.4% 21.4 % 13.8% 0.0% 0.0% skip:19.3% [libx264 @ 0000000004360620] mb B I16..4: 0.1% 0.9% 0.3% B16..8: 41.8% 6.4 % 1.7% direct: 1.7% skip:47.1% L0:36.4% L1:53.3% BI:10.3% [libx264 @ 0000000004360620] 8x8 transform intra:65.7% inter:58.8% [libx264 @ 0000000004360620] coded y,uvDC,uvAC intra: 71.2% 76.6% 35.7% inter: 2 0.7% 13.0% 0.5% [libx264 @ 0000000004360620] i16 v,h,dc,p: 48% 24% 8% 20% [libx264 @ 0000000004360620] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17% 18% 15% 6% 8% 11% 8% 10% 8% [libx264 @ 0000000004360620] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 16% 15% 7% 10% 11% 8% 8% 7% [libx264 @ 0000000004360620] i8c dc,h,v,p: 51% 22% 19% 9% [libx264 @ 0000000004360620] Weighted P-Frames: Y:0.7% UV:0.0% [libx264 @ 0000000004360620] ref P L0: 63.4% 19.7% 11.0% 5.9% 0.0% [libx264 @ 0000000004360620] ref B L0: 90.7% 8.7% 0.7% [libx264 @ 0000000004360620] ref B L1: 98.4% 1.6% [libx264 @ 0000000004360620] kb/s:1524.54
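    Note, for what it's worth, that the second log above already contains the telltale line "Starting second pass: moving header on top of the file" - that is faststart doing its work, so this build does support it. To probe a build programmatically, one hedged approach is to ask the mp4 muxer for its private options and search for the flag ($ffmpegPath is assumed to be defined as in the question; the "-h muxer=mp4" form exists in builds of roughly this vintage, but verify it on yours):

        <?php
        // Hedged sketch: list the mp4 muxer's options and look for the
        // faststart movflag in the output.
        $lines = array();
        exec($ffmpegPath . ' -h muxer=mp4 2>&1', $lines);

        $hasFaststart = false;
        foreach ($lines as $line) {
            if (strpos($line, 'faststart') !== false) {
                $hasFaststart = true;
                break;
            }
        }
        echo $hasFaststart ? 'faststart available' : 'faststart not found';

    Checking the transcoding log for the "moving header" line after a run is an equally valid, if cruder, confirmation.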

    Read the article

  • Where is the iPhone's Date & Time getting its time zone list

    - by johnbdh
    I can get a list of time zones with [NSTimeZone knownTimeZoneNames], but that only gives the time zone IDs, which include one or two cities in each time zone. The Date & Time settings has a great list of cities, and I have seen a few other apps that have the same if not similar lookup lists. Where do these lists come from? I do need to relate a picked city to its time zone, like Date & Time does. Thanks, John
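    The settings app's richer city list comes from a private database, so it isn't directly available to third-party apps, but the public zoneinfo identifiers already end in an exemplar city, which is often enough to build a picker that maps a city back to its zone. A hedged sketch:

        // Derive a city -> time zone lookup from the public zone identifiers
        // ("America/Denver" -> "Denver"). This uses the one exemplar city per
        // zone, not Apple's private, more complete city list.
        NSMutableDictionary *cityToZone = [NSMutableDictionary dictionary];
        for (NSString *name in [NSTimeZone knownTimeZoneNames]) {
            NSString *city = [[name componentsSeparatedByString:@"/"] lastObject];
            city = [city stringByReplacingOccurrencesOfString:@"_" withString:@" "];
            [cityToZone setObject:[NSTimeZone timeZoneWithName:name] forKey:city];
        }

        // Relating a picked city back to its zone is then a dictionary lookup:
        NSTimeZone *picked = [cityToZone objectForKey:@"Denver"];

    Apps that need more cities than this typically ship their own city/zone table rather than reading Apple's.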

    Read the article

  • Subtracting Delphi Time Ranges from a Date Range, Calculating Remaining Time

    - by Anagoge
    I'm looking for an algorithm that will help calculate the length of the working time in a workday. It would take an input date range, allow subtracting partially or completely intersecting time-range slices from that range, and return the number of minutes (or the fraction/multiple of a day) left in the original date range after subtracting out the various non-working time slices. For example:

        Input date range: 1/4/2010 11:21 am - 1/5/2010 3:00 pm

        Subtract out any partially or completely intersecting slices like this:
          Remove all day Sunday
          Non-Sundays: remove 11:00 - 12:00
          Non-Sundays: remove time after 5:00 pm
          Non-Sundays: remove time before 8:00 am
          Non-Sundays: remove time 9:15 - 9:30 am

        Output: # of minutes left in the input date range

    I don't need anything overly general; I could hardcode the rules to simplify the code. If anyone knows of sample code or a library/function somewhere, or has some pseudo-code ideas, I'd love something to start with. I didn't see anything in DateUtils, for example. Even a basic function that calculates the number of minutes of overlap in two date ranges to subtract out would be a good start.
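    As a starting point for that last request: the overlap of two date ranges is simply max(start1, start2) to min(end1, end2), counted only when that interval is positive. A hedged Delphi sketch (the function name is mine; Max/Min come from Math, MinutesBetween from DateUtils):

        uses
          Math, DateUtils;

        // Minutes of overlap between [AStart1, AEnd1) and [AStart2, AEnd2).
        // Returns 0 when the ranges do not intersect.
        function OverlapMinutes(const AStart1, AEnd1,
          AStart2, AEnd2: TDateTime): Int64;
        var
          OverlapStart, OverlapEnd: TDateTime;
        begin
          OverlapStart := Max(AStart1, AStart2);  // TDateTime is a Double, so Math.Max applies
          OverlapEnd := Min(AEnd1, AEnd2);
          if OverlapEnd > OverlapStart then
            Result := MinutesBetween(OverlapStart, OverlapEnd)
          else
            Result := 0;
        end;

    The workday length is then MinutesBetween(RangeStart, RangeEnd) minus the summed overlaps of each non-working slice, which works as long as the slices themselves do not overlap one another (rules like the Sunday/non-Sunday split above would need to be expanded into concrete, disjoint slices per day first).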

    Read the article

  • AngularJS: showing the time portion of a datetime

    - by J. Davidson
    Hi, I have the following input, which displays a datetime: <div ng-repeat="item in items"> <input type="text" ng-model="item.name" /> <input ng-model="item.time" /> </div> The issue I have is that the time is in the following format: "2002-11-28T14:00:00Z". I want to display just the time portion, for which I would have to apply the filter date: 'hh:mm a'. I tried ng-model="labor.start_time | date: 'hh:mm a'", which doesn't work (ng-model needs an assignable expression, so filters can't be used there). Please let me know how I can show only the time portion in the input box. I can't use a span tag, because the user can change the time, so it has to be shown in an input tag. Thanks

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as myself, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote: nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.
    EDIT: Using files that belong to root is just an example; what I need is a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly installed system. The two systems are actually two VMs under a single host, so copying root-owned files between them is not a big security concern for me.
    EDIT 2: If I only want to copy my configured /root environment into the freshly installed system, I can use tar: sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C / But I do need rsync to update from time to time. Any easy way to make that happen?
    EDIT 3: To formally formulate the question: how do I rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions: the root account is locked on both systems, i.e. there are no root passwords; the only way to become root is via sudo (a recommended security practice, see http://help.ubuntu.com/community/RootSudo); I don't want completely passwordless sudo, but I don't want to be typing passwords all the time either; and the normal unprivileged user has entered their ssh passphrase into the ssh agent. Thanks
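    One well-worn way to square this circle, sketched here as an assumption-laden example rather than a definitive recipe: grant passwordless sudo for the rsync binary alone, then point the local rsync at the elevated remote one with --rsync-path. The username and paths below are illustrative.

        # On the remote system, via visudo; scoped to a single binary,
        # so much narrower than fully passwordless sudo:
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # From the local system; ssh still authenticates as "me" through
        # the agent, and only the remote rsync runs as root:
        rsync -av --rsync-path="sudo rsync" me@remote:/root/ /tmp/root-copy/

    Writing to root-owned paths on the receiving end works the same way in reverse, with the usual caveat that anyone allowed to run rsync as root can read or overwrite arbitrary files.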

    Read the article

  • Systems Solutions at COLLABORATE12

    - by ferhat
    Want to connect with fellow Oracle users and learn more about how to maximize your Oracle software environments with Oracle Systems? Pack your bags for Las Vegas: COLLABORATE 12 is right around the corner! The COLLABORATE 12 conference will be held at the Mandalay Bay in Las Vegas, NV, 22-26 April 2012. This is an event designed and delivered by users just like you, with sessions, interactive panel discussions and hands-on learning opportunities packed with first-hand experiences, case studies and practical "how-to" content. This year's event includes a number of educational sessions and demos for users interested in learning from the experts how to use Oracle Optimized Solutions to get the most out of their Oracle technology and application software. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments. Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions. Please come by our Exhibition Booth #1273 to see the demos and meet 1-1 with the experts behind a number of Oracle Optimized Solutions, including those for JD Edwards EnterpriseOne, E-Business Suite, PeopleSoft HCM, Oracle WebCenter, and Oracle Database.
    Exhibitor Showcase, Booth #1273:
    Monday, April 23: 6:00 pm - 8:00 pm Welcome Reception in the Exhibitor Showcase
    Tuesday, April 24: 10:15 am - 4:00 pm Exhibitor Showcase Open; 1:00 pm - 2:00 pm Dedicated Exhibitor Showcase Time; 5:30 pm - 7:00 pm Exhibitor Showcase Happy Hour
    Wednesday, April 25: 10:30 am - 3:00 pm Exhibitor Showcase Open; 2:15 pm - 3:00 pm Afternoon Break in Exhibitor Showcase
    There are also a number of deep-dive educational sessions covering deployment best practices using Oracle's engineered systems and best-in-class hardware, operating system and virtualization technologies.
    Education Sessions:
    Monday, April 23, 9:45 am - 10:45 am: Architecting and Implementing Backup and Recovery Solutions (Surf E)
    Tuesday, April 24, 9:30 am - 10:30 am: Oracle on Oracle VM - Expert Panel (Mandalay Bay L)
    Tuesday, April 24, 2:00 pm - 3:00 pm: Oracle's High Performance Systems for JD Edwards EnterpriseOne (Mandalay Bay GH)
    Tuesday, April 24, 4:30 pm - 5:30 pm: Virtualization Boot Camp: What's New with Oracle VM Server for x86 (Mandalay Bay C)
    Wednesday, April 25, 9:30 am - 10:30 am: Cloud Computing Directions, Part II: Understanding Oracle's Cloud Directions (South Seas E)
    And don't forget the keynotes and software roadmap sessions!
    Keynotes and Roadmap Sessions:
    Sunday, April 22, 3:20 pm - 4:20 pm: Oracle's Cloud Computing Strategy (Breakers B)
    Monday, April 23, 11:00 am - 12:00 pm: JD Edwards - Vision, Promises and Execution: IT'S THE WAY WE ROLL and Why it Matters! (Mandalay Bay A)
    Monday, April 23, 11:00 am - 12:00 pm: PeopleSoft Executive Update and Roadmap (Mandalay Bay J)
    Monday, April 23, 1:15 pm - 2:15 pm: Oracle Database - Engineered for Innovation (Mandalay Bay L)
    Monday, April 23, 2:30 pm - 3:30 pm: Oracle E-Business Suite Applications Strategy and General Manager Update (Mandalay Bay D)
    Tuesday, April 24, 9:15 am - 10:15 am: IT at Oracle: The Art of IT Transformation to Enable Business Growth (Mandalay Bay Ballroom H)

    Read the article

  • Systems of Engagement

    - by Michael Snow
    Engagement Week
    This week we'll be looking at the ever-evolving topic of systems of engagement. The topic continues to generate widespread discussion around how we connect with businesses, employers, governments, and extended social communities across multiple channels spanning web, mobile and face-to-face contact. Earlier in our Social Business Thought Leader Webcast Series, we had AIIM President John Mancini presenting "Moving from Records to Engagement to Insight", discussing the factors that are driving organizations to think more strategically about the intersection of content management, social technologies, and business processes. John spoke about how content management and enterprise IT are being changed by social technologies, how new technologies are being used to drive innovation and transform processes, and what the implications of this transformation are for information professionals. He used two slides to illustrate the evolution from Systems of Record to Systems of Engagement. The AIIM white paper is available for download from the AIIM website.
    Later this week (09/20), we'll have another session in our Social Business Thought Leader Webcast Series featuring R "Ray" Wang (@rwang0), Principal Analyst & CEO of Constellation Research, presenting "Engaging Customers in the Era of Overexposure". More info to come tomorrow on this week's upcoming webcast.
    In the spirit of spreading good karma, one of the first things that came to mind as I was thinking about "Engagement" was the evolution of the marriage proposal. Someone sent me a link a couple of months ago that raises the bar on all proposals. I hope you'll enjoy it!

    Read the article

  • Webcast Q&A: Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the third webcast in our WebCenter in Action webcast series, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter", where customer Sean Mattson from HDS and Rob Vandenberg from Oracle partner Lingotek shared how Oracle WebCenter is powering Hitachi Data Systems' externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.
    Sean Mattson, Hitachi Data Systems
    Q: Did you run into any issues in the deployment of the platform?
    A: There were some challenges; we were one of the first enterprise 'on premise' installations for Lingotek, and our WebCenter platform also has a lot of custom features. There were a lot of iterations and back and forth working with Lingotek at first. We both helped each other, learned a lot, and in the end managed to resolve all issues and roll out a very compelling solution for HDS.
    Q: What has been the biggest benefit your end users have seen?
    A: Being able to manage and govern the content lifecycle globally and centrally, while at the same time enabling the field to update, review and publish incremental content changes without a lot of touchpoints, has helped us streamline and simplify the entire publishing process.
    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome it?
    A: I wouldn't say resistance as much as skepticism that we could actually deploy an automated and self-publishing solution. Even if a solution is great, adoption of a new process can be a challenge, and we are still pursuing our adoption targets. One of the most important aspects is to include lots of training and support materials and offer as much helpdesk-type support as needed to get the field self-sufficient and confident in the capabilities of the system.
    Rob Vandenberg, Lingotek
    Q: Are there any limitations regarding supported languages, such as support for French Canadian and Indian languages?
    A: Lingotek supports all language pairs, including right-to-left languages and double-byte languages such as Chinese, Japanese and Korean.
    Q: Is the Lingotek solution integrated with the new 11g release of WebCenter Sites?
    A: Yes! In fact, Lingotek is the first OVI partner for Oracle WebCenter Sites.
    Q: Can translation memories help to improve the accuracy of machine translation?
    A: One of the greatest long-term strategic benefits of using Lingotek is the accumulation of translation memories, or past human translations. These TMs can be used to "train" statistical machine translation engines to reach higher and higher quality. This virtuous cycle is ongoing and will consistently improve both machine and human translations.
    Q: We have existing translation memories from previous work with our translation service provider. Can they be easily imported into the Lingotek solution for re-use?
    A: Yes, Lingotek is standards compliant. We support TM import in both the TMX and XLIFF formats.
    Q: If we use Lingotek as a service to do our professional translation and also use the Lingotek software solution, do we get the translation memories, giving us a means of translating future additions and changes ourselves?
    A: Yes, all the data is yours, always. Lingotek can provide both the integrated translation software and the professional translation services. All the content and translation memories are yours.
    Q: Can you give us an example of where community translation has proved to be successful?
    A: The key word here is community. If you have a community that cares about you, your content, and the rest of the community, then community translation can work for you. We've seen effective use cases in product user group content, support communities, and other types of user-generated content, like wikis and blogs.
    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action! Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn, and so little time to learn them in. Because our time is limited, we need to be selective about whatever we learn. I believe I know quite a lot about SQL, but I still do not know what is around SQL. I have recently started to learn about NewSQL. If you wonder what NewSQL is, I encourage all of you to read my blog post about it: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL along with SQL features and transactions. As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB is the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am especially intrigued by ClustrixDB is their recent announcement on Oct 31. ClustrixDB was previously available only as an appliance, but with their software release on Oct 31, everyone can use it. It is now available forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted about ClustrixDB.
    ClustrixDB allows users to:
    Scale by simply adding nodes to the cluster with a single command
    Run billions of transactions a day
    Run fast real-time analytics
    Achieve high availability with recovery from node failure
    Let the database manage itself
    Easily migrate from MySQL, as it is nearly plug-and-play compatible; use MySQL drivers, tools and replication
    While going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Since Clustrix sounds like a very promising solution, I decided to dig a bit deeper to understand who their current customers are, as they have been in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them.
    Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through its real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    NoMoreRack – One of the two fastest-growing e-commerce companies in the US, used ClustrixDB for high availability and fast growth on the Amazon cloud.
    MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.
    Many enterprises, such as AOL, CSC, Rakuten and Symantec, use ClustrixDB when their applications need scale. I must admit that I am impressed with what I have learned so far, and now it is time to get some hands-on experience with the product. I want to learn this technology so that in the future, when the conversation turns to NewSQL, I know what I am talking about. Read more about why ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss its technical aspects in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • converting a UTC time to a local time zone in Java

    - by aloo
    I know this subject has been beaten to death, but after searching for a few hours for a solution to this problem I had to ask. My problem: do calculations on dates on a server based on the current time zone of a client app (iPhone). The client app tells the server, in seconds, how far its time zone is from GMT. I would like to use this information to do computations on dates on the server. The dates on the server are all stored as UTC time. So I would like to get the HOUR of a UTC Date object after it has been converted to this local time zone. My current attempt: int hours = (int) Math.floor(secondsFromGMT / (60.0 * 60.0)); int mins = (int) Math.floor((secondsFromGMT - (hours * 60.0 * 60.0)) / 60.0); String sign = hours > 0 ? "+" : "-"; Calendar now = Calendar.getInstance(); TimeZone t = TimeZone.getTimeZone("GMT" + sign + hours + ":" + mins); now.setTimeZone(t); now.setTime(someDateTimeObject); int hourOfDay = now.get(Calendar.HOUR_OF_DAY); The variables hours and mins represent the hours and minutes the local time zone is away from GMT. After debugging this code, the variables hours, mins and sign are correct. The problem is that hourOfDay does not return the correct hour: it is returning the hour as of UTC time, not local time. Ideas?
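    A likely culprit in the snippet above: for zones west of GMT, hours is already negative, so "GMT" + sign + hours builds an ID like "GMT--7", and TimeZone.getTimeZone silently falls back to plain GMT for IDs it cannot parse, which would produce exactly the UTC hour observed. A minimal sketch that builds the zone from the raw offset instead, assuming secondsFromGMT is the client's total offset from GMT (note that a fixed raw offset cannot track the client's daylight saving transitions):

        import java.util.Calendar;
        import java.util.Date;
        import java.util.SimpleTimeZone;
        import java.util.TimeZone;

        public class ClientHour {
            // Hour of day of a UTC instant as seen from the client's zone.
            static int hourOfDayForClient(Date utcInstant, int secondsFromGMT) {
                // SimpleTimeZone takes the raw offset in milliseconds, so there
                // is no "GMT+h:m" string formatting and no sign bug to hit.
                TimeZone clientZone = new SimpleTimeZone(secondsFromGMT * 1000, "client");
                Calendar cal = Calendar.getInstance(clientZone);
                cal.setTime(utcInstant);
                return cal.get(Calendar.HOUR_OF_DAY);
            }
        }

    If the client can send a real zone ID such as America/Denver instead of a second count, TimeZone.getTimeZone(id) with that ID would also get daylight saving right.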

    Read the article

  • how to cause linux system datetime to run faster than real world datetime?

    - by JamesThomasMoon1979
    Background
    I want to monitor a running Linux system over several days. It's a custom Gentoo build with much custom software on board. This software has ongoing maintenance timers, cron scripts and other clock-driven events. I need to verify these scheduled events are working.
    Problem
    Waiting for the system to step through its daily and weekly activity is a long wait, and modifying every clock-based timer on the system would be time-consuming. Yet I often want to test a system's end-to-end scheduled activities without waiting a week.
    Potential Solution
    Have the Linux system under test appear to run through its daily cycle of activity within just a few hours.
    My Question for Serverfault
    Is there a way to cause the system's time to run faster than real-world time? My first thought is manipulating the ntp daemon to repeatedly and smoothly increment the clock. Any other ideas? And yes, I know this may have strange side effects. However, the system has no important or time-critical interactions with systems outside of itself, and this may be a valuable testing technique.
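    One blunt way to approximate this, sketched below strictly as an illustration: a root-run loop that repeatedly steps the system clock forward faster than real time elapses. It assumes a disposable VM, GNU coreutils date (which accepts an @epoch-seconds timestamp with -s), and that ntpd is stopped so it does not fight the steps. If only certain processes need fast time, libfaketime's speed multiplier (e.g. FAKETIME="+0 x10") avoids touching the system clock at all.

        import java.time.Instant;

        // Sketch: make wall-clock time on a test VM run FACTOR times faster
        // by periodically stepping the clock with GNU `date -s @epoch`.
        public class ClockAccelerator {
            static final int FACTOR = 12;     // 1 real hour ~ 12 simulated hours
            static final long TICK_MS = 5000; // step the clock every 5 real seconds

            public static void main(String[] args) throws Exception {
                long simulatedEpoch = Instant.now().getEpochSecond();
                while (true) {
                    Thread.sleep(TICK_MS);
                    // Each real tick advances simulated time FACTOR times as far.
                    simulatedEpoch += (TICK_MS / 1000) * FACTOR;
                    new ProcessBuilder("date", "-s", "@" + simulatedEpoch)
                            .inheritIO().start().waitFor();
                }
            }
        }

    With the values above, each step is one simulated minute, which cron should treat as ordinary elapsed time, so daily jobs would come due roughly every two real hours; services that dislike jumpy time may still misbehave.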

    Read the article

  • Free Time Tracking Web Application

    - by cwius
    Are you aware of any good time-tracking web applications (hosted or downloadable) that are free or open source? I am looking for something to track the time I spend on projects and bug fixes. I am already looking at the ASP.NET 2.0 Time Tracker Starter Kit, but wanted to see if there is anything else out there, or whether I should write my own application.

    Read the article

  • How to establish real-time communication between a shopping cart running MySQL and an internal system running PostgreSQL [closed]

    - by Andrew
    I am thinking about ways of establishing some sort of real-time connection between a MySQL-powered shopping cart and an internal system that runs on PostgreSQL. Could you give me some insight on this topic? For example, I could write some sort of CSV export application, enable remote MySQL access over the internet, and then import the CSV to MySQL directly from a PC. Or upload the CSV and run a cron job on the server. But this kind of import-export causes delays, so I would like to link the databases (or something of the sort) instead. I have never done this before and would like to hear some opinions about it. Another thought might be to implement triggers that would initiate the update process via CSV, but again, I would like to avoid CSV. Do you have any good advice? Maybe some specific examples?
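    As a concrete starting point, here is a minimal polling-bridge sketch. Every name in it (hosts, credentials, an orders table carrying an updated_at column on both sides) is a made-up example; it assumes the MySQL and PostgreSQL JDBC drivers are on the classpath and PostgreSQL 9.5+ for ON CONFLICT. It is shown pulling cart data into the internal system; swap the roles for the opposite direction. Triggers feeding a message queue, or a change-data-capture tool reading MySQL's binlog, are heavier-duty versions of the same idea.

        import java.sql.*;

        // Poll MySQL for rows changed since the last pass, upsert into PostgreSQL.
        public class CartBridge {
            public static void main(String[] args) throws Exception {
                try (Connection my = DriverManager.getConnection(
                         "jdbc:mysql://cart-host/shop", "sync", "secret");
                     Connection pg = DriverManager.getConnection(
                         "jdbc:postgresql://erp-host/internal", "sync", "secret")) {

                    Timestamp since = Timestamp.valueOf("2000-01-01 00:00:00");
                    PreparedStatement read = my.prepareStatement(
                        "SELECT id, total, updated_at FROM orders WHERE updated_at > ?");
                    PreparedStatement write = pg.prepareStatement(
                        "INSERT INTO orders (id, total, updated_at) VALUES (?, ?, ?) " +
                        "ON CONFLICT (id) DO UPDATE SET total = EXCLUDED.total, " +
                        "updated_at = EXCLUDED.updated_at");

                    while (true) {
                        read.setTimestamp(1, since);
                        try (ResultSet rs = read.executeQuery()) {
                            while (rs.next()) {
                                write.setLong(1, rs.getLong("id"));
                                write.setBigDecimal(2, rs.getBigDecimal("total"));
                                write.setTimestamp(3, rs.getTimestamp("updated_at"));
                                write.executeUpdate();
                                if (rs.getTimestamp("updated_at").after(since)) {
                                    since = rs.getTimestamp("updated_at"); // high-water mark
                                }
                            }
                        }
                        Thread.sleep(10000); // seconds of lag instead of CSV round trips
                    }
                }
            }
        }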

    Read the article

  • Programs minimized for a long time take a long time to "wake up"

    - by bart
    I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so some applications stay minimized to the taskbar for hours or days. The problem is, when I try to restore them from the taskbar, it sometimes takes longer than starting them fresh! Photoshop especially feels really weird for many seconds after finally showing up: it's slow and unresponsive, and sometimes it freezes completely for a minute or two. It's not a hardware problem, as it's been like that on every PC I've had. Would I still notice it after upgrading my HDD to an SSD and adding RAM (my main PC currently has 4 GB)? Could people with powerful PCs/Macs tell me whether this also happens to you? I guess OSes somehow "focus" on active software and move resources away from applications that are running but not being used. Is it possible to set RAM/CPU/HDD priorities for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >