Search Results

Search found 59299 results on 2372 pages for 'time and attendance'.


  • Google Analytics: Why is Avg Time on Site lower than Avg time on Page?

    - by Melanie
    I have the following Custom Report set up in Google Analytics:

        Metrics:    Avg Time on Page, Avg Time on Site
        Dimensions: Page

    So a report looks like this:

        Page                 Avg Time on Page    Avg Time on Site
        /an-article          00:03:14            00:00:11
        /another-article     00:05:11            00:01:07
        /something-written   00:03:00            00:00:31

    Why is it that, for every page, the Avg Time on Site is significantly lower than the Avg Time on Page?

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments, how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the ways the product succeeds, and I see when it fails.

    Something that has really shaped my thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure any new project will work in production, and then carrying the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it produce the same level of work as the shops that don’t. They just make the time to do the testing and validation, and they create a standard that they will follow in production. And what I’ve found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious – but I’ve actually tracked it, and the places that do the testing and follow best practices really do save stress, time, and trouble for that effort.

    We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re already spending the same amount of time that you would spend on the testing – you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • Do you charge a client for email and chat communication as a freelancer? [closed]

    - by skyork
    For a project that is billed by the hour, should a freelancer charge the client for the time he/she spends on email/chat correspondence? For example, the client sends an email to the freelancer outlining the requirements. Should the freelancer charge the client for the time during which he/she reads the email and writes a reply? The same goes for chat conversations held to clarify the requirements. In particular, if the freelancer's English is not very good, so that he/she spends extra time on understanding what the client wants and on explaining him/herself (e.g. copying and pasting into Google Translate), should such time be charged to the client too?

    Read the article

  • Where can I find some software development freelance/contract positions? [closed]

    - by m-y
    I currently have a full-time job as a Microsoft .NET developer, but I'd like to supplement my income by doing some part-time freelance development work. Are there any websites (or other sources) out there that specialize in this? The website (or other source) does not have to be geared specifically toward part-time work or Microsoft .NET development, but if it is, then that would be wonderful. The only thing I could find close to this was www.vworker.com, which is not that great.

    Read the article

  • How do you make a precise countdown timer using clock_gettime? [migrated]

    - by Joshun
    Could somebody please explain how to make a countdown timer using clock_gettime under Linux? I know you can use the clock() function to get CPU time and divide it by CLOCKS_PER_SEC to get actual time, but I'm told the clock() function is not well suited for this. So far I have attempted this (BILLION nanoseconds being one second):

        #include <stdio.h>
        #include <time.h>

        #define BILLION 1000000000

        int main()
        {
            struct timespec rawtime;
            clock_gettime(CLOCK_MONOTONIC_RAW, &rawtime);

            /* fold the timespec into a single nanosecond count:
               the seconds must be scaled before adding the nanoseconds */
            unsigned long long current = rawtime.tv_sec * (unsigned long long) BILLION + rawtime.tv_nsec;
            unsigned long long end     = current + BILLION;

            while (current < end) {
                clock_gettime(CLOCK_MONOTONIC_RAW, &rawtime);
                current = rawtime.tv_sec * (unsigned long long) BILLION + rawtime.tv_nsec;
            }
            return 0;
        }

    I know this wouldn't be very useful on its own, but once I've found out how to time correctly I can use this in my projects. I know that sleep() can be used for this purpose, but I want to code the timer myself so that I can better integrate it into my projects - such as having it return the time left, as opposed to pausing the whole program.
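
    A possible refinement, sketched under the assumption of a Linux system with the POSIX timers API: let the kernel sleep to an absolute deadline instead of busy-waiting. Note that clock_nanosleep() accepts CLOCK_MONOTONIC but not CLOCK_MONOTONIC_RAW.

        #include <stdio.h>
        #include <time.h>

        /* Sleep until 'seconds' from now. Using an absolute deadline means
           that being interrupted by a signal cannot accumulate drift. */
        static void countdown(long seconds)
        {
            struct timespec deadline;
            clock_gettime(CLOCK_MONOTONIC, &deadline);
            deadline.tv_sec += seconds;

            /* TIMER_ABSTIME: wait until the deadline itself, not for a
               duration; retry with the same deadline if a signal arrives */
            while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL) == EINTR)
                ;
        }

        int main(void)
        {
            countdown(1);
            puts("one second elapsed");
            return 0;
        }

    Because the deadline is an absolute timespec, the time remaining can be reported at any point by subtracting the current clock_gettime() reading from it, which fits the stated goal of a queryable countdown rather than a program-wide pause.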

    Read the article

  • How can I improve real-time behavior in a multi-threaded app using pthreads and condition variables?

    - by WilliamKF
    I have a multi-threaded application that uses pthreads, with a mutex lock and condition variables. There are two threads: one thread produces data for the second thread, a worker, which tries to process the produced data in a real-time fashion, such that one chunk is processed as close to the elapsing of a fixed time period as possible.

    This works pretty well; however, occasionally, when the producer thread releases the condition upon which the worker is waiting, a delay of up to almost a whole second is seen before the worker thread gets control and executes again. I know this because right before the producer releases the condition upon which the worker is waiting, it does a chunk of processing for the worker if it is time to process another chunk; then, immediately upon receiving the condition, the worker thread also does a chunk of processing if it is time. In this latter case, I am seeing that I am late processing the chunk many times.

    I'd like to eliminate this lost efficiency and do what I can to keep the chunks ticking away as close as possible to the desired frequency. Is there anything I can do to reduce the delay between the producer releasing the condition and the worker detecting that release and resuming processing? For example, would it help for the producer to call something to force itself to be context-switched out?

    Bottom line: the worker has to wait each time it asks the producer to create work for it, so that the producer can muck with the worker's data structures before telling the worker it is ready to run in parallel again. This period of exclusive access by the producer is meant to be short, but during this period I am also checking for real-time work to be done by the producer on behalf of the worker while the producer has exclusive access. Somehow the hand-off back to running in parallel occasionally results in significant delay, which I would like to avoid. Please suggest how this might best be accomplished.
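
    One avenue worth sketching (an illustration under stated assumptions, not a drop-in fix for the code above, which isn't shown): give the worker thread a real-time scheduling policy, so that the moment the condition is signalled it preempts ordinary threads instead of waiting for the scheduler's next pass. On Linux, SCHED_FIFO normally requires root or CAP_SYS_NICE.

        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>
        #include <string.h>

        /* Raise a thread to SCHED_FIFO so it becomes runnable ahead of all
           normal-priority threads as soon as the producer signals it. */
        static void make_realtime(pthread_t thread, int priority)
        {
            struct sched_param param;
            param.sched_priority = priority;   /* 1 (low) .. 99 (high) on Linux */

            int err = pthread_setschedparam(thread, SCHED_FIFO, &param);
            if (err != 0)
                fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
        }

        int main(void)
        {
            make_realtime(pthread_self(), 10);   /* e.g. apply to the worker thread */
            /* ... create producer/worker and run the condvar loop as before ... */
            return 0;
        }

    It may also help to have the producer unlock the mutex before calling pthread_cond_signal(), so the freshly woken worker does not immediately block again on a mutex the producer still holds.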

    Read the article

  • Using the StopWatch class to calculate the execution time of a block of code

    - by vik20000in
    Many times, while doing performance tuning of some class, web page, component, or control, we first measure the current time taken to execute the code. This helps in locating the part of the code that is actually causing the performance issue, and also in measuring the amount of improvement made by our changes. This measurement is important because it helps us understand the problem in the code, and it helps us write better code next time (as we have already learnt what kind of improvement can be made with different code).

    Normally developers create 2 objects of the DateTime class. The exact time is collected before and after the code whose performance needs to be measured. Then the difference between the two objects tells us the time spent in the code being measured. Below is an example of the sample code.

        DateTime dt1, dt2;
        dt1 = DateTime.Now;
        for (int i = 0; i < 1000000; i++)
        {
            string str = "string";
        }
        dt2 = DateTime.Now;
        TimeSpan ts = dt2.Subtract(dt1);
        Console.WriteLine("Time Spent : " + ts.TotalMilliseconds.ToString());

    The above code works great. But the .NET Framework also provides another way to capture the time spent in a block of code without much effort (no creating 2 DateTime objects, a TimeSpan object, etc.): we can use the built-in Stopwatch class to get the exact time spent. Below is an example of the same work done with the help of the Stopwatch class.

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
        {
            string str = "string";
        }
        sw.Stop();
        Console.WriteLine("Time Spent : " + sw.Elapsed.TotalMilliseconds.ToString());

    [Note: the Stopwatch class resides in the System.Diagnostics namespace.] If you use the Stopwatch class, the measurement is more precise, and it takes very little effort.

    Vikram

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not simply to find the most popular posts right now, but to find posts that are popular compared to other posts on the same blog.

    For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets far more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber the views on the other two. Say the average tech blog post gets 500 Facebook likes and the other two average 50 likes per post. Then, when there is a sports blog post with 200 FB likes and a gossip blog post with 300 while today's tech blog posts have 500 likes, I want to highlight the sports and gossip posts (more likes than their blogs' averages, versus the tech blog post with a higher raw count that is merely average for its blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

        after 15 min:   10 fb likes,  4 tweets,  6 g+
        after 30 min:   15 fb likes, 15 tweets, 10 g+
        ...
        after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So, for example, if the average number of FB likes for all blog posts 48 hrs after posting is 50 and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema.

    Schema 1

        DB: Popular Posts
        Table: Post
            post_id (primary key (pk))
            url
            title
        Table: Social Activity
            activity_id (pk)
            url (fk)
            type (i.e. facebook, twitter, g+)
            value
            timestamp

    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording users' weight/height over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but with the social activity broken into 2 tables)

        DB: Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: Social Measurement
            measurement_id (pk)
            post_id (fk)
            timestamp
        Table: Social Stat
            stat_id (pk)
            measurement_id (fk)
            type (i.e. facebook, twitter, g+)
            value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time; i.e. when making a measurement 30 min after a post is published, I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, g+, LinkedIn shares. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3

        DB: Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: fb_likes
            fb_like_id (pk)
            post_id (fk)
            timestamp
            value
        Table: fb_shares
            fb_shares_id (pk)
            post_id (fk)
            timestamp
            value
        Table: tweets
            tweets_id (pk)
            post_id (fk)
            timestamp
            value
        Table: google_plus
            google_plus_id (pk)
            post_id (fk)
            timestamp
            value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this? Does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
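
    For concreteness, here is a rough SQL rendering of Schema 2 (a sketch only: the table and column names are illustrative, and the types are a guess at what the stats require):

        CREATE TABLE post (
            post_id   INTEGER PRIMARY KEY,
            url       TEXT NOT NULL,
            title     TEXT
        );

        CREATE TABLE social_measurement (
            measurement_id  INTEGER PRIMARY KEY,
            post_id         INTEGER NOT NULL REFERENCES post(post_id),
            measured_at     TIMESTAMP NOT NULL     -- e.g. 30 min after publication
        );

        CREATE TABLE social_stat (
            stat_id         INTEGER PRIMARY KEY,
            measurement_id  INTEGER NOT NULL REFERENCES social_measurement(measurement_id),
            stat_type       VARCHAR(20) NOT NULL,  -- 'fb_like', 'fb_share', 'tweet', 'g+', ...
            value           INTEGER NOT NULL
        );

        -- all social stats captured for post 1234, grouped by measurement time
        SELECT m.measured_at, s.stat_type, s.value
        FROM social_measurement m
        JOIN social_stat s ON s.measurement_id = m.measurement_id
        WHERE m.post_id = 1234
        ORDER BY m.measured_at, s.stat_type;

    One query then fetches every stat row for a given measurement, which is exactly the access pattern described above.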

    Read the article

  • Google I/O 2010 - Make your app real-time with PubSubHubbub

    Google I/O 2010 - Make your application real-time with PubSubHubbub
    Social Web 201 - Brett Slatkin

    This session will go over how to add support for the PubSubHubbub protocol to your website. You'll learn how to turn Atom and RSS feeds into real-time streams. We'll go over how to consume real-time data streams and how to make your website reactive to what's happening on the web right now.

    For all I/O 2010 sessions, please go to code.google.com
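
    For a taste of the protocol, a subscription is just one form-encoded POST to a hub. A hedged sketch (the topic, callback, and hub URLs below are placeholders; the hub.* parameter names come from the PubSubHubbub spec):

        curl -d hub.mode=subscribe \
             -d hub.topic=http://example.com/feed.atom \
             -d hub.callback=http://example.com/pubsub/callback \
             http://hub.example.com/

    The hub verifies the callback URL and then POSTs new feed entries to it as they are published, which is what turns a polled feed into a real-time stream.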

    Read the article

  • How to display time in the top panel?

    - by Mörre
    I thought I already had the time up there in the top bar, and it may have been so in previous Ubuntu versions (I don't remember; my Ubuntu laptop is just one of three computers I use). I just noticed - being someone who never wears a watch, has the cellphone turned off 95% of the time, and relies on the computer to tell the time - that there is no time being displayed anywhere, and I had expected it in the top bar on the Unity desktop. I searched around but found no obvious solution. I'm sure someone immediately knows how I can get my time (back?) into the top bar?
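
    One hedged guess, assuming a Unity desktop with the indicator-datetime package installed: the clock indicator may simply be switched off, in which case a single gsettings call should restore it (the schema name below belongs to indicator-datetime; if the package is missing, install it first):

        gsettings set com.canonical.indicator.datetime show-clock true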

    Read the article

  • Why does my MacBook Pro have long ping times over Wi-Fi?

    - by randynov
    I have been having problems connecting with my Wi-Fi. It is weird: the ping times to the router (<30 feet away) seem to surge, often getting over 10 seconds before slowly coming back down. You can see the trend below. I'm on a MacBook Pro and have done the normal stuff (reset the PRAM and SMC, changed wireless channels, etc.). It happens across different routers, so I think it must be my laptop, but I don't know what it could be. The RSSI value hovers around -57, but I've seen the transmit rate flip between 0, 48 and 54. The signal strength is ~60% with 9% noise. Currently, there are 17 other wireless networks in range, but only one in the same channel.

    1 - How can I figure out what's going on?
    2 - How can I correct the situation?

        PING 192.168.1.1 (192.168.1.1): 56 data bytes
        64 bytes from 192.168.1.1: icmp_seq=0 ttl=254 time=781.107 ms
        64 bytes from 192.168.1.1: icmp_seq=1 ttl=254 time=681.551 ms
        64 bytes from 192.168.1.1: icmp_seq=2 ttl=254 time=610.001 ms
        64 bytes from 192.168.1.1: icmp_seq=3 ttl=254 time=544.915 ms
        64 bytes from 192.168.1.1: icmp_seq=4 ttl=254 time=547.622 ms
        64 bytes from 192.168.1.1: icmp_seq=5 ttl=254 time=468.914 ms
        64 bytes from 192.168.1.1: icmp_seq=6 ttl=254 time=237.368 ms
        64 bytes from 192.168.1.1: icmp_seq=7 ttl=254 time=229.902 ms
        64 bytes from 192.168.1.1: icmp_seq=8 ttl=254 time=11754.151 ms
        64 bytes from 192.168.1.1: icmp_seq=9 ttl=254 time=10753.943 ms
        64 bytes from 192.168.1.1: icmp_seq=10 ttl=254 time=9754.428 ms
        64 bytes from 192.168.1.1: icmp_seq=11 ttl=254 time=8754.199 ms
        64 bytes from 192.168.1.1: icmp_seq=12 ttl=254 time=7754.138 ms
        64 bytes from 192.168.1.1: icmp_seq=13 ttl=254 time=6754.159 ms
        64 bytes from 192.168.1.1: icmp_seq=14 ttl=254 time=5753.991 ms
        64 bytes from 192.168.1.1: icmp_seq=15 ttl=254 time=4754.068 ms
        64 bytes from 192.168.1.1: icmp_seq=16 ttl=254 time=3753.930 ms
        64 bytes from 192.168.1.1: icmp_seq=17 ttl=254 time=2753.768 ms
        64 bytes from 192.168.1.1: icmp_seq=18 ttl=254 time=1753.866 ms
        64 bytes from 192.168.1.1: icmp_seq=19 ttl=254 time=753.592 ms
        64 bytes from 192.168.1.1: icmp_seq=20 ttl=254 time=517.315 ms
        64 bytes from 192.168.1.1: icmp_seq=37 ttl=254 time=1.315 ms
        64 bytes from 192.168.1.1: icmp_seq=38 ttl=254 time=1.035 ms
        64 bytes from 192.168.1.1: icmp_seq=39 ttl=254 time=4.597 ms
        64 bytes from 192.168.1.1: icmp_seq=21 ttl=254 time=18010.681 ms
        64 bytes from 192.168.1.1: icmp_seq=22 ttl=254 time=17010.449 ms
        64 bytes from 192.168.1.1: icmp_seq=23 ttl=254 time=16010.430 ms
        64 bytes from 192.168.1.1: icmp_seq=24 ttl=254 time=15010.540 ms
        64 bytes from 192.168.1.1: icmp_seq=25 ttl=254 time=14010.450 ms
        64 bytes from 192.168.1.1: icmp_seq=26 ttl=254 time=13010.175 ms
        64 bytes from 192.168.1.1: icmp_seq=27 ttl=254 time=12010.282 ms
        64 bytes from 192.168.1.1: icmp_seq=28 ttl=254 time=11010.265 ms
        64 bytes from 192.168.1.1: icmp_seq=29 ttl=254 time=10010.285 ms
        64 bytes from 192.168.1.1: icmp_seq=30 ttl=254 time=9010.235 ms
        64 bytes from 192.168.1.1: icmp_seq=31 ttl=254 time=8010.399 ms
        64 bytes from 192.168.1.1: icmp_seq=32 ttl=254 time=7010.144 ms
        64 bytes from 192.168.1.1: icmp_seq=33 ttl=254 time=6010.113 ms
        64 bytes from 192.168.1.1: icmp_seq=34 ttl=254 time=5010.025 ms
        64 bytes from 192.168.1.1: icmp_seq=35 ttl=254 time=4009.966 ms
        64 bytes from 192.168.1.1: icmp_seq=36 ttl=254 time=3009.825 ms
        64 bytes from 192.168.1.1: icmp_seq=40 ttl=254 time=16000.676 ms
        64 bytes from 192.168.1.1: icmp_seq=41 ttl=254 time=15000.477 ms
        64 bytes from 192.168.1.1: icmp_seq=42 ttl=254 time=14000.388 ms
        64 bytes from 192.168.1.1: icmp_seq=43 ttl=254 time=13000.549 ms
        64 bytes from 192.168.1.1: icmp_seq=44 ttl=254 time=12000.469 ms
        64 bytes from 192.168.1.1: icmp_seq=45 ttl=254 time=11000.332 ms
        64 bytes from 192.168.1.1: icmp_seq=46 ttl=254 time=10000.339 ms
        64 bytes from 192.168.1.1: icmp_seq=47 ttl=254 time=9000.338 ms
        64 bytes from 192.168.1.1: icmp_seq=48 ttl=254 time=8000.198 ms
        64 bytes from 192.168.1.1: icmp_seq=49 ttl=254 time=7000.388 ms
        64 bytes from 192.168.1.1: icmp_seq=50 ttl=254 time=6000.217 ms
        64 bytes from 192.168.1.1: icmp_seq=51 ttl=254 time=5000.084 ms
        64 bytes from 192.168.1.1: icmp_seq=52 ttl=254 time=3999.920 ms
        64 bytes from 192.168.1.1: icmp_seq=53 ttl=254 time=3000.010 ms
        64 bytes from 192.168.1.1: icmp_seq=54 ttl=254 time=1999.832 ms
        64 bytes from 192.168.1.1: icmp_seq=55 ttl=254 time=1000.072 ms
        64 bytes from 192.168.1.1: icmp_seq=58 ttl=254 time=1.125 ms
        64 bytes from 192.168.1.1: icmp_seq=59 ttl=254 time=1.070 ms
        64 bytes from 192.168.1.1: icmp_seq=60 ttl=254 time=2.515 ms

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series.

    Right-Time Revolution

    Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar?

    Can technology make a mattress an impulse item? You wake up and your back is hurting, so you roll over, grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn’t appropriate for all the integrations. Think about what a completely real-time retailer would look like: a consumer grabs toothpaste off the shelf, and all systems are immediately notified, so the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost-prohibitive, it’s also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That’s what “right-time retail” means.

    Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems.

    Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day; the dial-up modem could transfer the data. That was soon replaced with trickle-polling, where sales transactions were occasionally sent from stores to corporate throughout the day, often via VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data. Nordstrom has 150 million SKU/store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos executes 1.78 billion calculations each day for replenishment planning (AIP).

    These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400 rpm, with PCs stepping up to 7,200 rpm and servers hitting 15,000 rpm. But the platters can only spin so fast, so to squeeze out more performance we’ve had to rely on things like disk striping. Then solid-state drives (SSDs) were introduced, and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you’d go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston’s Fenway would hold 370,000 fans. The Exa family enables processing more data in less time.

    So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won’t be able to take advantage of the faster networks and storage. Enter what Harvard calls “The Sexiest Job of the 21st Century” – the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad hoc.

    So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I’ll save that for part 3 and instead begin, in part 2, with improved integrations for the assets you already have.

    Read the article

  • TimeZone Issue during DayLight Saving

    - by Viren
    I've just been bitten by daylight saving time. As I understand it, EST starts at 01:00:00 on 3rd November 2013. Whenever I set my clock to 3rd November 2013 00:58:xx (some seconds) and run date, it gives me a valid time zone, i.e. EDT. But even after the time passes 01:00:00, if I query the date library I still see the time zone reported as EDT and not EST; have a look at this screenshot. You can clearly see the time zone shown as EDT even when it should be EST. Does anyone have a clue about this?

    Update: there is one other finding - if I restart my machine I see this.

    More updates: Before Restart / After Restart.
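
    One way to see what the system's time zone database itself says about the transition (a diagnostic sketch; the zone name assumes US Eastern time):

        # print every transition recorded for US Eastern time, then keep 2013
        zdump -v America/New_York | grep 2013

    The pair of lines around November 3 gives the exact UT instant at which isdst flips from 1 (EDT) to 0 (EST); if date still reports EDT past that instant, the process is likely reading a stale or overridden zone setting.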

    Read the article

  • How can I find files added to the system within X minutes of a specific time?

    - by Jack W-H
    I have done a fresh install of Mac OS X Mountain Lion today on a new MacBook. Because this was a new install, when I finally got around to configuring some of my own developer things, I was surprised to find that some app had installed a binary into /usr/local/bin - a single binary called galileod. Interestingly, I can't find anything online about galileod. I had only installed the bare minimum of software at this point. Looking at the file columns, I can see that Date Modified was 9th November 2012, but Date Added to the system was today at 17:01. It's now 10:20 PM and I can't remember which software I was installing at that point. So how do I find out which other files were installed on the system within, say, 5 minutes either side of 17:01?

    EDIT: I found out what galileod is by running galileod --help - it is a binary used with Fitbit to communicate with the USB dongle. So that's the mystery solved - but it would still be interesting to know how to find files added within X minutes of a given time, for future reference.
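
    For future reference, one hedged approach using the BSD find that ships with OS X: filter on inode change time (ctime), which for a freshly installed file is effectively its install time. The date strings below are illustrative; substitute the actual day and window:

        # files under /usr/local whose inodes changed in the ten-minute
        # window around 17:01 on the day in question
        find /usr/local -type f \
            -newerct '12 Aug 2012 16:56' ! -newerct '12 Aug 2012 17:06' -print

    Note that Finder's "Date Added" is a Spotlight attribute rather than a plain file timestamp, so ctime is an approximation of it, not the same field.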

    Read the article

  • Use Mac Pro as time machine, server and editing station?

    - by Dan
    Background: my fiancée needs a Mac Pro for movie editing and rendering. I need a web server and a backup solution for my MacBook Pro.

    Idea: we thought we could split the cost of the Mac Pro and set it up to act as both a web server and a backup device.

    Question: is this a good idea? Specifically:

        Is it easy to set it up to incrementally back up one or several laptops over wifi? And what software would you recommend?
        Is it silent and stable enough to run a web server continuously?
        Will it manage all this, including simultaneous editing?

    Thanks.

    Read the article

  • How to have a soft-real-time process in the presence of a heavily swapping, IO-intensive background load?

    - by Vi
        schedtool: PID 32301: PRIO 4, POLICY R: SCHED_RR, NICE -20, AFFINITY 0xf
        ionice: realtime: prio 4

    But the music is stuttering anyway. The background load is low priority (SCHED_IDLEPRIO, idle ionice), but it uses a lot of memory (more than is physically available) and does a lot of IO and calculation. latencytop shows about 1500 ms for:

        Following symlink
        Writing buffer to disk (sync)
        Page fault
        Writing a page to disk

    both for the background load and for unrelated processes. Load average is 10 and counting.

    Why can't it allocate, for example, 200 MHz of one of the cores, 32 M of memory, and a not-less-than-once-per-second opportunity for IO to mplayer, to keep it happy while continuing calculations in the background? Or: why can't it leave the background task and the swap to each other, while keeping the rest of the system behaving as if there were no background load? How can I have RT processes AND a heavy background load simultaneously (without virtual machines)?
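
    One hedged mitigation for the player side (a sketch, not a guaranteed fix): pin the real-time process's pages in RAM with mlockall(), so the background job's swapping can never page the player out - which is what the "Page fault" entry in latencytop suggests is happening. On Linux this needs root or a sufficient RLIMIT_MEMLOCK.

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Lock all current and future pages of this process in RAM so
               heavy swapping elsewhere cannot evict them. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return 1;
            }

            /* ... decode and play audio here ... */
            return 0;
        }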

    Read the article

  • Snapshots are disappearing from Time Machine. Why?

    - by AntonAL
    I have Time Machine enabled for my external HDD, and I trigger backups manually twice per week. I have noticed that some snapshots I made are gone. I first noticed this behaviour several weeks ago, but I had doubts. To make sure, I recorded a screen video listing the snapshots in Time Machine. After watching my screen recording today, I'm sure that for August I had 9 snapshots. Now I have only 3 of them. What's happening? I have had no disk crash reports, errors, etc.

    Read the article

  • ASP.NET MVC: MVC Time Planner is available at CodePlex

    - by DigiMortal
    Almost every week I get e-mails in which my dear readers ask for the source code of my ASP.NET MVC and FullCalendar example. I have some great news, guys! I have ported my sample application to Visual Studio 2010 and made it available at CodePlex. Feel free to visit the page of MVC Time Planner.

    NB! The current release of MVC Time Planner is the initial one, and it is basically a conversion from the VS2008 example solution to VS2010. The current source code is not study material, but it gives you an idea of how to make the calendars work together. Future releases will introduce many design and architectural improvements. I have also planned some new features.

    How does MVC Time Planner look? The image on the right shows how the time planner looks. It uses the default design elements of ASP.NET MVC applications and jQueryUI. If I find some artistic skills in myself I will improve the design too, of course. :) Currently only the day view of the calendar is available; other views are coming in the near future (I hope within a week or two).

    Important links

    Here are some important links you may find useful:

        MVC Time Planner page @ CodePlex
        Documentation
        Release plan
        Help and support - questions, ideas, other communication
        Bugs and feature requests

    If you have any questions or you are interested in new features, please feel free to contact me through the MVC Time Planner discussion forums.

    Read the article

  • Time Travel 101

    - by Jim Duffy
    I’m thinking maybe I should have used “Time Crunching 101” as the title instead… or maybe “Duh, Duffy, where have you been? Everyone knows that!” OK, so maybe you won’t actually learn how to travel through time from this post, but you will learn how to cram more learning into one day. We all know you can’t make it to every conference, every presentation, or every training session. The good news is that many of those events make their content available either to watch online or to download for offline viewing. The problem is, who has time to sit and watch all those presentations in real time? Not me.

    One trick I use is to view the content at an increased play rate. Why listen to a boring speaker like me drone on for the entire length of the session when you can listen to them drone on in almost half the time? :-) I view nearly all offline content with Windows Media Player, though I’m sure you can implement this idea with any media playback software. The idea is to change the playback speed you view the content at. With Windows Media Player you can change the play speed from the menu system. Once you have the Play Speed Settings panel open, you can specify the playback speed. Depending on the content and the presenter, I can typically listen at between 1.6 and 2.0 times normal speed. My Florida edumacation taught me that playing the video back at twice the speed means I’ll listen to it twice as fast, and that means I can view it in almost half the time. Too bad it won’t make me twice as smart. :-)

    I hope this helps you speed your way through more training content. Have a day. :-|

    Read the article

  • Passing elapsed time to the update function from the game loop

    - by Sri Harsha Chilakapati
    I want to pass the elapsed time to the update() method, as this would make it easy to implement animations and other time-related concepts. Here's my game loop:

        public void gameLoop() {
            boolean running = true;
            long gameTime = getCurrentTime();
            long elapsedTime = 0;
            long lastUpdateTime = 0;
            int loops;

            while (running) {
                loops = 0;
                while (getCurrentTime() > gameTime && loops < Global.MAX_FRAMESKIP) {
                    elapsedTime = getCurrentTime() - lastUpdateTime;
                    lastUpdateTime = getCurrentTime();
                    update(elapsedTime);
                    gameTime += SKIP_STEPS;
                    loops++;
                }
                displayGame();
            }
        }

    The getCurrentTime() method:

        public long getCurrentTime() {
            return (System.nanoTime() / 1000000);
        }

    The update() method:

        long time = 0;

        public void update(long elapsedTime) {
            time += elapsedTime;
            if (time >= 1000) {
                System.out.println("A second elapsed");
                time -= 1000;
            }
        }

    But this prints the message for 3 seconds. Thanks.
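
    A hedged reading of the symptom: lastUpdateTime starts at 0, so the first elapsedTime passed to update() is the entire value of the millisecond clock, and since each update() subtracts only 1000, the message then fires on every update for a while. A minimal sketch of the fix (the class name is hypothetical; the field and method names follow the question's code):

        public class Ticker {
            private long time = 0;
            // start the update clock at 'now' so the first elapsed value is tiny
            private long lastUpdateTime = System.nanoTime() / 1000000;

            public void update(long elapsedTime) {
                time += elapsedTime;
                while (time >= 1000) {   // drain whole seconds, not at most one per call
                    System.out.println("A second elapsed");
                    time -= 1000;
                }
            }

            // call once per pass of the game loop
            public void tick() {
                long now = System.nanoTime() / 1000000;
                update(now - lastUpdateTime);
                lastUpdateTime = now;
            }
        }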

    Read the article

  • Inconsistency between date-time in terminal and clock

    - by Franck
    I can't figure out why there is a 5-hour difference between the GUI clock and the date command in the terminal. My BIOS clock is set to GMT... Any ideas?

        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:48:47 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure tzdata

        Current default time zone: 'Europe/Paris'
        Local time is now:      Wed Apr 11 09:49:02 CEST 2012.
        Universal Time is now:  Wed Apr 11 07:49:02 UTC 2012.

        franck@franck-ThinkPad-T61:~$ tail /etc/timezone
        Europe/Paris
        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:21 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata

        Current default time zone: 'Europe/Paris'
        Local time is now:      Wed Apr 11 09:49:27 CEST 2012.
        Universal Time is now:  Wed Apr 11 07:49:27 UTC 2012.

        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:30 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo cat /etc/default/rcS
        #
        # /etc/default/rcS
        #
        # Default settings for the scripts in /etc/rcS.d/
        #
        # For information about these variables see the rcS(5) manual page.
        #
        # This file belongs to the "initscripts" package.
        TMPTIME=0
        SULOGIN=no
        DELAYLOGIN=no
        UTC=yes
        VERBOSE=no
        FSCKFIX=no
        franck@franck-ThinkPad-T61:~$ sudo hwclock --show
        mer. 11 avril 2012 07:49:48 CDT -0.555705 secondes
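
    A hedged first check: date honours a TZ environment variable before the system-wide zone, so if TZ is set to a UTC-5 zone in this shell, the terminal and the GUI clock will disagree in exactly this way:

        # if this prints a zone (e.g. America/Chicago), the shell is overriding
        # the system default; unset it and date should follow Europe/Paris again
        echo $TZ
        TZ=Europe/Paris date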

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial trail dedicated to the topic. It is a significant leap forward for processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar - very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally, you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 does not support the new API out of the box. So what are you to do? Fortunately you can make use of fairly simple JPA 2.1 type converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well-written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 type converters as well. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension? In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification; it is well worth looking at. Another possibility, of course, is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?
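
    For flavor, here is the general shape of such a converter for java.time.LocalDate (a sketch of the standard JPA 2.1 AttributeConverter pattern, not Gertiser's exact code):

        import java.sql.Date;
        import java.time.LocalDate;
        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        // autoApply = true makes the converter cover every LocalDate
        // attribute in the persistence unit without per-field @Convert.
        @Converter(autoApply = true)
        public class LocalDateConverter implements AttributeConverter<LocalDate, Date> {

            @Override
            public Date convertToDatabaseColumn(LocalDate attribute) {
                return attribute == null ? null : Date.valueOf(attribute);
            }

            @Override
            public LocalDate convertToEntityAttribute(Date dbData) {
                return dbData == null ? null : dbData.toLocalDate();
            }
        }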

    Read the article
