Search Results

Search found 10731 results on 430 pages for 'day'.

Page 36/430 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Changing wallpaper depending on time of day via script or batch file?

    - by Patrick
    I want two different wallpapers that change with the time of day: the day wallpaper from 6:00 until 22:00, and the night wallpaper from 22:00 until 6:00. I didn't find a program that handles this correctly after resuming from standby, so I thought it should be easy to do with the Task Scheduler running a script. The question is not only how to write such a script, but also whether the time check should live in the script or in the Task Scheduler. I'm not sure which would cope better with long periods of the PC being in standby. I have already tried a few scripts from similar questions, hoping I could adapt them to my needs, but they didn't work at all. Anyone able to help me? TIA.
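
    One minimal sketch of the script half of such a setup, assuming PowerShell is available; the wallpaper paths and the 6:00/22:00 split are placeholders to adjust, and the Task Scheduler would trigger the script at 6:00, at 22:00 and on resume from standby:

    ```powershell
    # Sketch only: pick a wallpaper based on the current hour and apply it.
    # The image paths are placeholders - point them at your own files.
    $hour = (Get-Date).Hour
    $wallpaper = if ($hour -ge 6 -and $hour -lt 22) { 'C:\Wallpapers\day.jpg' } else { 'C:\Wallpapers\night.jpg' }

    # SystemParametersInfo(20 = SPI_SETDESKWALLPAPER) applies the change immediately;
    # the final argument 3 also writes it to the user profile and broadcasts the update.
    $signature = '[DllImport("user32.dll", CharSet = CharSet.Auto)] public static extern int SystemParametersInfo(int uAction, int uParam, string lpvParam, int fuWinIni);'
    Add-Type -MemberDefinition $signature -Name NativeMethods -Namespace Wallpaper
    [Wallpaper.NativeMethods]::SystemParametersInfo(20, 0, $wallpaper, 3) | Out-Null
    ```

    Keeping the hour check inside the script means a single "on resume" or "on unlock" trigger always lands on the right image, no matter how long the machine was in standby.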

    Read the article

  • How can I fix a date that changes by 4 years and 1 day when pasted between Excel workbooks

    - by lcbrevard
    In Excel, dates are represented internally by a floating-point number where the integer part is the number of days since "some date" and the fractional part is how far into that day (hence the time). You can see this if you change the format of a date - like 4/10/2009 to a number 39905. But when pasting a date between two different workbooks, the date shifts by 4 years and one day! In other words, "some date" is different between the two workbooks. In one workbook the number 0.0 represents 1/0/1900 and in the other 0.0 represents 1/1/1904. Where is this set and is it controllable? Or does this represent a corrupted file? These workbooks were originally from Excel 2000 but have been worked on since in Excel 2007 and Excel 2003. I can demonstrate the problem between the two workbook files in both 2003 and 2010. The exact history of when they were created or what versions of Excel have been used on each is unknown.
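
    For reference, the two date systems are offset by a fixed 1,462 days, which on the calendar looks like roughly four years plus one day; a rough sketch of where that constant comes from (not tied to the asker's exact serial numbers), with the per-workbook switch usually labelled "Use 1904 date system" in Excel's options:

    ```
    serial in a 1900-system workbook = serial in a 1904-system workbook + 1462

    1462 days = 1460 real days between 1-Jan-1900 and 1-Jan-1904
              +    1 for Excel's phantom 29-Feb-1900
              +    1 because the 1900 system calls 1-Jan-1900 "day 1"
                   while the 1904 system calls 1-Jan-1904 "day 0"
    ```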

    Read the article

  • Finding day of week in batch file? (Windows Server 2008)

    - by Daniel Magliola
    I have a process I run from a batch file, and I only want to run it on a certain day of the week. Is it possible to get the day of week? All the examples I found somehow rely on "date /t" returning something like "Friday, 12/11/2009"; however, on my machine, "date /t" returns "12/11/2009". No weekday there. I've already checked the "regional settings" for my machine, and the long date format does include the weekday. The short date format doesn't, but I'd really rather not change that, since it'll affect a bunch of stuff I do. Any ideas here? Thanks! Daniel
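
    One locale-independent sketch, assuming PowerShell is present on the box (it ships with Server 2008 R2 and is an installable feature on the original Server 2008), rather than parsing DATE /T:

    ```bat
    @echo off
    rem Sketch: ask PowerShell for the weekday name instead of parsing the
    rem locale-dependent output of DATE /T.
    for /f %%d in ('powershell -NoProfile -Command "(Get-Date).DayOfWeek"') do set DOW=%%d
    echo Today is %DOW%
    if /i "%DOW%"=="Friday" (
        rem run the weekly process here
        echo Running the Friday-only job...
    )
    ```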

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan.

    Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular Data Stream).

    As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.”

    Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80 KB query over the network to the SQL Server. (I really thought this did not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80 KB query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network.
    On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80 KB query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled.

    Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

    Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the ‘GO’ command, then there will be three different roundtrips.

    TDS Packets sent from the client – TDS (Tabular Data Stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips.

    TDS packets received from server – the number of TDS packets sent by the server to the client during the query execution.

    Bytes sent from client – the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as the name of the procedure plus parameters, and this will minimize the network pressure.

    Bytes received from server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.

    Client processing time – the amount of time in milliseconds between the first received response packet and the last received response packet by the client.

    Wait time on server replies – the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    Total execution time – the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. A long ‘client processing time’ does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned.

    The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers work on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client.

    Here is another example: think about a similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500 KB each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post on this topic a while back, and if you are interested, you can read it here.

    And finally, here are some guidelines for monitoring the network performance and improving it:

    Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’.

    Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.

    Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on.

    Find some time to read a bit about networking.

    In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
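
    As a small companion to the monitoring guidelines above, the wait type discussed earlier can be checked directly from the DMVs (the values are cumulative since the wait statistics were last reset):

    ```sql
    -- Sketch: how much ASYNC_NETWORK_IO wait has accumulated, i.e. how often the
    -- server had to wait for clients to consume result sets.
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'ASYNC_NETWORK_IO';
    ```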
    As further reading I would still highly recommend the Wait Stats series on this blog, and I would also recommend you have the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Site too large to officially use Google Analytics?

    - by Jeff Atwood
    We just got this email from the Google Analytics team: We love that you love our product and use it as much as you do. We have observed however, that a website you are tracking with Google Analytics is sending over 1 million hits per day to Google Analytics servers. This is well above the "5 million pageviews per month per account" limit specified in the Google Analytics Terms of Service. Processing this amount of data multiple times a day takes up valuable resources that enable us to continue to develop the product for all Google Analytics users. As such, starting August 23rd, 2010, the metrics in your reports will be updated once a day, as opposed to multiple times during the course of the day. You will continue to receive all the reports and features in Google Analytics as usual. The only change will be that data for a given day will appear the following day. We trust you understand the reasons for this change. I totally respect this decision, and I think it's very generous to not kick us out. But how do we do this the right way -- what's the official, blessed Google way to use Google Analytics if you're a "whale" website with lots of hits per day? Or, are there other analytics services that would be more appropriate for very large websites?

    Read the article

  • How to rotate different data across days of the week in PHP [migrated]

    - by shihon
    I am working on a project in which I have to distribute a different ad each day. The ads, in the form of an array, look like this:

    $ad = array(
        'attribute1_value' => "12",
        'attribute2_value' => "xyz",
        'attribute3_value' => 'http://example.com',
        'attribute4_value' => 'data');

    The logic I am using with a switch case:

    $day = date('w', time());
    switch ($day) {
        case '0':
            if ($day == '0') {
                $count = 0;
                echo $ad;
                $count++;
            } else {
                $count = 7;
                echo $ad;
            }
            break;
        case '1':
            if ($day == '1') {
                $count = 1;
                echo $ad;
                $count++;
            } else {
                $count = 8;
                echo $ad;
            }
            break;

    The problem is that if I have ~15 ads and want to distribute one ad per day, date('w') outputs the present weekday, but after day 7 (i.e. Saturday) ad number 8 should start on Sunday. I have to implement this scenario using the date function. Also, I have to send ads only to those users who have not experienced the ad before. I am not an expert in PHP; I am a beginner working in PHP/MySQL. Kindly help me to improve this concept.
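
    One hedged way to avoid the weekly wrap-around is to index the ad by day of year instead of day of week; a minimal sketch (the array contents are placeholders, and tracking which users have already seen an ad would need separate per-user storage):

    ```php
    <?php
    // Sketch: pick today's ad by day of year, wrapping around however many ads exist.
    $ads = array(
        array('attribute3_value' => 'http://example.com/ad1'),
        array('attribute3_value' => 'http://example.com/ad2'),
        array('attribute3_value' => 'http://example.com/ad3'),
        // ... add the remaining ads here ...
    );

    $dayOfYear = (int) date('z');           // 0..364 (365 in leap years)
    $index     = $dayOfYear % count($ads);  // cycles through all ads, then starts over
    $todaysAd  = $ads[$index];

    echo $todaysAd['attribute3_value'];     // e.g. print the ad's URL
    ```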

    Read the article

  • 10 lines of code per day is the global average!? -- true?

    - by Earlz
    Ok, so last year I participated in a high school curriculum contest thing at a college (I currently attend this college). I actually got 1st in it but was still a bit angry I didn't get every single one right. The most baffling of the questions on there was: How many lines of code does the average programmer write per day? A. 5 B. 10 C. 25 D. 30 Aside from being a subjective question which depends on language and everything else, I was more baffled at what they had as the correct answer: 10. Even on my bad days at my job I touch more than 10 lines of code (either adding, modifying, or deleting) per day. And when I took this test I had only programmed as a hobby, where it was common for me to write a few hundred lines per day for one of my new projects. Where are they getting this random number of ten!? Is this published somewhere? A quick googling found me nothing.

    Read the article

  • MYSQL: How do I set a date (makedate?) with month, day, and year

    - by chongman
    Hi. I have three columns, y, m, and d (year, month, and day) and want to store this as a date. What function would I use in MySQL to do this? Apparently MAKEDATE uses year and day of year (see below), but I have the month. I know I can use STR_TO_DATE(str, format) by constructing the string from (y, m, d), but I would guess there is an easier way to do it. Reference: MAKEDATE(year, dayofyear) returns a date, given year and day-of-year values; dayofyear must be greater than 0 or the result is NULL.
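
    Two hedged possibilities, assuming the three columns live in a table called mytable here (the name is made up):

    ```sql
    -- Build the date by formatting the parts into a string MySQL can parse:
    SELECT STR_TO_DATE(CONCAT_WS('-', y, m, d), '%Y-%m-%d') AS the_date
    FROM mytable;

    -- Or keep MAKEDATE and add the remaining months and days as intervals:
    SELECT MAKEDATE(y, 1) + INTERVAL (m - 1) MONTH + INTERVAL (d - 1) DAY AS the_date
    FROM mytable;
    ```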

    Read the article

  • how to calculate the beginning of a day given milliseconds?

    - by conman
    I want to figure out the start of the day given a time in milliseconds. So say I'm given this: 1340323100024, which is around midday on 6/21/2012. Now I want the milliseconds for the beginning of that day, which would be 1340262000000 (at least I think that's what it's supposed to be). How do I get 1340262000000 from 1340323100024? I tried doing Math.floor(1340323100024/86400000) * 86400000 but that gives me 1340236800000, which, if I create a Date object out of it, says it's the 20th. I know I can create a Date object from 1340323100024, then get the month, year, and date to create a new object which would give me 1340262000000, but I find it ridiculous that I can't figure out something so simple. Any help would be appreciated. By the way, I'm doing this in JavaScript if it makes any difference.
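
    A minimal sketch of the usual approach: the division trick floors in UTC, so the local timezone offset is what pushes the result back onto the 20th; letting a Date object zero out the local time avoids the problem:

    ```javascript
    // Sketch: midnight (local time) of the day containing the given timestamp.
    function startOfDay(ms) {
        var d = new Date(ms);
        d.setHours(0, 0, 0, 0);   // zero out hours, minutes, seconds, milliseconds locally
        return d.getTime();
    }

    console.log(startOfDay(1340323100024)); // 1340262000000 in a UTC-7 timezone, per the question
    ```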

    Read the article

  • SFINAE failing with enum template parameter

    - by zeroes00
    Can someone explain the following behaviour? (I'm using Visual Studio 2010.)

    Header:

    #pragma once
    #include <boost\utility\enable_if.hpp>
    using boost::enable_if_c;

    enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

    template<WeekDay DAY> typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}
    template<WeekDay DAY> typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    Source:

    bool b = goToWork<MONDAY>();

    Compiling this gives error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY!=6,bool>::type goToWork(void)' and error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY==6,bool>::type goToWork(void)'.

    But if I change the function template parameter from the enum type WeekDay to int, it compiles fine:

    template<int DAY> typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}
    template<int DAY> typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    Also the normal function template specialization works fine, no surprises there:

    template<WeekDay DAY> bool goToWork() {return true;}
    template<> bool goToWork<SUNDAY>() {return false;}

    To make things even weirder, if I change the source file to use any other WeekDay than MONDAY or TUESDAY, i.e.

    bool b = goToWork<THURSDAY>();

    the error changes to this: error C2440: 'specialization' : cannot convert from 'int' to 'const WeekDay' Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)
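
    Not an explanation of the compiler behaviour, but since the question itself shows that an int non-type parameter is accepted, one possible workaround is to do the SFINAE on int and keep the enum only at the public entry point. This is an untested sketch; goToWorkImpl is a made-up helper name:

    ```cpp
    #include <boost/utility/enable_if.hpp>
    using boost::enable_if_c;

    enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

    // SFINAE on an int parameter, which VS2010 handles, per the question.
    template<int DAY> typename enable_if_c< DAY == SUNDAY, bool >::type goToWorkImpl() { return false; }
    template<int DAY> typename enable_if_c< DAY != SUNDAY, bool >::type goToWorkImpl() { return true;  }

    // Public interface keeps the strongly typed enum parameter.
    template<WeekDay DAY> bool goToWork() { return goToWorkImpl<static_cast<int>(DAY)>(); }

    int main() {
        bool weekday = goToWork<MONDAY>();  // expected: true
        bool sunday  = goToWork<SUNDAY>();  // expected: false
        return (weekday && !sunday) ? 0 : 1;
    }
    ```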

    Read the article

  • Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21

    - by Pinal Dave
    Data is forever. Think about it – it is indeed true. Are you using any application as-is that was built 10 years ago? Are you using any piece of hardware that was built 10 years ago? The answer is most certainly No. However, if I ask you – are you using any data which was captured 50 years ago, the answer is most certainly Yes. For example, look at the history of our nation. I am from India and we have documented history which goes back over thousands of years. Well, just look at our birthday data – at least we are using it to this day. Data never gets old and it is going to stay there forever. The applications which interpret and analyze data have changed, but the data has remained in its purest format in most cases.

    As organizations have grown, the data associated with them has also grown exponentially, and today there is a lot of complexity to their data. Most of the big organizations have data in multiple applications and in different formats. The data is also spread out so much that it is hard to categorize it with a single algorithm or logic. The mobile revolution which we are experiencing right now has completely changed how we capture data and build intelligent systems. Big organizations are indeed facing challenges to keep all the data on a platform which gives them a single consistent view of their data. This unique challenge – to make sense of all the data coming in from different sources and derive useful actionable information out of it – is the revolution the Big Data world is facing.

    Defining Big Data

    The 3Vs that define Big Data are Variety, Velocity and Volume.

    Volume

    We currently see exponential growth in data storage, as the data is now more than text data. We can find data in the form of videos, music and large images on our social media channels. It is very common to have Terabytes and Petabytes of storage for enterprises. As the database grows, the applications and architecture built to support the data need to be re-evaluated quite often. Sometimes the same data is re-evaluated from multiple angles, and even though the original data is the same, the newfound intelligence creates an explosion of the data. This big volume indeed represents Big Data.

    Velocity

    The data growth and social media explosion have changed how we look at the data. There was a time when we used to believe that data of yesterday is recent. Newspapers, as a matter of fact, still follow that logic. However, news channels and radios have changed how fast we receive the news. Today, people rely on social media to update them with the latest happenings. On social media, sometimes a message that is only a few seconds old (a tweet, a status update etc.) is no longer something that interests users. They often discard old messages and pay attention to recent updates. The data movement is now almost real time and the update window has reduced to fractions of a second. This high velocity data represents Big Data.

    Variety

    Data can be stored in multiple formats. For example database, Excel, CSV, Access or, for that matter, it can be stored in a simple text file. Sometimes the data is not even in the traditional format we assume; it may be in the form of video, SMS, PDF or something we might not have thought about. It is the need of the organization to arrange it and make it meaningful. It would be easy to do so if we had data in the same format, however that is not the case most of the time. The real world has data in many different formats and that is the challenge we need to overcome with Big Data.
    This variety of the data represents Big Data.

    Big Data in Simple Words

    Big Data is not just about lots of data; it is actually a concept providing an opportunity to find new insight into your existing data, as well as guidelines to capture and analyze your future data. It makes any business more agile and robust so it can adapt and overcome business challenges.

    Tomorrow

    In tomorrow’s blog post we will discuss the Evolution of Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQLAuthority News – Technical Review of Learning at Koenig Solutions

    - by pinaldave
    Yesterday I finished my 3-day fast-track, in-person learning of the course End to End SQL Server Business Intelligence at Koenig Solutions. You can read my previous article over here regarding why I am learning SQL Server. Yesterday I blogged about my experience of arriving at the Training Center and my induction at the center.

    The Training Days

    I had enrolled for three days of training, so my routine on each of the three days was very much the same. However, the content every day was different as I was learning something new every day. Let me describe a few of the interesting details of my daily routine.

    A Single Student Batch

    The best part of my training was that in my training batch, I was the only student. Koenig is known for smaller batches and often they have single-student batches as well. I was very much delighted to know that I would have dedicated access to, and attention from, my trainer, as I would be the only student in my batch. In most of the labs I observed there were no more than 4 students at any time. (Photo: Prakash and Pinal)

    7:30 AM – Breakfast Talk

    All of us students gather at 7:30 in the breakfast area. The best time of the day. I was the only Indian student in the group. The other students were from the USA, Canada, Nigeria, Bhutan, Tanzania, and a few other countries. I immediately became the source of information and the reference manual. Though the distance between Delhi and Bangalore is 2000+ km, I was considered a local guy.

    8:30 AM – Heading to the Training Center

    Every day without fail at 8:30 the van started from our accommodation to the training center. As mentioned in an earlier blog post, the distance is about 5 minutes and we were able to reach the location before 8:45. This gave us some time to settle in before our class started at 9:00 AM.

    9:00 AM – Order Lunch Food

    Well, it may sound funny that we had breakfast only 30 minutes earlier, but the first thing everybody has to do is order lunch as soon as the class starts. There is an online training portal to order food for the day. Everybody has to place their order early in the day so the food arrives on time at lunch time. Everybody can order whatever they want using the online ordering system. The options are plenty and everybody can order what they like.

    9:05 AM – Learning Starts

    After deciding on lunch we started the learning. I was very fortunate to have a very experienced trainer - Prakash Chheatry. Though I had never met him before, I had heard a lot about Prakash. He is known as the topmost SQL Server trainer in India. His student list contains some of the very well-known SQL Server experts of the world and a few SQL Server "best seller" book authors. Learning continues till 1:00 PM with one tea/coffee break in between.

    1:00 PM – Lunch

    Lunch time is again a fun time. All of us students get together in the afternoon and tell stories of the world. Indeed the best part of the day besides learning new stuff.

    4:55 PM – Ready to Return

    We stop at 4:55, as at precisely 5:00 PM the van stops by the institute to take us back to our accommodation. Trust me, it is always a seriously long day, but the amount of learning is the win of the day.

    7:30 PM – Dinner Time

    After coming back to the accommodation I study till 7:30 and then rush for dinner. Dinner is world cuisine and the desserts are really delicious. After dinner every day I wrote a blog post and retired early, as the next day was always going to be busier than the present day.

    What Did I Learn

    As I mentioned earlier, I know SQL Server fairly well.
    I had expressed the same in my conversation as well. This is the reason I was assigned a fairly senior trainer and we learned everything quite quickly. As I know quite a few things, we went pretty fast through many topics. There were a few things I wanted to learn in detail as well as practice in the labs. We slowed down where we wanted and rushed through the concepts where I was very comfortable. Here is the list of the things which we covered in these action-packed three days:

    Introduction to Business Intelligence (Intro)
    SQL Server Analysis Services (Theory and Lab)
    SQL Server Integration Services (Theory and Lab)
    SQL Server Reporting Services (Theory and Lab)
    SQL Server PowerPivot (Lab)
    UDM (Theory)
    SharePoint Concepts (Theory)
    Power View (Demo)
    Business Intelligence and Security (Discussion)

    Well, I was delighted that I was able to refresh lots of concepts during these three days. Thanks to my trainer and my friend who helped me have a good learning experience. I believe all the learning will help me in my growth and future career. With this I end this experience. I am planning to have another online learning experience later this month. I will blog about my experience as I begin it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we had covered earlier:

    DMVs related to wait stats reset when we reset SQL Server services
    DMVs related to wait stats reset when we manually reset the wait types

    However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

    -- Create Table
    CREATE TABLE [MyWaitStatTable](
    [wait_type] [nvarchar](60) NOT NULL,
    [waiting_tasks_count] [bigint] NOT NULL,
    [wait_time_ms] [bigint] NOT NULL,
    [max_wait_time_ms] [bigint] NOT NULL,
    [signal_wait_time_ms] [bigint] NOT NULL,
    [CurrentDateTime] DATETIME NOT NULL,
    [Flag] INT
    )
    GO
    -- Populate Table at Time 1
    INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
    FROM sys.dm_os_wait_stats
    GO
    -- Desired Delay (for one hour)
    WAITFOR DELAY '01:00:00'
    -- Populate Table at Time 2
    INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
    FROM sys.dm_os_wait_stats
    GO
    -- Check the difference between Time 1 and Time 2
    SELECT T1.wait_type,
    T1.wait_time_ms Original_WaitTime,
    T2.wait_time_ms LaterWaitTime,
    (T2.wait_time_ms - T1.wait_time_ms) DiffenceWaitTime
    FROM MyWaitStatTable T1
    INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
    WHERE T2.wait_time_ms > T1.wait_time_ms
    AND T1.Flag = 1 AND T2.Flag = 2
    ORDER BY DiffenceWaitTime DESC
    GO
    -- Clean up
    DROP TABLE MyWaitStatTable
    GO

    If you notice the script, I have used an additional column called Flag. I use it to record when I captured the wait stats, and then use it in my SELECT query to select the wait stats related to that time group. Many times I collect more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of HIVE in the Big Data Story. In this article we will understand what PIG and PIG Latin are in the Big Data Story. Yahoo started working on Pig for their application deployment on Hadoop; Yahoo’s goal was to manage their unstructured data.

    What is Pig and What is Pig Latin?

    Pig is a high-level platform for creating MapReduce programs used with Hadoop, and the language we use on this platform is called Pig Latin. Pig was designed to make Hadoop more user-friendly and approachable by power users and non-developers. PIG is an interactive execution environment supporting the Pig Latin language. The Pig Latin language supports loading and processing input data through a series of transformations to produce the desired results. PIG has two different execution environments: 1) Local Mode – in this case all the scripts run on a single machine; 2) Hadoop – in this case all the scripts run on a Hadoop cluster.

    Pig Latin vs SQL

    Pig essentially creates a set of map and reduce jobs under the hood. Because of this, users do not have to write, compile and build a solution for Big Data themselves. Pig is very similar to SQL in many ways. The Pig Latin language provides an abstraction layer over the data. It focuses on the data and not the structure under the hood. Pig Latin is a very powerful language and it can do various operations like loading and storing data, streaming data, filtering data, as well as various data operations related to strings. The major difference between SQL and Pig Latin is that Pig is procedural and SQL is declarative. In simpler words, Pig Latin is very similar to a SQL execution plan and that makes it much easier for programmers to build various processes. Whereas SQL handles trees naturally, Pig Latin follows a directed acyclic graph (DAG). DAGs are used to model several different kinds of structures in mathematics and computer science.

    Tomorrow

    In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Zookeeper. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
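
    A minimal Pig Latin sketch of the kind of transformation described above (the file name, schema and threshold are all made up for illustration):

    ```pig
    -- Load a tab-separated file, keep only the large records, and store the result.
    -- Each statement is turned into MapReduce work by Pig under the hood.
    logs  = LOAD 'access_log.txt' USING PigStorage('\t')
            AS (user:chararray, bytes:long);
    heavy = FILTER logs BY bytes > 1000000;
    STORE heavy INTO 'heavy_users';
    ```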

    Read the article

  • TechEd 2012: Day 3 – Build Me A Solution

    - by Tim Murphy
    While digesting my lunch it was time to digest some TFS Build information. While much of my time is spent wearing my developer’s hat, I am still a jack of all trades, and automated builds are an important aspect of any project.  Because of this I was looking forward to finding out what new features are available in the latest release of Team Foundation Server. The first feature that caught my attention is the TFS Admin Client.  After being used to dealing with NAnt in the past, it is nice to see a build configuration GUI that is so flexible and well thought out.  The bonus is that the tools incorporated in Visual Studio 2012 are just as feature-rich.  Life is good. Since automated builds are the hub of your development process in a continuous integration shop, I was really interested in the process-related options. The biggest value-add that I noticed was merge gated check-ins.  Merge or batch gated check-ins are an interesting concept.  If the build breaks with all the changes, then TFS will run separate builds for each of the check-ins.  This ability to identify the actual offending check-in can save a lot of time and gray hair. The safari of TFS Build that was this session was packed with attractions.  How do you set up builds, what are the different flavors of builds, how does the system report how the build went?  I would suggest anyone who is responsible for build automation spend some serious time with TFS 2012 and VS2012. del.icio.us Tags: Team Foundation Server 2012,TFS,Build,TechEd,TechEd 2012,Visual Studio 2012

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Relational Database and the NoSQL database in the Big Data Story. In this article we will understand the role of Key-Value Pair Databases and Document Databases in supporting the Big Data Story. Now we will see a few examples of the operational databases:

    Relational Databases (Yesterday’s post)
    NoSQL Databases (Yesterday’s post)
    Key-Value Pair Databases (This post)
    Document Databases (This post)
    Columnar Databases (Tomorrow’s post)
    Graph Databases (Tomorrow’s post)
    Spatial Databases (Tomorrow’s post)

    Key Value Pair Databases

    Key Value Pair Databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantage of Key Value Pair (KVP) databases is that they do not follow the ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication as well as high availability. In KVP databases the data is stored as strings. Here is a simple example of what a Key Value database looks like:

    Key      Value
    Name     Pinal Dave
    Color    Blue
    Twitter  @pinaldave
    Name     Nupur Dave
    Movie    The Hero

    As the number of users grows in Key Value Pair databases, it starts getting difficult to manage the entire database. As there is no specific schema or rules associated with the database, there are chances that the database grows exponentially as well. It is very crucial to select the right Key Value Pair database which offers an additional set of tools to manage the data and provides finer control over various business aspects of the same.

    Riak

    Riak is one of the most popular Key Value databases. It is known for its scalability and performance in high volume and velocity databases. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will further discuss Riak in future blog posts. Key Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage while keeping scalability in mind – KVP databases are good options to consider.

    Document Database

    There are two different kinds of document databases: 1) full document content (web pages, Word docs etc.) and 2) storing document components for storage. It is the second type of document database we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy to understand language and it is very easy to write for applications. There are two major structures of JSON used for document databases – 1) name/value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB

    MongoDB databases are composed of collections. Each collection is built of documents and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available, supports query services as well as MapReduce. It is often used in high volume content management systems.

    CouchDB

    CouchDB databases are composed of documents which consist of fields and attachments (known as the description). It supports ACID properties.
    The main attraction point of CouchDB is that it will continue to operate even when network connectivity is sketchy. Due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which are changing very frequently. A good example of document database usage is real-time analytics in social networking or content management systems.

    Tomorrow

    In tomorrow’s blog post we will discuss various other operational databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
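
    A tiny sketch of what one such JSON document could look like from a MongoDB shell, reusing the key/value example above (the collection name and index choice are made up):

    ```javascript
    // Insert one document into a hypothetical 'people' collection and index it.
    db.people.insert({ name: "Pinal Dave", color: "Blue", twitter: "@pinaldave" });
    db.people.ensureIndex({ twitter: 1 });   // index the field we expect to query on

    // Fetch it back by the indexed field.
    db.people.find({ twitter: "@pinaldave" });
    ```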

    Read the article

  • iPad Jailbreak – On The Lam In A Single Day

    - by David Totzke
    Exploits to jailbreak the iPhone are well known.  The iPad runs on the iPhone 3.2 firmware.  What this means is that the iPad was shipped with known security vulnerabilities that would allow someone to gain root access to the device. Nice. It’s not like these are security vulnerabilities that are known but have no exploits.  The exploits are numerous and freely available. Of course, if you fit the demographic, you probably have nothing to worry about. Magical and Revolutionary?  Hardly.

    Dave ("Just because I can…")

    Read the article

  • Oracle Applications Day 2012. Experience the Global Innovation of Management Applications

    - by antonella.buonagurio
    10 October 2012 – Milan, East End Studios | 17 October 2012 – Rome, Officine Farneto

    Join the event dedicated to the community of Customers and Partners to network and share experiences on the most innovative solutions for facing current and future challenges.

    In Milan (10/10/2012) speakers will include, among others: Enrico Ancona, CEO – Imperia & Monferrina, and Business Reply; Massimiliano Gerli, CIO – Amplifon, and Michele Paolin, Senior Manager – Deloitte eXtended Business Services.

    In Rome (17/10/2012) speakers will include, among others: Giulio Carone, CFO – Enel Green Power, and Claudio Arcudi, Senior Executive – Accenture; Gianluca D’Aniello, CIO – Sky, and Giorgio Pitruzzello, Manager – Deloitte Consulting; Spartaco Parente, EPD Change & Label Control – Abbott, and Business Reply.

    Contributions are also planned from the companies Abbott, Aeroporto di Napoli, Amplifon, Dema Aerospace, Enel Green Power, Fiera Milano, Imperia & Monferrina, La Rinascente, Safilo, Sky, Spal, Technogym, Tiscali and Tivù, who will speak about: Innovation for Human Resources; Performance Management Excellence; Empower Applications with Technology (Milan); Applications for Public Sector (Rome); Next Generation Global Operations; Customer Experience Revolution.

    In addition, more than ten Instant Workshops will let you get to know and share the experience of the Partners and the companies that successfully use Oracle solutions. Register on the website.

    Take part in the Oracle I.M.A.G.E. photo contest and win your own iPad! Shoot the images that, for you, describe the five concepts of the event (Innovation, Management, Applications, Global, Experience) and send them by e-mail. To enter the contest, visit the Contest page on the website. Don’t miss the most "social cool" event of the year!

    Read the article

  • Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of analytics in the Big Data Story. In this article we will understand how to become a Data Scientist for the Big Data Story. Data Scientist is a new buzzword; everyone seems to want to become a Data Scientist. Let us go over a few key topics related to Data Scientists in this blog post. First of all we will understand what a Data Scientist is. In the new world of Big Data, I see pretty much everyone wanting to become a Data Scientist, and there are lots of people I have already met who claim that they are Data Scientists. When I ask what their role is, I get a wide variety of answers.

    What is a Data Scientist?

    Data scientists are the experts who understand various aspects of the business and know how to strategize with data to achieve the business goals. They should have a solid foundation in various data algorithms, modeling and statistics methodology.

    What do Data Scientists do?

    Data scientists understand the data very well. They go beyond the regular data algorithms and build interesting trends from the available data. They innovate and resurrect entirely new meaning from the existing data. They are artists in the disguise of computer analysts. They look at the data traditionally as well as explore various new ways to look at the data. Data Scientists do not wait to build their solutions from existing data. They think creatively; they think before the data has entered the system. Data Scientists are visionary experts who understand the business needs and plan ahead of time; this tremendously helps to build solutions at rapid speed. Besides being a data expert, the major quality of a Data Scientist is "curiosity". They always wonder about what more they can get from their existing data and how to get the maximum out of future incoming data. Data Scientists do wonders with the data, which goes beyond the job description of a Data Analyst or Business Analyst.

    Skills Required for Data Scientists

    Here are a few of the skills a Data Scientist must have:

    Expert-level skills with statistical tools like SAS, Excel, R etc.
    Understanding of mathematical models
    Hands-on experience with visualization tools like Tableau, PowerPivot, D3.js etc.
    Analytical skills to understand business needs
    Communication skills

    On the technology front, any Data Scientist should know the underlying technologies like Hadoop and Cloudera as well as their entire ecosystem (programming languages, analysis and visualization tools etc.). Remember that becoming a successful Data Scientist requires excellent skills; just having a degree in a relevant education field will not suffice.

    Final Note

    Data Scientist is indeed a very exciting job profile. As per research there are not enough Data Scientists in the world to handle the current data explosion. In the near future data is going to expand exponentially, and the need for Data Scientists will increase along with it. It is indeed the job to focus on if you like data and the science of statistics. Courtesy: emc

    Tomorrow

    In tomorrow’s blog post we will discuss various Big Data learning resources. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
