Search Results


  • Translator by Moth v2

    - by Daniel Moth
    If you are looking for the full manual for this Windows Phone app you can find it here: "Translator by Moth". While the manual has no images (just text), in this post I share screenshots, and if you like them, go get "Translator by Moth" from the Windows Phone marketplace. The screenshots show: opening the app from the app list or through a pinned tile (including secondary tiles for specific translations); the language picker (~40 languages); the "current" page; the "saved" page; and the "about" page. Like? Go get Translator by Moth! Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • How can I upgrade to 10.10 from 10.10 beta

    - by n179911
    Can you please tell me how I can upgrade to the Ubuntu 10.10 release from Ubuntu 10.10 beta? I have gone to Update Manager, but it keeps saying there is no update. And when I go to Synaptic Package Manager, I see this error: W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com maverick Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 16126D3A3E5C1192 W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/maverick/Release W: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • It’s time that you ought to know what you don’t know

    - by fatherjack
    There is a famous quote about unknown unknowns and known knowns and so on, but I’ll let you review that if you are interested. What I am worried about is that there are things going on in your environment that you ought to know about; indeed, you have asked to be told about them, but you are not getting the information. When you schedule a SQL Agent job you can set it to send an email to an inbox monitored by someone who needs to know and indeed can do something about it. However, what happens if the email process isn't successful? Check your servers with this:
    USE [msdb]
    GO
    /* This code selects the top 10 most recent SQL Agent jobs that failed to complete successfully
       and where the email notification failed too. Jonathan Allen Jul 2012 */
    DECLARE @Date DATETIME
    SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101')
    SELECT TOP 10 [s].[name], [sjh].[step_name], [sjh].[sql_message_id], [sjh].[sql_severity],
        [sjh].[message], [sjh].[run_date], [sjh].[run_time], [sjh].[run_duration],
        [sjh].[operator_id_emailed], [sjh].[operator_id_netsent], [sjh].[operator_id_paged],
        [sjh].[retries_attempted]
    FROM [dbo].[sysjobhistory] AS sjh
    INNER JOIN [dbo].[sysjobs] AS s ON [sjh].[job_id] = [s].[job_id]
    WHERE EXISTS ( SELECT *
                   FROM [dbo].[sysjobs] AS s
                   INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id]
                   WHERE [sjh].[job_id] = [s2].[job_id]
                   AND [s2].[message] LIKE '%failed to notify%'
                   AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @Date
                   AND [s2].[run_status] = 0 )
    AND sjh.[run_status] = 0
    AND sjh.[step_id] != 0
    AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [run_date])) >= @Date
    ORDER BY [sjh].[run_date] DESC, [sjh].[run_time] DESC
    GO
    USE [msdb]
    GO
    /* This code summarises details of SQL Agent jobs that failed to complete successfully
       and where the email notification failed too. Jonathan Allen Jul 2012 */
    DECLARE @Date DATETIME
    SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101')
    SELECT [s].name, [s2].[step_id],
        CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) AS [rundate],
        COUNT(*) AS [execution count]
    FROM [dbo].[sysjobs] AS s
    INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id]
    WHERE [s2].[message] LIKE '%failed to notify%'
    AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @Date
    AND [s2].[run_status] = 0
    GROUP BY name, [s2].[step_id], [s2].[run_date]
    ORDER BY [s2].[run_date] DESC
    These two result sets will show whether any SQL Agent jobs have run on your servers that failed and also failed to send the notification email about the failure. I hope it’s of use to you. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
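    A quick companion check, my own sketch rather than part of Jonathan's post: when job history says "failed to notify", the Database Mail tables in msdb can often tell you why the mail itself failed (this assumes the operator notifications go through Database Mail; the TOP 10 and the ordering are arbitrary choices).
    USE [msdb]
    GO
    -- Most recent mail items that Database Mail could not send,
    -- joined to the Database Mail log for the error description.
    SELECT TOP 10 fi.[mailitem_id],
        fi.[subject],
        fi.[send_request_date],
        el.[description] AS [mail_error]
    FROM [dbo].[sysmail_faileditems] AS fi
    LEFT OUTER JOIN [dbo].[sysmail_event_log] AS el
        ON fi.[mailitem_id] = el.[mailitem_id]
    ORDER BY fi.[send_request_date] DESC
    GO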

    Read the article

  • How to find entry level positions in a new city.

    - by sixtyfootersdude
    I am just graduating from a computer science degree (tomorrow is my last exam). I had been thinking about job hunting this semester, but I wanted to focus on my studies and part-time job, so I am a bit late on the job hunt. I want to find a job in a city where I have very little professional network (Ottawa, Ontario, Canada). How would you go about job hunting in a new city? I do not live there yet and I cannot easily go there, so that makes finding places to apply a bit trickier. Normally I would ask people that I studied and worked with, but I have few contacts in Ottawa. Where would you look to find jobs? I have been using: Craigslist; my university's job listings (but they are mostly focused on the east coast); and this job listing page: http://www.careerbeacon.com/. Anyone have any great job-finding resources?

    Read the article

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
    I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting, and they give me a good idea of what other readers might be thinking as well. After reading my earlier article Simple Example to Configure Resource Governor – Introduction to Resource Governor – I received an email from a reader and we exchanged a few emails. After exchanging emails we both figured out what was going on. It was indeed interesting, and the reader suggested that I blog about it. I asked for permission to publish his name but he does not like the attention, so we will just call him Jeff. I have converted our emails into a chat for easy consumption.
    Jeff: Your script does not work at all. I think there is a bug in SQL Server.
    Pinal: Would you please explain in detail?
    Jeff: Your code does not limit the CPU usage.
    Pinal: How did you measure it?
    Jeff: Well, we have third party tools for it, but let us say I have limited the resources for Reporting Services and used the script described in your blog. After that I ran only the reporting service workload and the CPU was still used well over 30%; it was not limited to 30% as described in your script. Clearly something is wrong somewhere.
    Pinal: Did you say you ONLY ran the reporting server load?
    Jeff: Yeah, to validate I ran ONLY the reporting server load and the CPU did not throttle at 30% as per your script.
    Pinal: Oh! I get it. Here is the answer - CAP_CPU_PERCENT = 30. Use it.
    Jeff: What is that? I think your earlier script says it will throttle the Reporting Service workload and the Application/OLTP workload and balance them.
    Pinal: Exactly, that is correct.
    Jeff: You need to write more in email, buddy! Just like your blogs, your answers do not make sense! No offense!
    Pinal: Hmm…feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements with regard to SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples.
    Configuration: [Read Earlier Post]
    Reporting Workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30
    Application/OLTP Workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100
    Example 1: If there is only the Reporting Workload on the server: SQL Server will not limit the workload to 30% CPU; the SQL Server instance will use all available CPU (if needed). In other words, in this scenario it will use more than 30% CPU.
    Example 2: If there is a Reporting Workload and a heavy Application/OLTP workload: SQL Server will allocate a maximum of 30% of CPU resources to the Reporting Workload and allocate the remaining resources to the heavy Application/OLTP workload.
    The reason for this enhancement is better utilization of the resources. Think about it: if there is only a single workload, which we have limited to a max CPU usage of 30%, the remaining unused CPU resource is wasted. In this situation SQL Server allows the workload to use more than 30% of resources, leading to overall improved/optimized performance. However, in the case of multiple workloads where lots of resources are needed, the limits specified in MAX_CPU_PERCENT are honored.
    Example 3: If there is a situation where the max CPU limit has to be enforced: This is a very interesting scenario. In the case when the max CPU limit has to be enforced irrespective of the workload and the enhanced algorithm, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive.
    A hard cap will never let the CPU usage for the reporting workload go over the specified value. You can use the keyword as follows:
    -- Creating Resource Pool for Report Server
    CREATE RESOURCE POOL ReportServerPool
    WITH ( MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30,
           CAP_CPU_PERCENT=40,
           MIN_MEMORY_PERCENT=0, MAX_MEMORY_PERCENT=30)
    GO
    Notice that MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40. What this means is that when the SQL Server instance is under heavy load from different workloads, the reporting workload will use a maximum of 30% CPU. However, when the SQL Server instance is not under load, the reporting workload can go over the 30% limit; because CAP_CPU_PERCENT is set to 40, it will never go over 40% in any case. CAP_CPU_PERCENT puts a hard limit on the resource usage of the workload.
    Jeff: Nice Pinal, you should blog about it.
    [A day passes by]
    Pinal: Jeff, it is done! Click here to read it.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
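    For completeness, here is a minimal sketch, my own addition rather than part of the email exchange, of how a pool like ReportServerPool actually gets traffic routed to it: a workload group plus a classifier function. The group name, function name, and the 'ReportUser' login are all hypothetical.
    USE master
    GO
    -- Workload group that uses the pool created above
    CREATE WORKLOAD GROUP ReportServerGroup USING ReportServerPool;
    GO
    -- Classifier: sessions logging in as 'ReportUser' (assumed login name)
    -- are sent to ReportServerGroup; everything else stays in 'default'.
    CREATE FUNCTION dbo.fnRGClassifier() RETURNS SYSNAME
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @GroupName SYSNAME = N'default';
        IF SUSER_SNAME() = N'ReportUser'
            SET @GroupName = N'ReportServerGroup';
        RETURN @GroupName;
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnRGClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO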

    Read the article

  • Super-Charge GIMP’s Image Editing Capabilities with G’MIC [Cross-Platform]

    - by Asian Angel
    Recently we showed you how to enhance GIMP’s image editing power and today we help you super-charge GIMP even more. G’MIC (GREYC’s Magic Image Converter) will add an impressive array of filters and effects to your GIMP installation for image editing goodness.
    Note: We applied the Contrast Swiss Mask filter to the image shown in the screenshot above to create a nice, warm sunset effect.
    To add the new PPA, open the Ubuntu Software Center, go to the Edit Menu, and select Software Sources. Access the Other Software Tab in the Software Sources Window and add the first of the PPAs shown below (outlined in red). The second PPA will be automatically added to your system. Once you have the new PPAs set up, go back to the Ubuntu Software Center and do a search for “G’MIC”. You will find two listings available and can select either one to add G’MIC to your system (both work equally well). Click on More Info for the listing that you choose and scroll down to where Add-ons are listed. Make sure to select the Add-on listed, click Apply Changes when it appears, and then click Install. We have both shown here for your convenience…
    When you get ready to use G’MIC to enhance an image, go to the Filters Menu and select G’MIC. A new window will appear where you can select from an impressive array of filters available for your use. Have fun!
    Command Line Installation
    For those of you who prefer using the command line for installation, use the following commands:
    sudo add-apt-repository ppa:ferramroberto/gimp
    sudo apt-get update
    sudo apt-get install gmic gimp-gmic
    Links
    Note: G’MIC is available for Linux, Windows, and Mac.
    G’MIC PPA at Launchpad [via Web Upd8]
    G’MIC Homepage at Sourceforge *Downloads for all three platforms available here.
    Bonus
    The anime wallpaper shown in the screenshots above can be found here: anime sport [DesktopNexus]

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions) and, at the end, provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL.
    Can you relate? In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms…
    SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name.
    In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps.
    When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map.
    Event Mapping
    The table dbo.trace_xe_event_map exists in the master database with the following structure:
    Column_name        Type
    trace_event_id     smallint
    package_name       nvarchar
    xe_event_name      nvarchar
    By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
    SELECT
        t.trace_event_id,
        t.name [event_class],
        e.package_name,
        e.xe_event_name
    FROM sys.trace_events t
    INNER JOIN dbo.trace_xe_event_map e
        ON t.trace_event_id = e.trace_event_id
    There are a couple of things you’ll notice as you peruse the output of this query:
    For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on.
    We’ve mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures.
    There are some Event Classes that did not make the cut and were not migrated. These fall into two categories. There were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger: with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. This is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)
    Action Mapping
    The table dbo.trace_xe_action_map exists in the master database with the following structure:
    Column_name        Type
    trace_column_id    smallint
    package_name       nvarchar
    xe_action_name     nvarchar
    You can find more details by joining this to sys.trace_columns on the trace_column_id field.
    SELECT
        c.trace_column_id,
        c.name [column_name],
        a.package_name,
        a.xe_action_name
    FROM sys.trace_columns c
    INNER JOIN dbo.trace_xe_action_map a
        ON c.trace_column_id = a.trace_column_id
    If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions.
    Putting it all together
    If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
    USE master;
    GO
    SELECT DISTINCT
        tb.trace_event_id,
        te.name AS 'Event Class',
        em.package_name AS 'Package',
        em.xe_event_name AS 'XEvent Name',
        tb.trace_column_id,
        tc.name AS 'SQL Trace Column',
        am.xe_action_name AS 'Extended Events action'
    FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em
        ON te.trace_event_id = em.trace_event_id)
    LEFT OUTER JOIN sys.trace_event_bindings tb
        ON em.trace_event_id = tb.trace_event_id
    LEFT OUTER JOIN sys.trace_columns tc
        ON tb.trace_column_id = tc.trace_column_id
    LEFT OUTER JOIN dbo.trace_xe_action_map am
        ON tc.trace_column_id = am.trace_column_id
    ORDER BY te.name, tc.name
    As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.
    USE master;
    GO
    DECLARE @trace_id int
    SET @trace_id = 1
    SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'
        , el.columnid, ec.xe_action_name AS 'action'
    FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
        LEFT OUTER JOIN dbo.trace_xe_event_map AS em
            ON el.eventid = em.trace_event_id)
    LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
        ON el.columnid = ec.trace_column_id
    WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL
    You’ll notice in the output that the list doesn’t include any of the security audit Event Classes; as I wrote earlier, those were not migrated.
    But wait…there’s more! If this were an infomercial there’d be some obnoxious guy next to me saying “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?” Needless to say, I’d reply, in an overly excited way, “You bet I can, obnoxious side-kick!” What I’ve got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
    Sample code: TraceToExtendedEventDDL
    Installing the procedure
    Just in case you’re not familiar with installing CLR procedures…once you’ve compiled the assembly you can load it using a script like this:
    -- Context to master
    USE master
    GO
    -- Create the assembly from a shared location.
    CREATE ASSEMBLY TraceToXESessionConverter
    FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
    WITH PERMISSION_SET = SAFE
    GO
    -- Create a stored procedure from the assembly.
    CREATE PROCEDURE CreateEventSessionFromTrace
        @trace_id int,
        @session_name nvarchar(max)
    AS EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
    GO
    Enjoy!
    -Mike
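    For a feel of what such DDL looks like, here is a small hand-written sketch, my own illustration rather than actual converter output, of an event session roughly equivalent to tracing SQL:BatchCompleted with the DatabaseName column attached. The session name and file path are arbitrary, and the target name follows the pre-RTM asynchronous_file_target naming in use around Denali CTP 1.
    -- Roughly: trace SQL:BatchCompleted + the DatabaseName column.
    -- Note that duration needs no Action; it is already an EventData
    -- field of the sql_batch_completed event.
    CREATE EVENT SESSION [trace_equivalent] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed
        (ACTION (sqlserver.database_name, sqlserver.session_id))
    ADD TARGET package0.asynchronous_file_target
        (SET filename = N'C:\Temp\trace_equivalent.xel')
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS)
    GO
    ALTER EVENT SESSION [trace_equivalent] ON SERVER STATE = START
    GO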

    Read the article

  • links for 2010-04-02

    - by Bob Rhubart
    Jeff Victor: Solaris Virtualization Book Jeff Victor with an update on the status of the book, "Oracle Solaris 10 System Virtualization Essentials." (tags: sun solaris virtualization) Mitch Denny: Architecture vs. Design It's an old post but it still resonates: "In the consumer electronics business, some people are actually hired to go through a system and remove components until it stops working – they do this to remove the cost before they go into mass production. We need more of this in the software business." -- Mitch Denny (tags: architecture design development) @vambenepe: Enterprise application integration patterns for IT management: a blast from the past or from the future? "In a recent blog post, Don Ferguson (CTO at CA) describes CA Catalyst, a major architectural overhaul which “applies enterprise application integration patterns to the problem of integrating IT management systems”. Reading this was fascinating to me. Not because the content was some kind of revelation, but exactly for the opposite reason. Because it is so familiar." -- William Vambenepe (tags: otn oracle eai)

    Read the article

  • A few things I learned regarding Azure billing policies

    - by Vincent Grondin
    An hour of small compute time: $0.12
    A GB of storage in the cloud: $0.15 per hour
    1 GB of relational database using SQL Azure: $9.99 per month
    A Visual Studio Professional with MSDN Premium subscription: $2,500 per year
    Winning an MSDN Professional account that comes preloaded with 750 free hours of Azure per month: PRICELESS!!!
    But was it really free???? Hmmm… Let’s see.....
    Here are a few things I learned regarding Azure billing policies when I attended a promotional training at Microsoft last week...
    1) An instance deployed in the cloud really means whatever you upload in there... it doesn't matter if it's in STAGING OR PRODUCTION!!!! Your MSDN account comes with 750 free hours of small compute time per month, which should be enough for one instance of one application deployed in the cloud... BUT the one that's in staging is still consuming time!!! So if you don’t want to end up having to pay $42 at the end of the month on your credit card, like what happened to a friend of mine, DELETE those staging applications once you’ve put them in production! This also applies to the instance count you can modify in the configuration file… So stop and think before you decide you want to spawn 50 of those hello world apps.
    2) If you have an MSDN account, then you have the promotional 750 hours of Azure credits per month and can use the Azure credits to explore the cloud! But be aware: this promotion ends in 8 months (maybe more like 7 now) and then you will most likely go back to the standard 250 hours of Azure credits. If you do not delete your applications by then, you’ll get billed for the extra hours, believe me… There is a switch that you can toggle which will STOP your automatic enrollment after the promotion and prevent you from renewing the Azure account automatically. Yes, the default setting is to automatically renew your account, and remember, you entered your credit card information in the registration process, so, yes, you WILL be billed… Go disable that ASAP. Log into your account, go to “Windows Azure Platform”, then click the “Subscriptions” tab; on the right side you’ll see a drop-down with different “Actions” in it… Choose “Opt out of auto renew” and NOW you’re safe…
    Still, this is a great offer by Microsoft and I think everyone that has a chance should play a bit with Azure to get to know this technology a bit more... Happy Cloud Computing All

    Read the article

  • Certificate error with Web Platform Installer

    - by findleyd
    A friend of mine was having an issue getting the Web Platform Installer to work on his Windows Server 2008 R2 box. He said there was some sort of certificate error and asked me to try https://go.microsoft.com/fwlink/?LinkId=158722 on my local machine to see if I got the cert error. I tried it and I did get a cert error on Windows 7 64-bit. I happened to notice that the URL simply redirects to https://www.microsoft.com/web/webpi/2.0/WebProductList.xml. Out of curiosity I dropped to a command line and tried to run .\WebPlatformInstaller.exe /? to see if there were any command line options. It gave an error that said invalid URI. So we tried running it with the product list URL, like: "WebPlatformInstaller.exe https://www.microsoft.com/web/webpi/2.0/WebProductList.xml". This seems to get around the expired cert that is on go.microsoft.com. Here's a screenshot of the error:

    Read the article

  • Upgraded to new Google Admob, now cannot resubmit Google Adsense application

    - by GPS
    I tried to apply for a Google AdSense account some time ago, but it was rejected due to some policy issues. Then I started using a legacy AdMob account for the same email id. It was working fine. But now Google has deprecated legacy AdMob, so I upgraded to the new Google AdMob. Now I want to resubmit my application for AdSense, but whenever I go to the link https://www.google.com/adsense/ it takes me to my homepage, where it shows the older message that my account was not approved. It does not show the option to resubmit the application. The other path it suggests is to go to the My Ads tab and then, under "Add AdSense for content", click Apply now, complete the AdSense application form, and click "Submit my application". But I cannot see "Submit my application" or "Add AdSense for content" in my My Ads tab. Can anybody please tell me what I should do? Thanks.

    Read the article

  • Limiting DOPs &ndash; Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. Dan is running a big system with Database Resource Manager and is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.
    Q: How do I control statements with very high DOPs driven from user hints in queries?
    A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to override the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime.
    Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.
    Q: What happens if I set parallel_max_servers higher than processes (e.g. the max number of processes allowed to run on my machine)?
    A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; parallel_max_servers will prevent this. And since parallel_max_servers should be lower than max processes, you can never go over either...
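    To make the consumer group cap concrete, here is a minimal PL/SQL sketch, my own addition; the plan name, group name, and DOP value are hypothetical, and the plan and consumer group must already exist:
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      -- Directive capping the group's DOP; this is the Max DOP
      -- setting discussed above.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan                     => 'DAYTIME_PLAN',    -- hypothetical plan
        group_or_subplan         => 'REPORTING_GROUP', -- hypothetical group
        comment                  => 'Cap reporting queries at DOP 32',
        parallel_degree_limit_p1 => 32);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /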

    Read the article

  • Silverlight Death Trolls Dancing on XAML&rsquo;s Grave

    - by D'Arcy Lussier
    I’m starting to see a whole bunch of tweets and blog posts on how Silverlight/WPF is dead, or how the XAML team has been disbanded at Microsoft, or how someone predicted Silverlight would die, blah blah blah. They all have a similar ring to it though: “Told ya so!” “They were stupid ideas anyway!” “Serves Microsoft right, boy are they dumb!” Let me tell you something: all those who are gleefully raving about Silverlight/WPF’s demise are nothing more than death trolls. Let’s assume that everything out there is true. Microsoft is obviously moving towards HTML 5 in a huge way (TechCrunch pointed out that SkyDrive has replaced its Silverlight based version with an HTML 5 one), and not just on the web, as we’ve seen with recent announcements about how HTML 5 apps will be natively supported on Windows 8. WPF never caught on in the marketplace, regardless of its superior technology offering to Winforms. And Silverlight…well, it gave Flash a good run for its money, but plug-in based web applications are becoming passé in light of HTML 5. (It’s interesting that at a developer conference I put on just a few weeks ago, only 1 out of 60+ sessions included Silverlight. 5 focused on HTML 5.) So what does this *death* of Silverlight/WPF/XAML mean then in the grand scheme of things (again, assuming that they truly *are* dying/dead)? Well, nothing really…at least nothing bad. Silverlight has given us some fantastic applications and experiences (Vancouver Olympics anyone?), and WP7 couldn’t have launched without Silverlight as its development platform. And WPF, although it had putrid adoption, has had some great success stories. A Canadian company that I talked to recently showed me how they re-wrote their point-of-sale application entirely in WPF, and the product is a huge success providing features their competitors aren’t. Arguably (and I say that only because I know I’m going to get WTF comments for this), VS.NET 2010 is a great example of what a WPF app can provide over previous C++ based applications. Technologies evolve. In a decade we’ve had 5 versions of the .NET framework, seen languages like J# come and go, seen F# appear, seen communications layers change with WCF, seen EF go through multiple evolutions and traditional ADO.NET Datasets go extinct (from actual use anyway), and ASP.NET Webforms be replaced with ASP.NET MVC as a preferred web platform. Are Silverlight and WPF done? Maybe…probably?…thing is, it doesn’t really affect me personally in any way, or you…so why would we care if it gets replaced with something better and more robust that we can build better solutions with? Just remember the golden rule: don’t feed the trolls.

    Read the article

  • Google I/O 2010 - Optimize your site with Page Speed

    Google I/O 2010 - Optimize every bit of your site serving and web pages with Page Speed. Tech Talk by Richard Rabbat and Bryan McQuade. Page Speed is an open-source Firefox/Firebug add-on. Webmasters and web developers can use Page Speed to evaluate the performance of their web pages and to get suggestions on how to improve them. Learn about the latest rules of web development we've added, updated optimizations, go over a refreshed UI, see how to collect data through beacons to track progress over time, cut-and-paste fixes, and how to work with 3rd party libraries more effectively, including Google Analytics. For all I/O 2010 sessions, please go to code.google.com/events/io/2010/sessions.html

    Read the article

  • When Less is More

    - by aditya.agarkar
    How do you reconcile the fact that while the overall warehouse volume is down, you still need more workers in the warehouse to ship all the orders? A WMS customer recently pointed out this seemingly perplexing fact at a customer conference. So what is going on? Didn't we tell you before that for a warehouse the customer is really the "king"? In this case customers are merely responding to low overall demand and uncertainty. They do not want to hold down inventory, and one of the ways to do that is by decreasing the order size and ordering more frequently. Overall impact to the warehouse? Two words: "More work!!" This is not all. Smaller order sizes also mean challenges from a transportation perspective, including a rise in costlier parcel or LTL shipments instead of cheaper TL shipments. Here is a hypothetical scenario where a customer reduces the order size by 10% and increases the order frequency by 10%. As you can see in the following table, the overall volume declines by 1% but the warehouse has to ship roughly 10% more lines.
    Order Frequency (Line Count) | Order Size (Units) | Total Volume | Change (%)
    100                          | 100                | 10,000       | -
    110                          | 90                 | 9,900        | -1%
    If you want to see how "Less is More" in graphical terms, this is how it appears: Even though the volume is down, there is going to be more work in the warehouse in terms of the number of lines shipped. The operators need to pick more discrete orders, pack them into more shipping containers and ship more deliveries. What do you do differently if you are facing this situation? In this case here are some obvious steps to take:
    Uno: Change your pick methods. If you are used to doing order picks, that needs to go out the door. You need to evaluate batch picking and grouping techniques. Go for cluster picking, go for zone picking, pick and pass...anything that improves your picker productivity. More than anything, cluster picking works like a charm and, above all, it's simple and very effective.
    Dos: Are you minimizing "touch" points in your pick process? Consider doing one-step pick, pack and confirm, i.e. pick and pack stuff directly into shipping cartons. Done correctly, the container will not require any more "touch" points all the way to the trailer loading. Use cartonization!
    Tres: Are the items being picked from an optimized pick face? Are the items slotted correctly? This needs to be looked into. Consider automated "pull" or "push" replenishment into your pick face and also make sure that high-demand items occupy the golden zones.
    Cuatro: Are you tracking labor productivity? If not, there needs to be a concerted push for having labor standards in place.
    Hope you found these ideas useful.

    Read the article

  • SQL SERVER – Display Datetime in Specific Format – SQL in Sixty Seconds #033 – Video

    - by pinaldave
    A very common requirement of developers is to format datetime values to their specific need. Every geographic location has different date format needs: some countries follow the standard of mm/dd/yy and some countries dd/mm/yy, and the developer's need changes as the geographic location changes. In SQL Server there are various functions to aid this requirement. There is CAST, which developers have been using for a long time, as well as CONVERT, which is a more enhanced version of CAST. In the latest version, SQL Server 2012, a new function FORMAT is introduced as well. In this SQL in Sixty Seconds video we cover two different methods to display datetime in a specific format: 1) the CONVERT function and 2) the FORMAT function. Let me know what you think of this video. Here is the script which is used in the video:
    -- http://blog.SQLAuthority.com
    -- SQL Server 2000/2005/2008/2012 onwards
    -- Datetime
    SELECT CONVERT(VARCHAR(30),GETDATE()) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),10) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),110) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),5) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),105) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),113) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),114) AS DateConvert;
    GO
    -- SQL Server 2012 onwards
    -- Various formats of Datetime
    SELECT CONVERT(VARCHAR(30),GETDATE(),113) AS DateConvert;
    SELECT FORMAT ( GETDATE(), 'dd MMM yyyy HH:mm:ss:fff', 'en-US' ) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),114) AS DateConvert;
    SELECT FORMAT ( GETDATE(), 'HH:mm:ss:fff', 'en-US' ) AS DateConvert;
    GO
    -- Specific usage of the FORMAT function
    SELECT FORMAT(GETDATE(), N'"Current Time is "dddd MMMM dd, yyyy', 'en-US') AS CurrentTimeString;
    This video discusses CONVERT and FORMAT in a simple manner, but the subject is much deeper and there is a lot of information to cover along with it. I strongly suggest that you go over the related blog posts in the next section, as there is a wealth of knowledge discussed there.
    Related Tips in SQL in Sixty Seconds:
    Get Date and Time From Current DateTime – SQL in Sixty Seconds #025
    Retrieve – Select Only Date Part From DateTime – Best Practice
    Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime
    DATE and TIME in SQL Server 2008
    Function to Round Up Time to Nearest Minutes Interval
    Get Date Time in Any Format – UDF – User Defined Functions
    Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2
    Difference Between DATETIME and DATETIME2
    Saturday Fun Puzzle with SQL Server DATETIME2 and CAST
    What would you like to see in the next SQL in Sixty Seconds video?
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about “Einstein’s Razor” yesterday, which reminded me of my favorite software development “law”, the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter’s Law – see below), but along the way I compiled the following list:
    Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
    Brooks’ Law: Adding manpower to a late software project makes it later.
    Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.
    Law of Demeter: Each unit should only talk to its friends; don't talk to strangers.
    Einstein’s Razor: “Make things as simple as possible, but not simpler” is the popular paraphrase, but what he actually said was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”, an overly complicated quote which is an obvious violation of Einstein’s Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.)
    Finagle's Law of Dynamic Negatives: Anything that can go wrong, will—at the worst possible moment. (O'Toole's Corollary: The perversity of the Universe tends towards a maximum.)
    Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris’s Corollary: “…including Common Lisp”)
    Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
    Issawi’s Omelet Analogy: One cannot make an omelet without breaking eggs - but it is amazing how many eggs one can break without making a decent omelet.
    Jackson’s Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.
    Kaner’s Caveat: A program which perfectly meets a lousy specification is a lousy program.
    Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
    Mason’s Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do.
    Nils-Peter Nelson’s Nil I/O Rule: The fastest I/O is no I/O.
    Occam's Razor: The simplest explanation is usually the correct one.
    Parkinson’s Law: Work expands so as to fill the time available for its completion.
    Quentin Tarantino’s Pie Principle: “…you want to go home have a drink and go and eat pie and talk about it.” (OK, he was talking about movies, not software, but I couldn’t find a “Q” quote about software. And wouldn’t it be cool to write a program so great that the users want to eat pie and talk about it?)
    Raymond’s Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.
    Sowa's Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X.
    Turing’s Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.
    Udi Dahan’s Race Condition Rule: If you think you have a race condition, you don’t understand the domain well enough. These rules didn’t exist in the age of paper; there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out the actual rules.
    Van Vleck’s Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer -- and keep pouring our sewage into our water supply.
    Wheeler’s Law: All problems in computer science can be solved by another level of indirection... Except for the problem of too many layers of indirection. Wheeler also said “Compatibility means deliberately repeating other people's mistakes.”
    The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you've gone, turn back.
    Yourdon’s Rule of Two Feet: If you think your management doesn't know what it's doing or that your organisation turns out low-quality software crap that embarrasses you, then leave.
    Zawinski's Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for “Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems.” He once commented about X Windows widget toolkits: “Using these toolkits is like trying to make a bookshelf out of mashed potatoes.”

    Read the article

  • IIS7 Handler Mapping Migration from Sites Config to Server Config [migrated]

    - by Danomite
    We have a bunch of sites running with about 8 handler mappings in their web.config files. Unfortunately, they were getting copied site to site every time a new one was added. Now the time has come for me to get these out of all the web.config files and into the server's Handler Mappings. If I add a mapping to the server while it still exists in the web.config, IIS throws an error when you browse to the site. I have a few dozen web.config files to edit here, with about 10 mappings in each. Is there a way to add these mappings to the server without having to go in and edit each web.config file manually? Otherwise, every site will be down for a few minutes while I go into each file and remove the handlers. Thanks!

    Read the article

  • Creating Entity as an aggregation

    - by Jamie Dixon
    I recently asked about how to separate entities from their behaviour and the main answer linked to this article: http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/ The ultimate concept written about here is that of OBJECT AS A PURE AGGREGATION. I'm wondering how I might go about creating game entities as pure aggregation using C#. I've not quite grasped the concept of how this might work yet. (Perhaps the entity is an array of objects implementing a certain interface or base type?) My current thinking still involves having a concrete class for each entity type that then implements the relevant interfaces (IMoveable, ICollectable, ISpeakable, etc.). How can I go about creating an entity purely as an aggregation without having any concrete type for that entity?

    Read the article

  • SQL In The City Charlotte - Fundamentals of Database Design

    - by drsql
    Next Monday, October 14, at Red Gate's SQL In The City conference in Charlotte, NC (one day before PASS), I will be presenting my Fundamentals of Database Design session. It is my big-time chestnut session, the one that I do the most and have the most fun with. This will be the "single" version of the session, weighing in at just under an hour, and it is a lot of material to go over (even with no code samples to go awry and take up time). In this hour-long session (presented in widescreen...(read more)

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I worked on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message "list view threshold is exceeded". However, the view had no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result set still requires reading more than 5000 items in the database. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.
    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold in the web application from 5000 to 2000 so that I can show the problem without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the message shown below although I do not have any data returned for this view.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created, I can now access the approved view; as you can see, it does not return any data.
    Notes: List View Threshold: Specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.
    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx

    Read the article

  • PASS Summit 2010 Recap

    - by AjarnMark
    Last week I attended my eighth PASS Summit in nine years, and every year it is a fantastic event!  I was fortunate my first year to have a contact (Bill Graziano (blog | Twitter) from SQLTeam) that I was expecting to meet, and who got me started on a good track of making new contacts.  Each year I have made a few more, and renewed friendships from years past.  Many of the attendees agree that the pure networking opportunities are one of the best benefits of attending the Summit.  And there’s a lot of great technical stuff, too. Some of the things that stick out for me this year include…
    Pre-Con Monday: PowerShell with Allen White (blog | Twitter).  This was the first time that I attended a pre-con.  For those not familiar with the concept, the regular sessions for the conference are 75-90 minutes long.  For an extra fee, you can attend a full-day session on a single topic during a pre- or post-conference training day.  I had been meaning for several months to dive in and learn PowerShell, but just never seemed to find (or make) the time for it, so when I saw this was one of the all-day sessions, and I was planning to be there on Monday anyway, I decided to go for it.  And it was well worth it!  I definitely came out of there with a good foundation to build my own PowerShell scripts, plus several sample scripts that he showed which already cover the first four or five things I was planning to do with PowerShell anyway.  This looks like the right tool for me to build an automated version of our software deployment process, which right now contains many repeated steps.  Thanks Allen!
    Service Broker with Denny Cherry (blog | Twitter).  I remembered reading Denny’s blog post on Using Service Broker instead of Replication, and ever since then I have been thinking about using this to populate a new reporting-focused Data Repository that we will be building in the near future.  When I saw he was doing this session, I thought it would be great to get more information and be able to ask the author questions.  When I brought this idea back to my boss, he really liked it, as we had previously been discussing doing nightly data loads, with an option to manually trigger a mid-day load if up-to-the-minute data was needed for something.  If we go the Service Broker route, we can keep the Repository current in near real-time.  Hooray!
    DBA Mythbusters with Paul Randal (blog | Twitter).  Even though I read every one of the posts in Paul’s blog series of the same name, I had to go see the legend in person.  It was great, and I still learned something new!
    How to Conduct Effective Meetings with Joe Webb (blog | Twitter).  I always like to sit in on a session that Joe does.  I met Joe several years ago when both he and Bill Graziano were on the PASS Board of Directors together, and we have kept in touch.  Joe is very well-spoken and has great experience with both SQL Server and business.  And we could certainly use some pointers at my work (probably yours, too) on making our meetings more effective and keeping them on time.  Of course, now that I’m the Chapter Leader for the Professional Development virtual chapter, I also had to sit in on this ProfDev session and recruit Joe to do a presentation or two for the chapter next year.
    Query Optimization with David DeWitt.  Anyone who has seen Dr. David DeWitt present the 3rd keynote at a PASS Summit over the last three years knows what a great time it is to sit and listen to him make a really complicated and advanced topic easy to understand (although it still makes your head hurt).  It still amazes me that the simple two-table join query from pubs that he used in his example can have 22 million possible physical query plans.  Ouch!
    Exhibit Hall:  This year I spent more serious time in the exhibit hall than any year past.  I have talked my boss into making a significant (for us) investment in monitoring tools next year, and this was a great opportunity to talk with all the big hitters.  Readers of mine may recall that I fell in love with the SQL Sentry Power Suite several months ago and wrote a blog entry about it just from the trial version.  Well, as things turned out, short-term budget priorities shifted, and we weren’t able to make the purchase then.  I have it in the budget for next year, but since I was going to the Summit, my boss wanted me to look at the other options to see if this was really the one we wanted.  I spent a couple of hours talking with representatives from Red Gate, Idera, Confio, and Quest about their offerings, and giving them each the same 3 scenarios that I wanted to be able to accomplish based on the questions and issues that arise in our company.  It was interesting to discover the different approaches or “world view” that each vendor takes to the subject of performance monitoring and troubleshooting.  I may write a separate article that goes into this in more depth, but the product that best aligned with our point of view and met our current needs is still the SQL Sentry Power Suite.  I’m not saying that the others are bad or wrong or anything like that, just that the way they tackled the issue did not align as well with our particular needs as does SQL Sentry’s product.  And that was something I learned too: when you go shopping for these products, you really need to know what you want to get from them.  It’s best if you have a few example scenarios from work that you can use to test out how well each tool fits your particular needs.
    Overall, another GREAT event.  I can’t wait to get the DVDs so I can sit in on a bunch of other sessions that I couldn’t get to because I was in one of the ones above.  And I can hardly wait until next year!

    Read the article

  • Friday Fun: Snow Crusher

    - by Asian Angel
    It has probably been a long week, whether you have already returned to work or are finishing up the last of your vacation time. If you are in need of some stress relief, then we have the perfect game for you. This week you get to be totally fiendish and use a monster size snowball to destroy as many cars as possible at the local snow lodge.

    Snow Crusher

    The object of the game is simple… create as large of a monster snowball as you can and then send it down over the side of the mountain to destroy the cars at the snow lodge. You can choose from three different sizes of monster snowballs to create; we chose the "Snowflake Size" for our reign of destruction. Once you have chosen a monster snowball size, all that is left to do is select the control method that works best for you. As soon as you select the control method, your monster snowball creation will automatically begin. Keep in mind that the faster your snowball goes, the harder it can become to steer if you make sudden movements…

    At the top you can watch your progress towards the drop-off point, and the green boxes highlighted at the bottom indicate how large of an item (such as trees or boulders) your snowball can roll over and add to its total mass. Snowball speed is shown in the lower right corner. Time to roll!

    As soon as the first green box is lit up, you can start adding small trees to your snowball's mass. You will want to avoid larger items as you go, because they will penalize your score, slow you down, and reduce the size of your snowball! Halfway to the drop-off point, our snowball is now able to grab up larger trees. If you have not hit any large items along the way, your snowball will definitely be moving along at a good rate by now. When you reach the end of the mass-building area, your snowball will pop out into the open and get ready to drop off over the side of the mountain. Go snowball go!

    Yes! Thirteen cars crushed and ready for the scrap yard… If the "Snowflake Size" snowball can do this, just think what the "Avalanche Size" can do with three minutes of time to build up mass! Have fun with those monster snowballs!

    Play Snow Crusher

    Read the article

  • SQL SERVER – Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies

    - by pinaldave
    Two very common questions which I often receive are: How do I find all the tables used in a particular stored procedure? How do I know which stored procedures are using a particular table? Both are valid questions, but before we see the answers, let us understand two small concepts – Referenced and Referencing. Here is a sample stored procedure.

    CREATE PROCEDURE mySP
    AS
    SELECT *
    FROM Sales.Customer
    GO

    Referenced: The table Sales.Customer is the referenced object, as it is being referenced in the stored procedure mySP.

    Referencing: The stored procedure mySP is the referencing object, as it is referencing the Sales.Customer table.

    Now that we know what referenced and referencing objects are, let us run the following queries. I am using AdventureWorks2012 as a sample database. If you do not have SQL Server 2012, here is the way to get the SQL Server 2012 AdventureWorks database.

    Find Referencing Objects of a particular object

    Here we are finding all the objects which are using the table Customer in their object definitions (regardless of the schema).

    USE AdventureWorks2012
    GO
    SELECT  referencing_schema_name = SCHEMA_NAME(o.SCHEMA_ID),
            referencing_object_name = o.name,
            referencing_object_type_desc = o.type_desc,
            referenced_schema_name,
            referenced_object_name = referenced_entity_name,
            referenced_object_type_desc = o1.type_desc,
            referenced_server_name,
            referenced_database_name
            --,sed.* -- Uncomment for all the columns
    FROM    sys.sql_expression_dependencies sed
            INNER JOIN sys.objects o ON sed.referencing_id = o.[object_id]
            LEFT OUTER JOIN sys.objects o1 ON sed.referenced_id = o1.[object_id]
    WHERE   referenced_entity_name = 'Customer'

    The above query will return all the objects which are referencing the table Customer.

    Find Referenced Objects of a particular object

    Here we are finding all the objects which are used in the view vIndividualCustomer.

    USE AdventureWorks2012
    GO
    SELECT  referencing_schema_name = SCHEMA_NAME(o.SCHEMA_ID),
            referencing_object_name = o.name,
            referencing_object_type_desc = o.type_desc,
            referenced_schema_name,
            referenced_object_name = referenced_entity_name,
            referenced_object_type_desc = o1.type_desc,
            referenced_server_name,
            referenced_database_name
            --,sed.* -- Uncomment for all the columns
    FROM    sys.sql_expression_dependencies sed
            INNER JOIN sys.objects o ON sed.referencing_id = o.[object_id]
            LEFT OUTER JOIN sys.objects o1 ON sed.referenced_id = o1.[object_id]
    WHERE   o.name = 'vIndividualCustomer'

    The above query will return all the objects which are referenced by the view vIndividualCustomer.

    I am just glad to write the above queries. There is more to write on this subject; in a future blog post I will write in more depth about other DMVs which also aid in finding dependency data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
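    To make the Referenced/Referencing pairing from the mySP example above concrete, here is a minimal verification sketch. It assumes mySP has been created in the current database; the dbo schema qualifier is an assumption, since the example did not specify a schema.

    -- Minimal verification sketch: list everything mySP references.
    -- Assumption: mySP was created in the dbo schema of the current database.
    SELECT  referenced_schema_name,
            referenced_entity_name
    FROM    sys.sql_expression_dependencies
    WHERE   referencing_id = OBJECT_ID('dbo.mySP')
    -- Expected: a single row showing Sales / Customer, i.e. the
    -- referenced object of the referencing procedure mySP.

    If the query returns no rows, the usual cause is that OBJECT_ID resolved to NULL because the procedure does not exist under that name in the current database.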

    Read the article

  • Dual booting 12.10 and Win 7 - boots directly to Win 7

    - by user110174
    and thank you kindly for your help! I'll preface this with saying that I realize this is a common problem, with lots of troubleshooting guides available online; however, after multiple attempts with different guides, I've made zero progress and am hoping someone could help me with my specific scenario.

    First, my story: Initially, I installed Ubuntu 12.10 with the "Something Else" option with no problems, using a 4 GB swap logical partition and a 26 GB primary root partition. Wanting to try out Mint 13, I booted into Windows from GRUB2, used the latest version of EasyBCD (v2.2) to restore the Windows 7 bootloader to the MBR, deleted the Ubuntu partitions, and reformatted them as NTFS. I then created a 30 GB partition of free space for Mint. I installed Mint using the same partitioning described above, using /dev/sda for the boot installation files, and everything seemed to go well, until I rebooted my computer and it went straight to Windows; I could find no way to get into Mint. So I went into Windows, restored the Windows bootloader to the MBR with EasyBCD, deleted the partitions, etc., as I figured I'd done enough messing around and would go with Ubuntu 12.10.

    Now the problem: I restarted my computer, booting from the same Ubuntu USB key I originally used. Briefly, 'error: "prefix" is not set' flashed on screen, and instead of being greeted with the GUI menu of "try vs. install Ubuntu", there was a menu with minimal graphics (like a BIOS menu) where I could select install, run from USB, etc. After selecting "Install Ubuntu", the familiar install wizard with a GUI came up. I partitioned my drive as described, used /dev/sda for the boot installation files, the install went well, I rebooted and... straight to Windows. This is where I'm at.

    Fixes I've tried:
    - This guide: How can I repair grub? (How to get Ubuntu back after installing Windows?), to ensure GRUB is on the MBR. I followed all steps, but I still go directly into Windows when I reboot.
    - Installing 12.04 instead of 12.10 - same issue.
    - Re-installing Ubuntu, writing the boot files to their own partition, then using EasyBCD to add a boot option for Ubuntu using the Windows bootloader, ensuring I instruct EasyBCD to look at the partition I created with the Ubuntu installer (instructions here: http://neosmart.net/wiki/display/EBCD/Ubuntu). When I reboot and select the Ubuntu option, it puts me in GRUB4DOS with a cursor waiting for input. I have no idea what to put here, so I would just type "reboot" to exit out. And this is where I am now.

    Any clue as to why I can't boot into Ubuntu? My computer specs are: ASUS UX31A, Core i7, Win 7 64 Pro, 256 GB SSD, Intel HM76 chipset with integrated Intel HD 4000 graphics, 4 GB memory.

    I've tried to be as clear as possible, but I'd be happy to provide any info that would help anyone along. Thanks for your patience in reading this!

    Sincerely, -MN

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >