Search Results

Search found 2010 results on 81 pages for 'scan'.

  • Windows 7,find who accessed my computer on network?

    - by pg2012
    It seems something tried to delete the root C:\ folders on my computer, and it had started the deletion in alphabetical order. I'm not sure of the cause. Is this any known virus? We have the company network edition of our antivirus running, and a full scan did not find any virus activity. That makes me suspicious that someone on the company network attempted the deletion. Is this possible? If it is, how can we tell from Windows Server who accessed my computer over the network?
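
    For the "who accessed my computer" part, a starting point (a sketch we are adding, not from the original thread; these are standard Windows tools, run from an elevated command prompt):

        net session
            (lists the computers and users that currently have sessions open against this machine)
        openfiles /query
            (lists files on this machine that remote users currently have open)
        auditpol /set /subcategory:"File Share" /success:enable
            (from then on, successful share access is logged to the Security event log as Event ID 5140, viewable in Event Viewer)

    Note that the audit events only cover access from the moment auditing is enabled onwards.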

    Read the article

  • Problem Implementing StructureMap in VB.Net Conversion of SharpArchitecture

    - by Monkeeman69
    I work in a VB.Net environment and have recently been tasked with creating an MVC environment to use as a base to work from. I decided to convert the latest SharpArchitecture release (Q3 2009) into VB, which on the whole has gone fine after a bit of hair pulling. I came across a problem with Castle Windsor where my custom repository interface (which lives in the core/domain project) that was referenced in the constructor of my test controller was not getting injected with the concrete implementation (from the data project). I hit a brick wall with this, so I basically decided to switch out Castle Windsor for StructureMap. I think I have implemented this OK, as everything compiles and runs, and my controller ran fine when referencing a custom repository interface. It appears now that I cannot set up my generic interfaces properly (I hope this makes sense so far, as I am new to all this). When I use IRepository(Of T) (wanting it to be injected with a concrete implementation of Repository(Of Type)) in the controller constructor, I am getting the following runtime error:

        StructureMap Exception Code: 202
        No Default Instance defined for PluginFamily SharpArch.Core.PersistenceSupport.IRepository`1[[DebtRemedy.Core.Page, DebtRemedy.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], SharpArch.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b5f559ae0ac4e006

    Here are the code excerpts that I am using (my project is called DebtRemedy).

    My StructureMap registry class:

        Public Class DefaultRegistry
            Inherits Registry

            Public Sub New()
                ''//Generic Repositories
                AddGenericRepositories()
                ''//Custom Repositories
                AddCustomRepositories()
                ''//Application Services
                AddApplicationServices()
                ''//Validator
                [For](GetType(IValidator)).Use(GetType(Validator))
            End Sub

            Private Sub AddGenericRepositories()
                ''//ForRequestedType(GetType(IRepository(Of ))).TheDefaultIsConcreteType(GetType(Repository(Of )))
                [For](GetType(IEntityDuplicateChecker)).Use(GetType(EntityDuplicateChecker))
                [For](GetType(IRepository(Of ))).Use(GetType(Repository(Of )))
                [For](GetType(INHibernateRepository(Of ))).Use(GetType(NHibernateRepository(Of )))
                [For](GetType(IRepositoryWithTypedId(Of ,))).Use(GetType(RepositoryWithTypedId(Of ,)))
                [For](GetType(INHibernateRepositoryWithTypedId(Of ,))).Use(GetType(NHibernateRepositoryWithTypedId(Of ,)))
            End Sub

            Private Sub AddCustomRepositories()
                Scan(AddressOf SetupCustomRepositories)
            End Sub

            Private Shared Sub SetupCustomRepositories(ByVal y As IAssemblyScanner)
                y.Assembly("DebtRemedy.Core")
                y.Assembly("DebtRemedy.Data")
                y.WithDefaultConventions()
            End Sub

            Private Sub AddApplicationServices()
                Scan(AddressOf SetupApplicationServices)
            End Sub

            Private Shared Sub SetupApplicationServices(ByVal y As IAssemblyScanner)
                y.Assembly("DebtRemedy.ApplicationServices")
                y.With(New FirstInterfaceConvention)
            End Sub
        End Class

        Public Class FirstInterfaceConvention
            Implements ITypeScanner

            Public Sub Process(ByVal type As Type, ByVal graph As PluginGraph) Implements ITypeScanner.Process
                ''//only works on concrete types
                If Not IsConcrete(type) Then
                    Exit Sub
                End If
                Dim firstinterface = type.GetInterfaces().FirstOrDefault() ''//grabs first interface
                If firstinterface IsNot Nothing Then
                    graph.AddType(firstinterface, type) ''//registers type
                Else
                    ''//adds concrete types with no interfaces
                    graph.AddType(type)
                End If
            End Sub
        End Class

    I have tried both ForRequestedType (which I think is now deprecated) and For. IRepository(Of T) lives in SharpArch.Core.PersistenceSupport. Repository(Of T) lives in SharpArch.Data.NHibernate.

    My service locator class:

        Public Class StructureMapServiceLocator
            Inherits ServiceLocatorImplBase

            Private container As IContainer

            Public Sub New(ByVal container As IContainer)
                Me.container = container
            End Sub

            Protected Overloads Overrides Function DoGetInstance(ByVal serviceType As Type, ByVal key As String) As Object
                Return If(String.IsNullOrEmpty(key), container.GetInstance(serviceType), container.GetInstance(serviceType, key))
            End Function

            Protected Overloads Overrides Function DoGetAllInstances(ByVal serviceType As Type) As IEnumerable(Of Object)
                Dim objList As New List(Of Object)
                For Each obj As Object In container.GetAllInstances(serviceType)
                    objList.Add(obj)
                Next
                Return objList
            End Function
        End Class

    My controller factory class:

        Public Class ServiceLocatorControllerFactory
            Inherits DefaultControllerFactory

            Protected Overloads Overrides Function GetControllerInstance(ByVal requestContext As RequestContext, ByVal controllerType As Type) As IController
                If controllerType Is Nothing Then
                    Return Nothing
                End If
                Try
                    Return TryCast(ObjectFactory.GetInstance(controllerType), Controller)
                Catch generatedExceptionName As StructureMapException
                    System.Diagnostics.Debug.WriteLine(ObjectFactory.WhatDoIHave())
                    Throw
                End Try
            End Function
        End Class

    The initialise stuff in my global.asax:

        Dim container As IContainer = New Container(New DefaultRegistry)
        ControllerBuilder.Current.SetControllerFactory(New ServiceLocatorControllerFactory())
        ServiceLocator.SetLocatorProvider(Function() New StructureMapServiceLocator(container))

    My test controller:

        Public Class DataCaptureController
            Inherits BaseController

            Private ReadOnly clientRepository As IClientRepository()
            Private ReadOnly pageRepository As IRepository(Of Page)

            Public Sub New(ByVal clientRepository As IClientRepository(), ByVal pageRepository As IRepository(Of Page))
                Check.Require(clientRepository IsNot Nothing, "clientRepository may not be null")
                Check.Require(pageRepository IsNot Nothing, "pageRepository may not be null")
                Me.clientRepository = clientRepository
                Me.pageRepository = pageRepository
            End Sub

            Function Index() As ActionResult
                Return View()
            End Function

    The above works fine when I take out everything to do with the pageRepository, which is IRepository(Of T). Any help with this would be greatly appreciated.

    Read the article

  • How to find the insertion point in an array using binary search?

    - by ????
    The basic idea of binary search in an array is simple, but it might return an "approximate" index if the search fails to find the exact item (we might sometimes get back an index for which the value is larger or smaller than the searched value). To find the exact insertion point, it seems that after we get the approximate location, we might need to "scan" to the left or right for the exact insertion location, so that, say, in Ruby, we can do arr.insert(exact_index, value).

    I have the following solution, but the handling for the part where begin_index >= end_index is a bit messy. I wonder if a more elegant solution can be used? (This solution doesn't care to scan for multiple matches if an exact match is found, so the index returned for an exact match may point to any index that corresponds to the value... but I think if they are all integers, we can always search for a - 1 after we know an exact match is found, to find the left boundary, or search for a + 1 for the right boundary.)

    My solution:

        DEBUGGING = true

        def binary_search_helper(arr, a, begin_index, end_index)
          middle_index = (begin_index + end_index) / 2
          puts "a = #{a}, arr[middle_index] = #{arr[middle_index]}, " +
               "begin_index = #{begin_index}, end_index = #{end_index}, " +
               "middle_index = #{middle_index}" if DEBUGGING
          if arr[middle_index] == a
            return middle_index
          elsif begin_index >= end_index
            index = [begin_index, end_index].min
            return index if a < arr[index] && index >= 0  # careful because -1 means end of array
            index = [begin_index, end_index].max
            return index if a < arr[index] && index >= 0
            return index + 1
          elsif a > arr[middle_index]
            return binary_search_helper(arr, a, middle_index + 1, end_index)
          else
            return binary_search_helper(arr, a, begin_index, middle_index - 1)
          end
        end

        # for [1,3,5,7,9], searching for 6 will return index for 7 for insertion
        # if exact match is found, then return that index
        def binary_search(arr, a)
          puts "\nSearching for #{a} in #{arr}" if DEBUGGING
          return 0 if arr.empty?
          result = binary_search_helper(arr, a, 0, arr.length - 1)
          puts "the result is #{result}, the index for value #{arr[result].inspect}" if DEBUGGING
          return result
        end

        arr = [1,3,5,7,9]
        b = 6
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = [1,3,5,7,9,11]
        b = 6
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = [1,3,5,7,9]
        b = 60
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = [1,3,5,7,9,11]
        b = 60
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = [1,3,5,7,9]
        b = -60
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = [1,3,5,7,9,11]
        b = -60
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = [1]
        b = -60
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = [1]
        b = 60
        arr.insert(binary_search(arr, b), b)
        p arr

        arr = []
        b = 60
        arr.insert(binary_search(arr, b), b)
        p arr

    And the result:

        Searching for 6 in [1, 3, 5, 7, 9]
        a = 6, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2
        a = 6, arr[middle_index] = 7, begin_index = 3, end_index = 4, middle_index = 3
        a = 6, arr[middle_index] = 5, begin_index = 3, end_index = 2, middle_index = 2
        the result is 3, the index for value 7
        [1, 3, 5, 6, 7, 9]

        Searching for 6 in [1, 3, 5, 7, 9, 11]
        a = 6, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2
        a = 6, arr[middle_index] = 9, begin_index = 3, end_index = 5, middle_index = 4
        a = 6, arr[middle_index] = 7, begin_index = 3, end_index = 3, middle_index = 3
        the result is 3, the index for value 7
        [1, 3, 5, 6, 7, 9, 11]

        Searching for 60 in [1, 3, 5, 7, 9]
        a = 60, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2
        a = 60, arr[middle_index] = 7, begin_index = 3, end_index = 4, middle_index = 3
        a = 60, arr[middle_index] = 9, begin_index = 4, end_index = 4, middle_index = 4
        the result is 5, the index for value nil
        [1, 3, 5, 7, 9, 60]

        Searching for 60 in [1, 3, 5, 7, 9, 11]
        a = 60, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2
        a = 60, arr[middle_index] = 9, begin_index = 3, end_index = 5, middle_index = 4
        a = 60, arr[middle_index] = 11, begin_index = 5, end_index = 5, middle_index = 5
        the result is 6, the index for value nil
        [1, 3, 5, 7, 9, 11, 60]

        Searching for -60 in [1, 3, 5, 7, 9]
        a = -60, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2
        a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 1, middle_index = 0
        a = -60, arr[middle_index] = 9, begin_index = 0, end_index = -1, middle_index = -1
        the result is 0, the index for value 1
        [-60, 1, 3, 5, 7, 9]

        Searching for -60 in [1, 3, 5, 7, 9, 11]
        a = -60, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2
        a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 1, middle_index = 0
        a = -60, arr[middle_index] = 11, begin_index = 0, end_index = -1, middle_index = -1
        the result is 0, the index for value 1
        [-60, 1, 3, 5, 7, 9, 11]

        Searching for -60 in [1]
        a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 0, middle_index = 0
        the result is 0, the index for value 1
        [-60, 1]

        Searching for 60 in [1]
        a = 60, arr[middle_index] = 1, begin_index = 0, end_index = 0, middle_index = 0
        the result is 1, the index for value nil
        [1, 60]

        Searching for 60 in []
        [60]
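
    For comparison, a more compact shape (a sketch we are adding, not from the original post) is the classic "lower bound" binary search over a half-open interval. It never needs the special begin_index >= end_index handling, works for the empty array, and always returns the leftmost valid position among equal values, which also answers the left-boundary note above:

        # Lower-bound binary search over the half-open interval [lo, hi).
        # Returns the first index at which `value` can be inserted while
        # keeping `arr` sorted, so arr.insert(insertion_point(arr, v), v) works.
        def insertion_point(arr, value)
          lo = 0
          hi = arr.length            # one past the last valid index
          while lo < hi
            mid = (lo + hi) / 2
            if arr[mid] < value
              lo = mid + 1           # insertion point lies to the right of mid
            else
              hi = mid               # arr[mid] >= value, so mid is still a candidate
            end
          end
          lo                         # lo == hi: the insertion point
        end

        arr = [1, 3, 5, 7, 9]
        arr.insert(insertion_point(arr, 6), 6)
        p arr  # => [1, 3, 5, 6, 7, 9]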

    Read the article

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking:

    I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:

        CREATE TABLE [dbo].[LogInvSearches_Daily](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [Inv_ID] [int] NOT NULL,
            [Site_ID] [int] NOT NULL,
            [LogCount] [int] NOT NULL,
            [LogDay] [smalldatetime] NOT NULL,
            CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
        ) ON [PRIMARY]

    This table has 132,000,000 records and is over 4 gigs. A sample of 10 rows from the table:

        ID  Inv_ID  Site_ID  LogCount  LogDay
        --  ------  -------  --------  -------------------
        1   486752  48       14        2009-07-21 00:00:00
        2   119314  51       16        2009-07-21 00:00:00
        3   313678  48       25        2009-07-21 00:00:00
        4   298863  0        1         2009-07-21 00:00:00
        5   119996  0        2         2009-07-21 00:00:00
        6   463777  534      7         2009-07-21 00:00:00
        7   339976  503      2         2009-07-21 00:00:00
        8   333501  570      4         2009-07-21 00:00:00
        9   453955  0        12        2009-07-21 00:00:00
        10  443291  0        4         2009-07-21 00:00:00

        (10 row(s) affected)

    I have the following index on LogInvSearches_Daily:

        /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/
        CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily]
        (
            [LogDay] ASC
        )
        INCLUDE ([Inv_ID], [LogCount])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
              DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    I need to pull inventory only from the Inventory table for a specific account id, and I have an index on Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. This query is currently taking 24 seconds to return the 5 rows:

        SELECT TOP 5
               Sum(LogCount) AS Views,
               DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank,
               Inv_ID
        FROM   LogInvSearches_Daily D (NOLOCK)
        WHERE  LogDay > DateAdd(d, -30, getdate())
               AND EXISTS( SELECT NULL
                           FROM   propertyControlCenter.dbo.Inventory (NOLOCK)
                           WHERE  Acct_ID = 18731
                                  AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID

        (1 row(s) affected)

    The execution plan:

        |--Top(TOP EXPRESSION:((5)))
           |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
              |--Segment
              |--Segment
                 |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
                    |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
                       |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                          |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                             |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
                             |  |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                             |  |  |--Constant Scan
                             |  |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
                             |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

        (13 row(s) affected)

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and it gives me essentially the same execution plan:

        --SET SHOWPLAN_TEXT ON;
        WITH getSearches AS (
            SELECT LogCount
                   -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                 , D.Inv_ID
            FROM   LogInvSearches_Daily D (NOLOCK)
                   INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                           ON Acct_ID = 18731
                          AND I.Inv_ID = D.Inv_ID
            WHERE  LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM   getSearches
        GROUP BY Inv_ID

        (1 row(s) affected)

        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
           |--Sort(ORDER BY:([D].[Inv_ID] ASC))
              |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                 |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
                 |  |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                 |  |  |--Constant Scan
                 |  |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
                 |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

        (8 row(s) affected)

    So given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster? Thanks, Dan
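
    One direction worth sketching (an illustration we are adding, not from the thread; the table and schedule are hypothetical): since the data is already per-day counts, a nightly pre-aggregation into a much smaller summary table, keyed the way the query reads it, can shrink the 30-day work dramatically:

        -- Hypothetical summary table: one row per (Inv_ID, LogDay),
        -- clustered so one inventory item's recent days sit together.
        CREATE TABLE dbo.LogInvSearches_Summary(
            Inv_ID   int           NOT NULL,
            LogDay   smalldatetime NOT NULL,
            LogCount int           NOT NULL,
            CONSTRAINT PK_LogInvSearches_Summary PRIMARY KEY CLUSTERED (Inv_ID, LogDay)
        );

        -- Run once per day (e.g. from a SQL Agent job) to fold in yesterday's rows.
        -- The DATEDIFF/DATEADD pair is the usual 2008-era idiom for "midnight
        -- yesterday" to "midnight today", avoiding mismatched-type range conversions.
        INSERT INTO dbo.LogInvSearches_Summary (Inv_ID, LogDay, LogCount)
        SELECT Inv_ID, LogDay, SUM(LogCount)
        FROM   dbo.LogInvSearches_Daily
        WHERE  LogDay >= DATEADD(d, DATEDIFF(d, 0, GETDATE()) - 1, 0)
           AND LogDay <  DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        GROUP BY Inv_ID, LogDay;

    The TOP 5 query can then join Inventory against the summary table instead of the 132-million-row detail table, touching at most 30 rows per inventory item.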

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to look up the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with the O3 option turned on, for g++ 4.4.)

    But if for some odd reason I wanted better performance than std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance-critical aspects of C++ programming.

    UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real-world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variable names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key)
                    return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;

            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                find(myvec, 'z');
            }
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                mymap['z'];
            }
            std::cout << "map scan: milli " << clock() - pre << "\n";

            return 0;
        }
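
    A sketch of two alternatives commonly suggested for this kind of read-heavy lookup (our illustration, not from the question; it assumes the TR1 library that ships with g++ 4.4):

        #include <tr1/unordered_map>  // hash table: O(1) average lookup, no ordering
        #include <algorithm>
        #include <vector>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        // Comparator so std::lower_bound can binary-search a vector<mystruct>
        // that has been sorted by key once, up front.
        inline bool by_key(const mystruct& a, const mystruct& b)
        {
            return a.key < b.key;
        }

        // O(log n) lookup over contiguous, cache-friendly storage.
        int find_sorted(const std::vector<mystruct>& ref, char key)
        {
            std::vector<mystruct>::const_iterator i =
                std::lower_bound(ref.begin(), ref.end(), mystruct(key), by_key);
            return (i != ref.end() && i->key == key) ? i->value : -1;
        }

        int main()
        {
            // Hash table alternative: average O(1) per lookup.
            std::tr1::unordered_map<char, int> myhash;

            std::vector<mystruct> myvec;
            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                myhash[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }
            std::sort(myvec.begin(), myvec.end(), by_key);  // sort once, search many times

            return find_sorted(myvec, 'z') == myhash['z'] ? 0 : 1;
        }

    For string variable names (the real use case in the update), the same trade-off applies: a hash map gives average O(1) lookups, while a sorted vector searched with lower_bound tends to win when the table is built once and then mostly read.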

    Read the article

  • What's the fastest lookup algorithm for a pair data structure (i.e, a map)?

    - by truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to look up the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with the O3 option turned on, for g++ 4.4.)

    But if for some odd reason I wanted better performance than std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance-critical aspects of C++ programming.

    UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real-world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variable names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key)
                    return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;

            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                find(myvec, 'z');
            }
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                mymap['z'];
            }
            std::cout << "map scan: milli " << clock() - pre << "\n";

            return 0;
        }

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why

    Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same-day response.

    Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar.

    Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares; you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.

    B. DON'T: Leave unread email lurking around

    Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.

    Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). This is often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying things like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.

    Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely, so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not superhuman; you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email-related action (e.g. create a calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference, etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved.

    Sometimes the thing that needs to be done based on receiving the email, you can (and want to) do immediately after reading the email. That is fine: you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See the later section for how.

    C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to

    Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed):

    "personal": Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules.

    "External" and "ViaBlog": The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly.

    "invites": I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check), i.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread.

    "Inbox": The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.

    "ToMe++": Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).

    "CC": Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me, so they are not as urgent (and if they are and follow up with me, they'll receive a link to this).

    "@ XYZ": Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.

    "Z Mass" and subfolders under it per distribution list (DL): Emails to aliases that are about topics that I am interested in, but that I don't formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.

    "Admin" folder, which resides under the "Z Mass" folder: Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.

    "BCC" folder, which resides under "Z Mass": Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).

    When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

    D. DO: Use Outlook Search folders in combination with categories

    As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether it can take 2 minutes to deal with for good, right now, or it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:

    ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
    Action – no email is required, but I need to do something.
    ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.
    WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
    SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.

    For all these 'next steps' use Outlook categories (right-click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with), process the search folders, starting with ToReply and Action.

    E. DO: Get into a Routine

    Now you have a system in place; get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:

    Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).
    Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.
    Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

    F. Other resources

    Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too.

    Comments about this post welcome at the original blog.

    Read the article

  • Login to Windows 8 Desktop Mode Automatically with ClassicStarter [Downloads]

    - by Asian Angel
    The other day we shared a quick keyboard tip for going straight to Desktop Mode in Windows 8 when you logged in. Today we are back with a small app that gets you straight to Desktop Mode with ‘set it and forget it’ ease.

    You will need to install ClassicStarter after you have extracted the contents of the zip file. Once that is done, simply start the app up and this is what you will see… The only thing you will need to do is click on the Classic Desktop Button. Once you have clicked on the Classic Desktop Button it will ‘grey out’. Simply exit the app, log out, and then log back into your system. The Start Screen will display for a moment or two, but everything will shift over to Desktop Mode automatically without any additional actions required on your part.

    To reverse the process and set the Start Screen as the default, just start the app up again and click on the Metro Desktop Button, exit the app, and then log out/log back in.

    Download ClassicStarter (MediaFire)

    VirusTotal Scan Results for the ClassicStarter Zip File

    [via NirmalTV.com]

    Read the article

  • Wireless doesn't work on a Lenovo V570

    - by Stephen
    I've had Ubuntu installed on my HD for about 3 months, but ever since I ran into this wireless issue I've kind of lost my enthusiasm for Ubuntu. I have zero experience getting around with/using console commands. I have a Lenovo V570. I got the driver update for the Broadcom networking card via the Additional Drivers application, but that did nothing. I love the look and feel of using Ubuntu, but I have very little technical experience in the matter. Any help would be awesome.

    When I scan for wireless connections while in Ubuntu, my computer picks up nothing, while on Win7 it will pick up the handful of wireless networks around my area. My wired connection is fine, but a laptop without working wireless rather defeats the point of it being a laptop. Cheers! Also, I just installed 11.10, if that helps any. Yes, I used the search before I posted this, but again I have ZERO understanding of the command stuff and need a meat-and-potatoes answer.

        stephen@ubuntu:~$ sudo lshw -class network
        [sudo] password for stephen:
          *-network UNCLAIMED
               description: Network controller
               product: BCM4313 802.11b/g/n Wireless LAN Controller
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:03:00.0
               version: 01
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list
               configuration: latency=0
               resources: memory:f1900000-f1903fff
          *-network
               description: Ethernet interface
               product: RTL8111/8168B PCI Express Gigabit Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:04:00.0
               logical name: eth0
               version: 06
               serial: f0:de:f1:63:98:14
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.1.78 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
               resources: irq:41 ioport:2000(size=256) memory:f1804000-f1804fff memory:f1800000-f1803fff

        stephen@ubuntu:~$ rfkill list all
        0: ideapad_wlan: Wireless LAN
                Soft blocked: yes
                Hard blocked: no
        1: acer-wireless: Wireless LAN
                Soft blocked: yes
                Hard blocked: no
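
    Two things stand out in that output (an editorial note, not part of the original post): the Broadcom card shows as UNCLAIMED, meaning no driver is currently bound to it, and both rfkill entries report "Soft blocked: yes", which keeps the radio switched off in software. Using the same rfkill tool already shown above, the soft block can be cleared like this:

        sudo rfkill unblock all    # clear the software block on every radio
        rfkill list all            # verify that "Soft blocked" now reads "no"

    If the card still shows UNCLAIMED after that, the BCM4313 driver (brcmsmac, or Broadcom's proprietary wl package from Additional Drivers) is still not loading, and that would be the next thing to chase.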

    Read the article

  • Here’s a Super Simple Trick to Defeating Fake Anti-Virus Malware

    - by The Geek
    You might be wondering why we have a screenshot of what appears to be AVG Anti-Virus, but is in fact fake anti-virus malware that holds your computer hostage until you pay up. Here's a really simple tip for defeating these types of malware, and a quick review of other options.

    Not sure what we're talking about? Be sure to check out our previous articles on cleaning up fake antivirus infections:

    How To Remove Internet Security 2010 and other Rogue/Fake Antivirus Malware
    How To Remove Antivirus Live and Other Rogue/Fake Antivirus Malware
    How To Remove Advanced Virus Remover and Other Rogue/Fake Antivirus Malware
    How To Remove Security Tool and other Rogue/Fake Antivirus Malware

    So what's the problem? Can't you just run an anti-virus scan? Well… it's not quite that simple. What actually happens is that these pieces of malware block you from running almost anything on your PC, and often prevent you from running apps from a Flash drive, with an error like this:

    Once you encounter this error, there are a couple of things you can do. The first one is almost stupidly simple, and works some of the time.

    Read the article

  • Smithsonian Showcases Video Game History with The Art of Video Games [Video]

    - by Jason Fitzpatrick
    The Art of Video Games is the Smithsonian's look at the history of video games; check out this video trailer to see what the exhibition is all about and hear from some notable folks. From the Smithsonian listing for the exhibition:

    "The Art of Video Games is one of the first exhibitions to explore the forty-year evolution of video games as an artistic medium, with a focus on striking visual effects and the creative use of new technologies. It features some of the most influential artists and designers during five eras of game technology, from early pioneers to contemporary designers. The exhibition focuses on the interplay of graphics, technology and storytelling through some of the best games for twenty gaming systems ranging from the Atari VCS to the PlayStation 3."

    The exhibit will be at the Smithsonian until the end of September and will then begin touring the country. Hit up the link below for more information.

    The Art of Video Games Tour [via Neatorama]

    Read the article

  • Stream Media and Live TV Across the Internet with Orb

    - by DigitalGeekery
    Looking for a way to stream your media collection across the Internet? Or perhaps watch and record TV remotely? Today we are going to look at how to do all that and more with Orb.

    Requirements

    Windows XP / Vista / 7 or Intel-based Mac with OS X 10.5 or later
    1 GB RAM or more
    Pentium 4 2.4 GHz or higher / AMD Athlon 3200+
    Broadband connection
    TV tuner for streaming and recording live TV (optional)

    Note: Slower internet connections may result in stuttering during playback.

    Installation and Setup

    Download and install Orb on your home computer. (Download link below.) You'll want to take the defaults for the initial portion of the install; the Orb Account setup portion of the install is when we will have to enter information and make some decisions. Choose your language and click Next.

    We'll need to create a user account and password. A valid email address is required, as we'll need to confirm the account later. Click Next.

    Now you'll want to choose your media sources. Orb will automatically look for folders that may contain media files. You can add or remove folders by clicking the (+) or (-) buttons. To remove a folder, click on it once to select it from the list and then click the minus (-) button. To add a folder, click the plus (+) button and browse for the folder. You can add local folders as well as shared folders from networked computers and USB-attached storage. Note: Both the host computer running Orb and the networked computer will need to be running to access shared network folders remotely. When you've selected all your media files, click Next. Orb will proceed to index your media files… When the indexing is complete, click Next.

    Orb TV Setup

    Note: Streaming live TV to Macs is not currently supported. If you have a TV tuner card connected to your PC, you can opt to configure Orb to stream live or recorded TV. Click Next to configure TV, or choose Skip if you don't wish to configure Orb for TV.

    If you have a digital tuner card, type in your Zip Code and click Get List to pull your channel listings. Select a TV provider from the list and click Next. If not, click Skip. You can select or deselect any channels by checking or un-checking the box next to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished.

    Next choose an analog provider, if necessary, and click Next. Select "Yes" or "No" for a set-top box and click Next. Just as we did with the digital tuner, select or deselect any channels by checking or un-checking the box next to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished. Now we're finished with the setup. Click Close.

    Accessing your Media Remotely

    Media files are accessed through a web-based interface. Before we go any further, however, we'll need to confirm our username and password. Check your inbox for an email from Orb Networks and click the enclosed confirmation link. You'll be prompted to enter the username and password you selected in your browser, then click Next. Your account will be confirmed.

    Now we're ready to enjoy our media remotely. To get started, point your browser to the MyCast website from your remote computer. (See link below.) Enter your credentials and click Log In. Once logged in, you'll be presented with the MyCast Home screen. By default you'll see a handful of "channels" such as a TV program guide, random audio and photos, video favorites, and weather. You can add, remove, or customize channels. To add additional channels, click on Add Channels at the top right and select from the dropdown list. To access your full media libraries, click Open Application at the top left and select from one of the options.

    Live and Recorded TV

    If you have a TV tuner card you configured for Orb, you'll see your program guide on the TV / Webcams screen. To watch or record a show, click on the program listing to bring up a detail box. Then click the red button to record, or the green button to play. When recording a show, you'll see a pulsating red icon at the top right of the listing in the program guide. If you want to watch live TV, you may be prompted to choose your media player, depending on your browser and settings. Playback should begin shortly.

    Note for Windows Media Center users: If you try to stream live TV in Orb while Windows Media Center is running on your PC, you'll get an error message. Click the Stop MediaCenter button and then try again.

    Audio

    On the Audio screen, you'll find your music files indexed by genre, artist, and album. You can play a selection by clicking once and then clicking the green play button, or by simply double-clicking. Playback will begin in the default media player for the streaming format.

    Video

    Video works essentially the same as audio. Click on a selection and press the green play button, or double-click on the video title. Video playback will begin in the default media player for the streaming format.

    Streaming Formats

    You can change the default streaming format in the control panel settings. To access the Control Panel, click on Open Applications and select Control Panel. You can also click Settings at the top right. Select General from the dropdown list and then click on the Streaming Formats tab. You are provided four options: Flash, Windows Media, .SDP, and .PLS.

    Creating Playlists

    To create playlists, drag and drop your media titles to the playlist work area on the right, or click Add to playlist on the top menu. Click Save when finished.

    Sharing your Media

    Orb allows you to share media playlists across the Internet with friends and family. There are a few ways to accomplish this. We'll start by clicking the Share button at the bottom of the playlist work area after you've compiled your playlist. You'll be prompted to choose a method by which to share your playlist. You have the option to share your playlist publicly or privately. You can share publicly through links, blogs, or on your Orb public profile. By choosing the Public Profile option, Orb will automatically create a profile page for you with a URL like http://public.orb.com/username that anyone can easily access on the Internet. The private sharing option allows you to invite friends by email and requires recipients to register with Orb. You can also give your playlist a custom name, or accept the auto-generated title. Click OK when finished. Users who visit your public profile will be able to view and stream any of your shared playlists to their computer or supported device.

    Portable Media Devices and Smartphones

    Orb can stream media to many portable devices and 3G phones. Streaming audio is supported on the iPhone and iPod Touch through the Safari browser. However, video and live TV streaming require the Orb Live iPhone app. Orb Live is available in the App Store for $9.99. To stream media to your portable device, go to the MyCast website in your mobile browser and log in. Browse for your media or playlist, make a selection, and play the media. Playback will begin. We found streaming music to both the Droid and the iPhone to work quite nicely. Video playback on the Droid, however, left a bit to be desired: the video looked good, but the audio tended to be out of sync.

    System Tray Control Panel

    By default Orb runs in the system tray on startup. To access the System Tray Control Panel, right-click on the Orb icon in the system tray and select Control Panel. Log in with your Orb username and password and click OK. From here you can add or remove media sources, manage accounts, change your password, and more. If you'd rather not run Orb on startup, click the General icon and unselect the checkbox next to "Start Orb when the system starts".

    Conclusion

    It may seem like a lot of steps, but getting Orb up and running isn't terribly difficult. Orb is available for both Windows and Intel-based Macs. It also supports streaming to many game consoles such as the Wii, PS3, and Xbox 360. If you are running Windows 7 on multiple computers, you may want to check out our write-up on how to stream music and video over the Internet with Windows Media Player 12.

    Downloads

    Download Orb

    Logon to MyCast

    Read the article

  • Awareness Makes Sure You’re Not Tuned Out While Listening to Music

    - by Jason Fitzpatrick
    iOS: You want to listen to your music but you don’t want to be totally unaware of your surroundings. What do you do? Install Awareness, a simple app that will alert you if your device mic picks up loud noises. Whether you want to fall asleep listening to some relaxing music while still hearing bumps in the night, or you want to listen to music when you’re out and about without being completely startled by events your earbuds are filtering out, Awareness has you covered. To use the app, just run it in tandem with your music, adjust the sensitivity to account for the ambient sound, and you’re in business. If any sounds exceed that level, they’ll be piped in through your headphones to alert you. Awareness is currently free in the App Store (down from a high of $6.99), so if you’d like to play around with it, now’s the time to grab a copy. Awareness [via Addictive Tips]

    Read the article

  • Convert a Row to a Column in Excel the Easy Way

    - by Matthew Guay
    Sometimes we’ve entered data in a column in Excel, only to realize later that it would be better to have this data in a row, or vice versa. Here’s a simple trick to convert any row or set of rows into a column, or vice versa, in Excel. Please note: this is tested in Excel 2003, 2007, and 2010. Here we took screenshots from Excel 2010 x64, but it works the same on the other versions.

    Convert a Row to a Column
    Here’s our data in Excel: We want to change these two columns into rows. Select all the cells you wish to convert, right-click, and select Copy (or simply press Ctrl+C). Now, right-click in the cell where you want to put the data in rows, and select “Paste Special…” Check the box at the bottom that says “Transpose”, and then click OK. Now your data that was in columns is in rows! This works exactly the same way for converting rows into columns. Here’s some data in rows: After copying and pasting special with Transpose selected, here’s the data in columns! This is a great way to get your data organized just like you want in Excel.
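    If you ever need to do the same transpose outside of Excel, say over data exported to a script, the operation is a one-liner in Python. A minimal sketch (the sample data is made up for illustration):

    rows = [
        ["Name", "Age"],
        ["Alice", 30],
        ["Bob", 25],
    ]

    # zip(*rows) groups the first items together, then the second items, and
    # so on: the same thing Paste Special > Transpose does to a range.
    columns = [list(col) for col in zip(*rows)]

    print(columns)  # [['Name', 'Alice', 'Bob'], ['Age', 30, 25]]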

    Read the article

  • Writing an algorithm on a 2D data set in plain English

    - by Alexandre P. Levasseur
    I have started an introductory Java class, and the material is absolutely horrendous. I have to get excellent grades to be accepted into the master’s degree, hence my very beginner question: In my assignment I have to write algorithms (no pseudo-code yet) to solve a board game (Sudoku). Essentially, the notes say that an algorithm is a specification of the input(s), the output(s), and the treatments applied to the input to get the output. My question concerns the wording of algorithms, because I could probably code it, but I can’t seem to put it on paper in a coherent way. The game has a 9x9 board, and one of the algorithms to write has to find the solution by looking at 3 squares (either horizontal or vertical) and seeing if one of the three sub-squares matches the number you are looking for. If none match, then the number you are looking to place is in one of the other 2 sets of 3 sub-squares (see image to get a better idea). I really can’t get my head around how to formulate the solution in the terms described above, or maybe it’s just too simple. Here’s what I was thinking: Input: A 2-dimensional set of data of size 9 by 9 to be solved, and a number to search for. Output: A 2-dimensional set of data of size 9 by 9, either solved or partially solved. Treatment: Scan each set of 3x9 and 9x3 squares. For each line or column of a 3x3 square, check if the number matches a line (or column). If it does, then move to the next line (or column). If not, then proceed to the next 3x3 square in the same line (or column). Rinse and repeat. Does that make sense as an algorithm written in plain English? I’m not looking for an answer to the algorithm per se, but rather for feedback on the formulation of algorithms in plain English.
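    For comparison, here is a rough Python sketch of the scan the treatment describes. The board representation (a 9x9 list of lists with 0 marking an empty sub-square) is an assumption, not part of the question:

    def can_place(board, row, col, number):
        # The number may not already appear in the row, the column,
        # or the 3x3 square containing (row, col).
        if number in board[row]:
            return False
        if any(board[r][col] == number for r in range(9)):
            return False
        r0, c0 = 3 * (row // 3), 3 * (col // 3)  # top-left of the 3x3 square
        return all(board[r][c] != number
                   for r in range(r0, r0 + 3)
                   for c in range(c0, c0 + 3))

    def scan_for(board, number):
        # The 'treatment' step: scan every empty sub-square and collect the
        # positions where the number could legally be placed.
        return [(r, c) for r in range(9) for c in range(9)
                if board[r][c] == 0 and can_place(board, r, c, number)]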

    Read the article

  • You Can't Win on Price

    - by David Dorf
    This year I did the majority of my Christmas shopping from the comfort of my home office. There aren't many things in stores you can't find online these days. I find it easier to search, research, and compare products online rather than walking the mall anyway. But there's a segment of the population that likes to be in the store, touching the products. For those people, smartphones put some of the e-commerce features I mentioned right there in the aisles. First it was RedLaser, then TheFind, ShopSavvy and many others. But the one that should be scaring retailers is Amazon's PriceCheck application. It lets you scan the product barcode, take a picture of the product, or speak the product's name. Once the product is identified, it shows the online prices, with Amazon at the top of the list. Within 10 seconds you can order the item, and Amazon Prime members get free 2-day shipping too. I don't think fashion and grocery retailers need to worry much, but I have to believe smartphones are helping Amazon win a little more of the brand-name hardgoods market. So what's a retailer to do? Best Buy has begun to put QR codes on their shelf labels that are easily scanned by smartphones and take the consumer to a Best Buy Web page where they can get extended information about the product. The consumer gets the additional information they want, and Best Buy avoids the price comparisons. Of course if a consumer chooses to use the Amazon PriceCheck app, then all bets are off. That's when Best Buy has to hope the in-store experience and customer service will save the sale. My point is that the internet makes information available to everyone, and smartphones make it available anywhere. Unless you want your store to be Amazon's local showroom, you need to be price-competitive but differentiate on other aspects of the shopping experience. With the cost of running a physical store, you can't win on price.

    Read the article

  • Move Firefox’s Tab Bar to the Top

    - by Asian Angel
    Would you prefer to have Firefox’s Tab Bar located at the top of the browser instead of its default location? See how easy it is to move the Tab Bar back and forth between the top and current positions “flip switch style” with the Tabs On Top extension. Note: the Tabs On Top extension supports the multi-row feature in TabMixPlus.

    Before
    You can see the “Tab Bar” in its default location here in our test browser…not bad, but what if you prefer having it located at the top of the browser?

    After
    As soon as you have installed the extension and restarted Firefox, the “Tab Bar” will have automatically moved to the top of the browser. You will most likely notice a slight decrease in tab height as well (which occurred during our tests). To move the “Tab Bar” back and forth between the top and default locations, just select/deselect “Tab Bar on top” in the “Toolbars Context Menu”. You can quickly reduce the size of the upper UI after hiding some of the other toolbars, and go even further if you like using extensions that will hide the “Title Bar”. This is definitely a good UI-matching extension for anyone using a Chrome-based theme in Firefox.

    Conclusion
    If you are unhappy with the default location of Firefox’s “Tab Bar”, then this extension will certainly provide an alternative option for you.

    Links
    Download the Tabs On Top extension (Mozilla Add-ons)

    Read the article

  • Get the MakeUseOf eBook Guide to Speeding Up Windows for Free

    - by ETC
    Our friends over at MakeUseOf.com have released yet another eBook in their series of Guides to, well, just about everything. This one gives you some tips for speeding up your Windows PC. The guide has a ton of different tips, and while I wouldn’t necessarily say you should follow every single tip to the letter (since everybody’s setup is different), it does give you lots of great ideas for speeding up your PC, as well as links to resources and instructions for how to perform various cleanup tasks. The best tips? Make sure to keep your PC crapware-free, upgrade your RAM if you’re low, scan for viruses, and run some type of disk cleanup on a regular basis. Download the MakeUseOf Windows on Speed Guide (PDF) [Direct Download Link] Windows on Speed [MakeUseOf]

    Read the article

  • Can’t eject external USB or Firewire drive in Windows 7

    - by Kelly Jones
    As a SharePoint developer, I work a lot with Virtual Machines (presently using Windows Virtual PC, with Windows 7).  I’m using these VMs with my laptop, and in order to get better performance, I’ve moved them to external hard drives.  (These drives have faster RPMs, larger caches, and a larger capacity than the internal drive.)  I have one large external drive at home, another similar drive at the office, and a small, slow portable drive that I carry with me. So, at the end of each day at the office, I copy the files from the external drive to my portable drive and then once I get home, I copy them from the portable to the larger external drive I leave at home.  I do this for a couple of reasons: firstly, so I can work at home, and secondly, so I have backup copies.  (Often, I feel like I’m in the movie “Office Space” when copying the files before I leave the office). Anyway, after the files are copied, I safely eject the external drives, and then hibernate my laptop.  I’ve been doing this for over a year now, but within the last couple of months I started to have issues disconnecting the drives.  Intermittently, some application/process would have a lock on some file on the drive that would keep Windows from safely ejecting it. After looking into it, I found that it was actually the Windows search service that was accessing the drive! Since I wasn’t using Windows search to look for stuff on these drives, I removed them as locations to index. To do this in Windows 7, you need to go to Indexing Options (just type “Indexing” into the search box in the Start menu…).  One of the choices displayed will then be Indexing Options, so click on it and you should then see a window similar to this:   Click on the Modify button and you’ll see this window: Notice the different drives listed above.  My “FreeAgent XTreme (F:)” drive was checked for some reason, which was causing the indexing service to scan the drive, looking for new files to make available in the search results.  Ever since I unchecked this box, I’ve been able to safely eject the drive.
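    If you want to see directly which process is holding files open before digging through settings, you can enumerate open file handles yourself. Here is an illustrative sketch using the third-party psutil Python package (pip install psutil); the drive letter is a placeholder, and you may need to run it elevated to inspect every process:

    import psutil

    DRIVE = "F:\\"  # placeholder: change to the drive that won't eject

    for proc in psutil.process_iter(["pid", "name"]):
        try:
            # List any files this process has open on the target drive.
            for f in proc.open_files():
                if f.path.upper().startswith(DRIVE.upper()):
                    print(proc.info["pid"], proc.info["name"], f.path)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass  # skip processes we aren't allowed to inspect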

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • How to remove HDD Low virus

    - by samsudeen
    “HDD Low virus” is a new fake system optimizer application that started affecting Windows-based computers (XP, Vista, Windows 7) worldwide starting Monday. It installs itself without notice, bypassing antivirus software. Infected computers suddenly pop up a system error similar to the screenshot below and try to shut down the computer. Though the major antivirus companies have not yet released an update for this virus, we can easily remove it using the steps below.

    Steps to remove HDD Low virus
    1. Press Ctrl+Alt+Delete, go to Task Manager -> Processes, and kill the process named [random number].exe (e.g. 123410.exe).
    2. Go to Run -> type msconfig to launch the System Configuration utility.
    3. On the Startup tab, uncheck all the services with random names (e.g. jygkgs.exe) and note the folder path of each service shown in the Command column.
    4. Go to that folder path and delete all the .exe files with random names manually (it is recommended to use the command prompt to delete the files).
    5. Delete all the HDD Low files in the below paths: %Desktop%\HDD Low.lnk, %Programs%\HDD Low\Uninstall HDD Low.lnk, %Programs%\HDD Low\HDD Low.lnk
    6. Open the registry using Run -> regedit.exe, then search for the below key and delete it: Software\Microsoft\Windows\CurrentVersion\Run -> “[random number].exe”
    7. Restart the computer.

    Also update your antivirus definitions and run a full scan of your computer to remove any affected files. This article, How to remove HDD Low virus, was originally published at Tech Dreams.
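    As a cross-check on steps 3 and 6, most of the startup entries msconfig shows live under the registry Run keys, and you can list them with a short read-only script to spot the random-named executable. A minimal sketch using Python's standard winreg module (Windows only):

    import winreg

    RUN_PATH = r"Software\Microsoft\Windows\CurrentVersion\Run"

    def list_run_entries(hive, hive_name):
        # Print every value under the Run key so random-named entries stand out.
        try:
            key = winreg.OpenKey(hive, RUN_PATH)
        except OSError:
            return  # key not present
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values
            print(f"{hive_name}\\{RUN_PATH}: {name} = {value}")
            index += 1

    list_run_entries(winreg.HKEY_CURRENT_USER, "HKCU")
    list_run_entries(winreg.HKEY_LOCAL_MACHINE, "HKLM")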

    Read the article

  • How to Use the Avira Rescue CD to Clean Your Infected PC

    - by The Geek
    When you’ve got a PC completely infected with viruses, sometimes it’s best to reboot into a rescue disc and run a full virus scan from there. Here’s how to use the Avira Rescue CD to clean an infected PC. We’ve previously covered how to clean an infected PC using the BitDefender or Kaspersky rescue disks, and loads of readers have written in saying thanks and reporting that they were able to clean their PCs easily. Be sure to check out our previous articles on the subject: How to Use the BitDefender Rescue CD to Clean Your Infected PC, and How to Use the Kaspersky Rescue Disk to Clean Your Infected PC. Otherwise, keep reading for how it all works with Avira, a well-respected anti-virus solution.

    Read the article

  • New Bundling and Minification Support (ASP.NET 4.5 Series)

    - by ScottGu
    This is the sixth in a series of blog posts I'm doing on ASP.NET 4.5. The next release of .NET and Visual Studio include a ton of great new features and capabilities. With ASP.NET 4.5 you'll see a bunch of really nice improvements with both Web Forms and MVC - as well as in the core ASP.NET base foundation that both are built upon. Today’s post covers some of the work we are doing to add built-in support for bundling and minification into ASP.NET - which makes it easy to improve the performance of applications. This feature can be used by all ASP.NET applications, including both ASP.NET MVC and ASP.NET Web Forms solutions.

    Basics of Bundling and Minification
    As more and more people use mobile devices to surf the web, it is becoming increasingly important that the websites and apps we build perform well with them. We’ve all tried loading sites on our smartphones – only to eventually give up in frustration as they load slowly over a slow cellular network. If your site/app loads slowly like that, you are likely losing potential customers because of bad performance. Even with powerful desktop machines, the load time of your site and its perceived performance can make an enormous difference to customer perception. Most websites today are made up of multiple JavaScript and CSS files to separate the concerns and keep the code base tight. While this is a good practice from a coding point of view, it often has some unfortunate consequences for the overall performance of the website. Multiple JavaScript and CSS files require multiple HTTP requests from a browser – which in turn can slow down the page load time.

    Simple Example
    Below I’ve opened a local website in IE9 and recorded the network traffic using IE’s built-in F12 developer tools. As shown below, the website consists of 5 CSS and 4 JavaScript files which the browser has to download. Each file is currently requested separately by the browser and returned by the server, and the process can take a significant amount of time proportional to the number of files in question.

    Bundling
    ASP.NET is adding a feature that makes it easy to “bundle” or “combine” multiple CSS and JavaScript files into fewer HTTP requests. This causes the browser to request a lot fewer files and in turn reduces the time it takes to fetch them. Below is an updated version of the above sample that takes advantage of this new bundling functionality (making only one request for the JavaScript and one request for the CSS): The browser now has to send fewer requests to the server. The content of the individual files has been bundled/combined into the same response, but the content of the files remains the same - so the overall file size is exactly the same as before the bundling. But notice how even on a local dev machine (where the network latency between the browser and server is minimal), the act of bundling the CSS and JavaScript files together still manages to reduce the overall page load time by almost 20%. Over a slow network the performance improvement would be even better.

    Minification
    The next release of ASP.NET is also adding a new feature that makes it easy to reduce or “minify” the download size of the content as well. This is a process that removes whitespace, comments and other unneeded characters from both CSS and JavaScript. The result is smaller files, which will download and load in a browser faster.
The graph below shows the performance gain we are seeing when both bundling and minification are used together: Even on my local dev box (where the network latency is minimal), we now have a 40% performance improvement from where we originally started. On slow networks (and especially with international customers), the gains would be even more significant.

Using Bundling and Minification inside ASP.NET
The upcoming release of ASP.NET makes it really easy to take advantage of bundling and minification within projects and see performance gains like in the scenario above. The way it does this allows you to avoid having to run custom tools as part of your build process – instead ASP.NET has added runtime support to perform the bundling/minification for you dynamically (caching the results to make sure perf is great). This enables a really clean development experience and makes it super easy to start to take advantage of these new features. Let’s assume that we have a simple project that has 4 JavaScript files and 6 CSS files:

Bundling and Minifying the .css files
Let’s say you wanted to reference all of the stylesheets in the “Styles” folder above on a page. Today you’d have to add multiple CSS references to get all of them – which would translate into 6 separate HTTP requests: The new bundling/minification feature now allows you to instead bundle and minify all of the .css files in the Styles folder – simply by sending a URL request to the folder (in this case “styles”) with an appended “/css” path after it. For example: This will cause ASP.NET to scan the directory, bundle and minify the .css files within it, and send back a single HTTP response with all of the CSS content to the browser. You don’t need to run any tools or pre-processors to get this behavior. This enables you to cleanly separate your CSS into separate logical .css files and maintain a very clean development experience – while not taking a performance hit at runtime for doing so. The Visual Studio designer will also honor the new bundling/minification logic – so you’ll still get a WYSIWYG designer experience inside VS.

Bundling and Minifying the JavaScript files
Like the CSS approach above, if we wanted to bundle and minify all of our JavaScript into a single response we could send a URL request to the folder (in this case “scripts”) with an appended “/js” path after it: This will cause ASP.NET to scan the directory, bundle and minify the .js files within it, and send back a single HTTP response with all of the JavaScript content to the browser. Again – no custom tools or build steps were required in order to get this behavior. And it works with all browsers.

Ordering of Files within a Bundle
By default, when files are bundled by ASP.NET they are sorted alphabetically first, just like they are shown in Solution Explorer. Then they are automatically shifted around so that known libraries and their custom extensions such as jQuery, MooTools and Dojo are loaded before anything else. So the default order for the merged bundling of the Scripts folder as shown above will be: jquery-1.6.2.js, jquery-ui.js, jquery.tools.js, a.js. By default, CSS files are also sorted alphabetically and then shifted around so that reset.css and normalize.css (if they are there) will go before any other file.
So the default sorting of the bundling of the Styles folder as shown above will be: reset.css, content.css, forms.css, globals.css, menu.css, styles.css. The sorting is fully customizable, though, and can easily be changed to accommodate most use cases and any common naming pattern you prefer. The goal with the out-of-the-box experience is to have smart defaults that you can just use and be successful with.

Any number of directories/sub-directories supported
In the example above we just had a single “Scripts” and “Styles” folder for our application. This works for some application types (e.g. single page applications). Often, though, you’ll want to have multiple CSS/JS bundles within your application – for example: a “common” bundle that has core JS and CSS files that all pages use, and then page-specific or section-specific files that are not used globally. You can use the bundling/minification support across any number of directories or sub-directories in your project – this makes it easy to structure your code so as to maximize the bundling/minification benefits. Each directory by default can be accessed as a separate URL-addressable bundle.

Bundling/Minification Extensibility
ASP.NET’s bundling and minification support is built with extensibility in mind, and every part of the process can be extended or replaced.

Custom Rules
In addition to enabling the out-of-the-box, directory-based bundling approach, ASP.NET also supports the ability to register custom bundles using a new programmatic API we are exposing. The below code demonstrates how you can register a “customscript” bundle using code within an application’s Global.asax class. The API allows you to add/remove/filter files that go into the bundle on a very granular level: The above custom bundle can then be referenced anywhere within the application using the below <script> reference:

Custom Processing
You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundled files (for example: custom minification rules, support for Sass, LESS or CoffeeScript syntax, etc). In the example below we are indicating that we want to replace the built-in minification transforms with custom MyJsTransform and MyCssTransform classes. They subclass the JavaScript and CSS minifiers respectively and can add extra functionality: The end result of this extensibility is that you can plug into the bundling/minification logic at a deep level and do some pretty cool things with it.

2 Minute Video of Bundling and Minification in Action
Mads Kristensen has a great 90 second video that shows off using the new Bundling and Minification feature. You can watch the 90 second video here.

Summary
The new bundling and minification support within the next release of ASP.NET will make it easier to build fast web applications. It is really easy to use, and doesn’t require major changes to your existing dev workflow. It also supports a rich extensibility API that enables you to customize it however you want. You can easily take advantage of this new support within ASP.NET MVC, ASP.NET Web Forms and ASP.NET Web Pages based applications. Hope this helps, Scott

P.S. In addition to blogging, I use Twitter to do quick posts and share links. My Twitter handle is: @scottgu
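
Conceptually, here is what a bundling plus naive minification step does, sketched in Python. This is not the ASP.NET implementation: real minifiers are far more careful, and the folder and file names are hypothetical.

import re
from pathlib import Path

KNOWN_LIBS = ("jquery", "mootools", "dojo")  # well-known libraries load first

def bundle_scripts(folder):
    # Sort alphabetically, then shift known libraries to the front,
    # mirroring the default ordering rules described above.
    files = sorted(Path(folder).glob("*.js"), key=lambda p: p.name.lower())
    files.sort(key=lambda p: 0 if p.name.lower().startswith(KNOWN_LIBS) else 1)
    return "\n".join(p.read_text() for p in files)

def naive_minify(js):
    js = re.sub(r"/\*.*?\*/", "", js, flags=re.S)  # strip /* block */ comments
    js = re.sub(r"^\s*//.*$", "", js, flags=re.M)  # strip whole-line // comments
    return re.sub(r"\n\s*\n", "\n", js).strip()    # collapse blank lines

# One HTTP response instead of several separate <script> requests:
# payload = naive_minify(bundle_scripts("Scripts"))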

    Read the article

  • SPARC64 VII+ Processor Core License Factor Reduced by 33%

    - by john.shell
    The Oracle processor core license factor has been a popular topic the last few months.  For those partners new to Oracle software licensing, the processor core license factor determines the number of licensed CPUs that are required when running Oracle software (those charged on a per-CPU basis) on multi-core processors. My last entry talked about the core factor reduction for our T3 processor.  The core license factor for our newly announced SPARC64 VII+ processor is 0.5, which is a 33% reduction from the 0.75 rate used with our SPARC64 VI and VII processors. What does this mean for our partners?  Increased opportunity.  This change, similar to our T3-based systems, means that our hardware is the preferred platform for Oracle software. Still a little dizzy on the breadth of Oracle's software offering?  Do a simple scan of Oracle's software price lists. Consider this your target market. This change allows you to focus on total solution price or price/performance, not server prices or per-core performance (a standard IBM sales tactic). That's the offensive side of the game.  Don't forget your defense.  One of the biggest customer benefits around the M-Series is investment protection.  The combination of a simple processor/board upgrade, along with a reduction in the processor core license factor, makes upgrading one of the best financial moves for our customers. One reminder: the update to the processor core license factor only applies to the new VII+ processor - NOT the SPARC64 VI or VII processors.  You can find the official table here.
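    The arithmetic behind the factor is simple: Oracle's per-processor metric multiplies the total cores by the core factor and rounds the result up to a whole number of licenses. A quick sketch with a hypothetical server configuration:

    import math

    def licenses_required(total_cores, core_factor):
        # Oracle's per-processor rule: total cores x core factor, rounded up.
        return math.ceil(total_cores * core_factor)

    cores = 4 * 4  # e.g. a hypothetical four-socket box with quad-core SPARC64 CPUs
    print(licenses_required(cores, 0.75))  # SPARC64 VI/VII factor -> 12 licenses
    print(licenses_required(cores, 0.5))   # SPARC64 VII+ factor   ->  8 licenses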

    Read the article

< Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >