Search Results

Search found 4640 results on 186 pages for 'unique'.

Page 58/186 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • What are the Best Virtual Desktop Managers for Windows 7 excluding Dexpot? [closed]

    - by user233641
My question differs from others because my list of important features is, I believe, unique. The necessary features are:
- Dual-monitor support
- 6 desktops minimum
- Different icons can be created on different desktops
- Reliable, and does not delete or remove icons without input from me
- Ability to save profiles and reload them if necessary
- Ability to change the home desktop to a different one
- Reasonably easy to use
- Keyboard support
- Good email support

    Read the article

  • LAN extension over wide area

    - by avinash
    When we use technology like a leased line to extend a LAN over a wide area (for example, when connecting two offices so that hosts in both offices use private IP addresses), why do we use encapsulations like PPP or HDLC? Why can't we use the Ethernet header to communicate, since MAC addresses are unique and can easily be used to identify hosts, just like on a small-area LAN? This question may seem a bit absurd, but it has been bugging me, so please explain.

    Read the article

  • PowerShell script

    - by clide
    Do you think we can create a script in PowerShell where a WEP key profile can be sent to the Organizationally Unique Identifier (OUI) of each laptop on my network? Where do I start?

    Read the article

  • New hire expectations... (Am I being unreasonable?)

    - by user295841
    I work for a very small custom software shop. We currently consist of me and my boss. My boss is an old FoxPro DOS developer, and OOP makes him uncomfortable. He is planning on taking a back seat in the next few years to hopefully enjoy a “partial retirement”. I will be taking over the day-to-day operations, and we are now desperately looking for more help. We tried Monster.com, Dice.com, and others a few years ago when we started our search. We had no success. We have tried outsourcing overseas (total disaster), hiring kids right out of college (mostly a disaster, but that’s where I came from), interns (good for them, not so good for us), and hiring laid-off “experienced” developers (there was a reason they were laid off). I have heard hiring practices discussed on podcasts, blogs, etc., and have tried a few. The “Fizz Buzz” test was a good one (a minimal version is sketched after the project spec below). One kid looked physically ill before he finally gave up. I think my problem is that I have grown so much as a developer since I started here that I now have a high standard. I hear and read very intelligent people on podcasts and blogs, and I know that there are lots of people out there who can do the job. I don’t want to settle for less than a “good” developer. Perhaps my expectations are unreasonable. I expect any good developer (entry level or experienced) to be billable (at least paying their own wage) in under one month. I expect any good developer to be able to be productive (at least dangerous) in any language or technology with only a few days of research/training. I expect any good developer to be able to take a project from initial customer request to completion with little or no help from others. Am I being unreasonable? What constitutes a valuable developer? What should be expected of an entry-level developer? What should be expected of an experienced developer? I realize that everyone is different, but there has to be some sort of expectations standard, right? I have been giving the test project below to potential candidates to weed them out. Good idea? Too much? Too little? Please let me know what you think. Thanks.

    Project ID: T00001
    Description: Order Entry System
    Deadline: 1 Week

    Scope
    The scope of this project is to develop a fully functional order entry system. Screen/form design must be user friendly and promote efficient data entry and modification. User experience (navigation, screen/form layouts, look and feel…) is at the developer’s discretion. The system may be developed using any technologies that conform to the technical and system requirements.

    Deliverables
    - Complete source code
    - Database setup instructions (scripts or restorable backup)
    - Application installation instructions (installer or installation procedure)
    - Any necessary documentation

    Technical Requirements
    - Server Platform – Windows XP / Windows Server 2003 / SBS
    - Client Platform – Windows XP
    - Web Browser (if applicable) – IE 8
    - Database – At developer’s discretion (must be a relational SQL database)
    - Language – At developer’s discretion
    - All data must be normalized. (+)
    - All data must maintain referential integrity. (++)
    - All data must be indexed for optimal performance.
    - System must handle concurrency.

    System Requirements

    Customer Maintenance
    - Customer records must have a unique ID.
    - Customer data will include Name, Address, Phone, etc.
    - User must be able to perform all CRUD (Create, Read, Update, and Delete) operations on the Customer table.
    - User must be able to enter a specific Customer ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a customer to edit.
    - Validation must be performed prior to database commit.
    - Customer record cannot be deleted if the customer has an order in the system. (++)

    Inventory Maintenance
    - Part records must have a unique ID.
    - Part data will include Description, Price, UOM (Unit of Measure), etc.
    - User must be able to perform all CRUD operations on the Part table.
    - User must be able to enter a specific Part ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a part to edit.
    - Validation must be performed prior to database commit.
    - Part record cannot be deleted if the part has been used in an order. (++)

    Order Entry
    - Order records must have a unique auto-incrementing key (Order Number).
    - Order data must be split into a header/detail structure. (+)
    - An order can contain an infinite number of detail records.
    - Order header data will include Order Number, Customer ID (++), Order Date, Order Status (Open/Closed), etc.
    - Order detail data will include Part Number (++), Quantity, Price, etc.
    - User must be able to perform all CRUD operations on the order tables.
    - User must be able to enter a specific Order Number to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find an order to edit.
    - User must be able to print an order form from within the order entry form.
    - Validation must be performed prior to database commit.

    Reports
    - Customer Listing – All customers in the system.
    - Inventory Listing – All parts in the system.
    - Open Order Listing – All open orders in the system.
    - Customer Order Listing – All orders for a specific customer.
    - All reports must include sort and filter functions where applicable, e.g. Customer Listing by range of Customer IDs, or Open Order Listing by date range.
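
    For reference, here is a minimal C# version of the “Fizz Buzz” screening exercise mentioned above (the classic spec: print 1 through 100, printing “Fizz” for multiples of 3, “Buzz” for multiples of 5, and “FizzBuzz” for multiples of both):

    using System;

    class FizzBuzz
    {
        static void Main()
        {
            for (int i = 1; i <= 100; i++)
            {
                if (i % 15 == 0) Console.WriteLine("FizzBuzz");   // multiple of both 3 and 5
                else if (i % 3 == 0) Console.WriteLine("Fizz");
                else if (i % 5 == 0) Console.WriteLine("Buzz");
                else Console.WriteLine(i);
            }
        }
    }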

    Read the article

  • Hibernate without primary keys generated by db?

    - by Michael Jones
    I'm building a data warehouse and want to use InfiniDB as the storage engine. However, it doesn't allow primary keys or foreign key constraints (or any constraints, for that matter). Hibernate complains "The database returned no natively generated identity value" when I perform an insert. Each table is relational, and contains a unique integer column that was previously used as the primary key - I want to keep that, but just not have the constraint in the db that the column is the primary key. I'm assuming the problem is that Hibernate expects the db to return a generated key. Is it possible to override this behaviour so I can set the primary key field's value myself and keep Hibernate happy?

    -- edit --

    Two of the mappings are as follows:

    <?xml version="1.0"?>
    <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
    <!-- Generated Jun 1, 2010 2:49:51 PM by Hibernate Tools 3.2.1.GA -->
    <hibernate-mapping>
        <class name="com.example.project.Visitor" table="visitor" catalog="orwell">
            <id name="id" type="java.lang.Long">
                <column name="id" />
                <generator class="identity" />
            </id>
            <property name="firstSeen" type="timestamp">
                <column name="first_seen" length="19" />
            </property>
            <property name="lastSeen" type="timestamp">
                <column name="last_seen" length="19" />
            </property>
            <property name="sessionId" type="string">
                <column name="session_id" length="26" unique="true" />
            </property>
            <property name="userId" type="java.lang.Long">
                <column name="user_id" />
            </property>
            <set name="visits" inverse="true">
                <key>
                    <column name="visitor_id" />
                </key>
                <one-to-many class="com.example.project.Visit" />
            </set>
        </class>
    </hibernate-mapping>

    and:

    <?xml version="1.0"?>
    <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
    <!-- Generated Jun 1, 2010 2:49:51 PM by Hibernate Tools 3.2.1.GA -->
    <hibernate-mapping>
        <class name="com.example.project.Visit" table="visit" catalog="orwell">
            <id name="id" type="java.lang.Long">
                <column name="id" />
                <generator class="identity" />
            </id>
            <many-to-one name="visitor" class="com.example.project.Visitor" fetch="join" cascade="all">
                <column name="visitor_id" />
            </many-to-one>
            <property name="visitId" type="string">
                <column name="visit_id" length="20" unique="true" />
            </property>
            <property name="startTime" type="timestamp">
                <column name="start_time" length="19" />
            </property>
            <property name="endTime" type="timestamp">
                <column name="end_time" length="19" />
            </property>
            <property name="userAgent" type="string">
                <column name="user_agent" length="65535" />
            </property>
            <set name="pageViews" inverse="true">
                <key>
                    <column name="visit_id" />
                </key>
                <one-to-many class="com.example.project.PageView" />
            </set>
        </class>
    </hibernate-mapping>
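
    For context, the behaviour in question is controlled by the <generator> element in the mappings above. A sketch of one direction to explore (not a confirmed fix from the original post) is Hibernate's assigned generator, which tells Hibernate that the application will set the identifier itself rather than expecting a database-generated value:

    <!-- Sketch only: the rest of the mapping stays as in the originals above;
         the application must set "id" itself before calling save(). -->
    <id name="id" type="java.lang.Long">
        <column name="id" />
        <generator class="assigned" />
    </id>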

    Read the article

  • Silverlight TV with Myself, John Papa, Shawn Wildermuth and Ward Bell

    - by dwahlin
    I had the chance to go on a live episode of Channel 9 while at DevConnections and had a lot of fun chatting about various Silverlight topics and answering some fairly unique questions posted on Twitter.  Here’s more info on the episode from John Papa’s blog: John interviews a panel of 3 well known Silverlight leaders including Shawn Wildermuth, Dan Wahlin, and Ward Bell at the Silverlight 4 launch event. The guest panel answers questions sent in from Twitter about the features in Silverlight 4, thoughts on MVVM, and the panel members' experiences developing Silverlight. This is a great chance to hear from some of the leading Silverlight minds. These guys are all experts at building business applications with Silverlight. Relevant links: John's Blog and on Twitter (@john_papa) Shawn's Blog and on Twitter (@shawnwildermuth) Dan's Blog and on Twitter (@danwahlin) Ward's Blog and on Twitter (@wardbell) Silverlight Training Course on Channel 9 Follow us on Twitter @SilverlightTV or on the web at http://silverlight.tv You can see the episode online by clicking the image below:

    Read the article

  • Java Embedded @ JavaOne: Q & A

    - by terrencebarr
    There has been a lot of interest in Java Embedded @ JavaOne since it was announced a short while ago (see my previous post). As this is a new conference, we have received a number of questions about it. So we put together a brief Q & A on audience focus, dates, registration, pricing, submissions, etc. Hope this helps and, remember, the Call for Papers ends next week, Jul 18th 2012!

    Cheers,
    – Terrence

    Java Embedded @ JavaOne : Q & A

    Q. Where can I learn more about “Java Embedded @ JavaOne”?
    A. Please visit: http://oracle.com/javaone/embedded

    Q. What is the purpose of “Java Embedded @ JavaOne”?
    A. This net-new event is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, a unique occasion to come together and learn about how they can use Java Embedded technologies for new business opportunities.

    Q. What broad audiences would benefit by attending “Java Embedded @ JavaOne”?
    A. Java licensees; Government agencies; ISVs; Device Manufacturers; Service Providers such as Telcos, Utilities, Healthcare, Energy, Smart Grid/Smart Metering; Automotive/Telematics; Home/Building Automation; Factory Automation; Media/TV; and Payment vendors.

    Q. What business titles would benefit by attending “Java Embedded @ JavaOne”?
    A. The ideal audience for this event is business and technical decision makers (e.g. System Integrators, CTO, CXO, Chief Architects/Architects, Business Development Managers, Project Managers, Purchasing Managers, Technical Leads, Senior Decision Makers, Practice Leads, R&D Heads, and Development Managers/Leads).

    Q. When is “Java Embedded @ JavaOne” taking place?
    A. The event takes place on Wednesday, Oct. 3rd through Thursday, Oct. 4th.

    Q. Where is “Java Embedded @ JavaOne” taking place?
    A. The event takes place in the Hotel Nikko.

    Q. Won’t “Java Embedded @ JavaOne” impact the flagship JavaOne conference, since the Hotel Nikko is one of the 3 flagship JavaOne conference venue hotels?
    A. No. Separate space in the Hotel Nikko will be used for “Java Embedded @ JavaOne” and will in no way impact the scale and scope of the flagship JavaOne conference’s content mix.

    Q. Will there be a call for papers for “Java Embedded @ JavaOne”?
    A. Yes. The call for papers has started, but it is ONLY for business-focused submissions.

    Q. What type of business submissions can I make for “Java Embedded @ JavaOne”?
    A. We are accepting 3 types of business submissions:
    - Best Practices: Java Embedded business solutions, methods, and techniques that consistently show results superior to those achieved with other means, as well as discussions on how Java Embedded can improve business operations and increase competitive differentiation and profitability.
    - Case Studies: Discussions with Oracle customers and partners that describe the unique business drivers that convinced them to implement Java Embedded as part of an infrastructure technology mix. The discussions will highlight the issues they faced, the decision making involved, and the implementation choices made to create value and improve business differentiation.
    - Panel: Moderator-driven open discussion focused on the emerging opportunities Java Embedded offers businesses, as well as other topics such as strategy, overcoming common challenges, etc.

    Q. What is the call for papers timeline for “Java Embedded @ JavaOne”?
    A. The timeline is as follows:
    - CFP Launched – June 18th
    - Deadline for submissions – July 18th
    - Notifications (Accepts/Declines) – week of July 29th
    - Deadline for speakers to accept speaker invitation – August 10th
    - Presentations due for review – August 31st

    Q. Where can I find more call for paper details for “Java Embedded @ JavaOne”?
    A. Please go to: http://www.oracle.com/javaone/embedded/call-for-papers/information/index.html

    Q. How much does it cost to attend “Java Embedded @ JavaOne”?
    A. The cost to attend is:
    - $595.00 U.S. — Early Bird (Launch date – July 13, 2012)
    - $795.00 U.S. — Pre-Registration (July 14 – September 28, 2012)
    - $995.00 U.S. — Onsite Registration (September 29 – October 4, 2012)

    Q. Can an attendee of the flagship JavaOne event and Oracle OpenWorld attend “Java Embedded @ JavaOne”?
    A. Yes. Attendees of both the flagship JavaOne event and Oracle OpenWorld can attend “Java Embedded @ JavaOne” by purchasing a $100.00 U.S. upgrade to their full conference pass.

    Filed under: Mobile & Embedded Tagged: Call for Papers, Java Embedded @ JavaOne, JavaOne San Francisco

    Read the article

  • The Mysterious ARR Server Farm to URL Rewrite link

    Application Request Routing (ARR) is a reverse proxy plug-in for IIS7+ that does many things, including functioning as a load balancer. For this post, I'm assuming that you already have an understanding of ARR. Today I wanted to find out how the mysterious link between ARR and URL Rewrite is maintained. Let me explain. ARR is unique in that it doesn't work by itself. It sits on top of IIS7 and uses URL Rewrite. As a result, ARR depends on URL Rewrite to catch the traffic…
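
    To make the dependency concrete, here is roughly what the global URL Rewrite rule that ARR generates for a server farm looks like in applicationHost.config (a sketch only: the farm name myFarm is illustrative, and the generated rule can differ by ARR version):

    <system.webServer>
      <rewrite>
        <globalRules>
          <!-- Illustrative: ARR hands matching requests to the farm via URL Rewrite -->
          <rule name="ARR_myFarm_loadbalance" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
            <match url="*" />
            <action type="Rewrite" url="http://myFarm/{R:0}" />
          </rule>
        </globalRules>
      </rewrite>
    </system.webServer>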

    Read the article

  • MVC 2 Editor Template for Radio Buttons

    - by Steve Michelotti
    A while back I blogged about how to create an HTML Helper to produce a radio button list. In that post, my HTML helper was “wrapping” the FluentHtml library from MvcContrib to produce the following html output (given an IEnumerable list containing the items “Foo” and “Bar”):

    1: <div>
    2: <input id="Name_Foo" name="Name" type="radio" value="Foo" /><label for="Name_Foo" id="Name_Foo_Label">Foo</label>
    3: <input id="Name_Bar" name="Name" type="radio" value="Bar" /><label for="Name_Bar" id="Name_Bar_Label">Bar</label>
    4: </div>

    With the release of MVC 2, we now have editor templates we can use that rely on metadata to allow us to customize our views appropriately. For example, for the radio buttons above, we want the “id” attribute to be differentiated and unique, and we want the “name” attribute to be the same across radio buttons so the buttons will be grouped together and so model binding will work appropriately. We also want the “for” attribute in the <label> element to be set to correctly point to the id of the corresponding radio button. The default behavior of the RadioButtonFor() method that comes OOTB with MVC produces the same value for the “id” and “name” attributes, so this isn’t exactly what I want out of the box if I’m trying to produce the HTML markup above.

    If we use an EditorTemplate, the first gotcha that we run into is that, by default, the templates just work on your view model’s property. But in this case, we *also* need the list of items to populate all the radio buttons. It turns out that the EditorFor() methods do give you a way to pass in additional data. There is an overload of the EditorFor() method where the last parameter allows you to pass an anonymous object for “extra” data that you can use in your view – it gets put on the view data dictionary:

    1: <%: Html.EditorFor(m => m.Name, "RadioButtonList", new { selectList = new SelectList(new[] { "Foo", "Bar" }) })%>

    Now we can create a file called RadioButtonList.ascx that looks like this:

    1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
    2: <%
    3: var list = this.ViewData["selectList"] as SelectList;
    4: %>
    5: <div>
    6: <% foreach (var item in list) {
    7: var radioId = ViewData.TemplateInfo.GetFullHtmlFieldId(item.Value);
    8: var checkedAttr = item.Selected ? "checked=\"checked\"" : string.Empty;
    9: %>
    10: <input type="radio" id="<%: radioId %>" name="<%: ViewData.TemplateInfo.HtmlFieldPrefix %>" value="<%: item.Value %>" <%: checkedAttr %>/>
    11: <label for="<%: radioId %>"><%: item.Text %></label>
    12: <% } %>
    13: </div>

    There are several things to note about the code above. First, you can see on line #3 that it’s getting the SelectList out of the view data dictionary. Then on line #7 it uses the GetFullHtmlFieldId() method from the TemplateInfo class to ensure we get unique IDs. We pass the Value to this method so that it will produce IDs like “Name_Foo” and “Name_Bar” rather than just “Name”, which is our property name. However, for the “name” attribute (on line #10) we can just use the normal HtmlFieldPrefix property so that we ensure all radio buttons have the same name, which corresponds to the view model’s property name. We also get to leverage the fact that a SelectListItem has a Boolean Selected property, so we can set the checkedAttr variable on line #8 and use it on line #10. Finally, it’s trivial to set the correct “for” attribute for the <label> on line #11 since we already produced that value.

    Because the TemplateInfo class provides all the metadata for our view, we’re able to produce this view that is widely re-usable across our application. In fact, we can create a couple of HTML helpers to better encapsulate this call and make it more user friendly:

    1: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, params string[] items)
    2: {
    3: return htmlHelper.RadioButtonList(expression, new SelectList(items));
    4: }
    5:
    6: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> items)
    7: {
    8: var func = expression.Compile();
    9: var result = func(htmlHelper.ViewData.Model);
    10: var list = new SelectList(items, "Value", "Text", result);
    11: return htmlHelper.EditorFor(expression, "RadioButtonList", new { selectList = list });
    12: }

    This allows us to simplify the call like this:

    1: <%: Html.RadioButtonList(m => m.Name, "Foo", "Bar" ) %>

    In that example, the values for the radio button are hard-coded and passed in directly. But if you had a view model that contained a property for the collection of items, you could call the second overload like this:

    1: <%: Html.RadioButtonList(m => m.Name, Model.FooBarList ) %>

    The editor templates introduced in MVC 2 definitely allow for much more flexible views/editors than previously available. By knowing about the features you have available to you with the TemplateInfo class, you can take these concepts and customize your editors with extreme flexibility and re-usability.

    Read the article

  • Parallelism in .NET – Part 1, Decomposition

    - by Reed
    The first step in designing any parallelized system is Decomposition. Decomposition is nothing more than taking a problem space and breaking it into discrete parts. When we want to work in parallel, we need to have at least two separate things that we are trying to run. We do this by taking our problem and decomposing it into parts.

    There are two common abstractions that are useful when discussing parallel decomposition: Data Decomposition and Task Decomposition. These two abstractions allow us to think about our problem in a way that helps lead us to correct decision making in terms of the algorithms we’ll use to parallelize our routine.

    To start, I will make a couple of minor points. I’d like to stress that Decomposition has nothing to do with specific algorithms or techniques. It’s about how you approach and think about the problem, not how you solve the problem using a specific tool, technique, or library. Decomposing the problem is about constructing the appropriate mental model: once this is done, you can choose the appropriate design and tools, which is a subject for future posts. Decomposition, being unrelated to tools or specific techniques, is not specific to .NET in any way. This should be the first step to parallelizing a problem, and is valid using any framework, language, or toolset. However, this gives us a starting point – without a proper understanding of decomposition, it is difficult to understand the proper usage of specific classes and tools within the .NET framework.

    Data Decomposition is often the simpler abstraction to use when trying to parallelize a routine. In order to decompose our problem domain by data, we take our entire set of data and break it into smaller, discrete portions, or chunks. We then work on each chunk in the data set in parallel. This is particularly useful if we can process each element of data independently of the rest of the data. In a situation like this, there are some wonderfully simple techniques we can use to take advantage of our data. By decomposing our domain by data, we can very simply parallelize our routines. In general, we, as developers, should always be searching for data that can be decomposed.

    Finding data to decompose is fairly simple, in many instances. Data decomposition is typically used with collections of data. Any time you have a collection of items, and you’re going to perform work on or with each of the items, you potentially have a situation where parallelism can be exploited. This is fairly easy to do in practice: look for iteration statements in your code, such as for and foreach. Granted, not every for loop is a candidate to be parallelized. If the collection is being modified as it’s iterated, or the processing of elements depends on other elements, the iteration block may need to be processed in serial. However, if this is not the case, data decomposition may be possible.

    Let’s look at one example of how we might use data decomposition. Suppose we were working with an image, and we were applying a simple contrast stretching filter. When we go to apply the filter, once we know the minimum and maximum values, we can apply this to each pixel independently of the other pixels. This means that we can easily decompose this problem based on data – we will do the same operation, in parallel, on individual chunks of data (each pixel).

    Task Decomposition, on the other hand, is focused on the individual tasks that need to be performed instead of focusing on the data. In order to decompose our problem domain by tasks, we need to think about our algorithm in terms of discrete operations, or tasks, which can then later be parallelized.

    Task decomposition, in practice, can be a bit trickier than data decomposition. Here, we need to look at what our algorithm actually does, and how it performs its actions. Once we have all of the basic steps taken into account, we can try to analyze them and determine whether there are any constraints in terms of shared data or ordering. There are no simple things to look for in terms of finding tasks we can decompose for parallelism; every algorithm is unique in terms of its tasks, so every algorithm will have unique opportunities for task decomposition.

    For example, say we want our software to perform some customized actions on startup, prior to showing our main screen. Perhaps we want to check for proper licensing, notify the user if the license is not valid, and also check for updates to the program. Once we verify the license, and that there are no updates, we’ll start normally. In this case, we can decompose this problem into tasks – we have a few tasks, but there are at least two discrete, independent tasks (check licensing, check for updates) which we can perform in parallel. Once those are completed, we will continue on with our other tasks.

    One final note – Data Decomposition and Task Decomposition are not mutually exclusive. Often, you’ll mix the two approaches while trying to parallelize a single routine. It’s possible to decompose your problem based on data, then further decompose the processing of each element of data based on tasks. This just provides a framework for thinking about our algorithms, and for discussing the problem.
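
    To make the two abstractions concrete, here is a minimal C# sketch using the standard .NET 4 primitives (the pixel buffer, the stretch bounds, and the startup checks are illustrative placeholders):

    using System;
    using System.Threading.Tasks;

    class DecompositionSketch
    {
        static void Main()
        {
            // Data decomposition: every pixel can be contrast-stretched
            // independently, so we let the runtime partition the array into chunks.
            byte[] pixels = new byte[1024];
            byte min = 10, max = 240;
            Parallel.For(0, pixels.Length, i =>
            {
                int stretched = (pixels[i] - min) * 255 / (max - min);
                pixels[i] = (byte)Math.Min(255, Math.Max(0, stretched));
            });

            // Task decomposition: two independent startup operations run in
            // parallel; we continue only once both have completed.
            var license = Task.Factory.StartNew(() => CheckLicense());
            var updates = Task.Factory.StartNew(() => CheckForUpdates());
            Task.WaitAll(license, updates);
        }

        static bool CheckLicense() { return true; }       // placeholder
        static bool CheckForUpdates() { return false; }   // placeholder
    }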

    Read the article

  • SQL SERVER – Disabled Index and Update Statistics

    - by pinaldave
    Have you ever come across a situation where a conversation never gets over, and it continues even though the original point of discussion has passed? I am facing the same situation in the case of Disabled Indexes. Here are the links to the original conversations:

    - SQL SERVER – Disable Clustered Index and Data Insert – a reader had an issue here with a Disabled Index
    - SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index – a reader asked about the effect of Rebuilding Indexes

    The same reader asked me today – “I understood what the disabled indexes do; what is their effect on statistics? Is it true that even though indexes are disabled, they continue updating the statistics?”

    The answer is very interesting:

    - If you have a disabled clustered index, you will not be able to update the statistics at all, for any index.
    - If you have an enabled clustered index and a disabled nonclustered index, when you update the statistics of the table it automatically updates the statistics of ALL the indexes (both disabled and enabled) on the table.

    If you are not satisfied with the answer, let us go over a simple example. I have written the necessary comments in the code itself to give a clear idea.

    USE tempdb
    GO
    -- Drop Table if Exists
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
    DROP TABLE [dbo].[TableName]
    GO
    -- Create Table
    CREATE TABLE [dbo].[TableName](
    [ID] [int] NOT NULL,
    [FirstCol] [varchar](50) NULL
    )
    GO
    -- Insert Some data
    INSERT INTO TableName
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Third'
    UNION ALL
    SELECT 4, 'Fourth'
    UNION ALL
    SELECT 5, 'Five'
    GO
    -- Create Clustered Index
    ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
    GO
    -- Create Nonclustered Index
    CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
    GO
    -- Check that all the indexes are enabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO

    Now let us update the statistics of the table and check the statistics update date.

    -- Update the stats of table
    UPDATE STATISTICS TableName WITH FULLSCAN
    GO
    -- Check Statistics Last Updated Datetime
    SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
    FROM sys.indexes
    WHERE OBJECT_ID = OBJECT_ID('TableName')
    GO

    Now let us disable the indexes and check that they are disabled using sys.indexes.

    -- Disable Indexes
    -- Disable Nonclustered Index
    ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Disable Clustered Index
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Check that all the indexes are disabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO

    Let us try to update the statistics of the table.

    -- Update the stats of table
    UPDATE STATISTICS TableName WITH FULLSCAN
    GO
    /*
    -- Above operation should throw the following error
    Msg 1974, Level 16, State 1, Line 1
    Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
    */

    When we try to update the statistics, it throws an error because the clustered index is disabled. Now let us rebuild the clustered index only and attempt to update the statistics of the table right after that.

    -- Now let us rebuild clustered index only
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
    GO
    -- Check the status of all the indexes
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    -- Check Statistics Last Updated Datetime
    SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
    FROM sys.indexes
    WHERE OBJECT_ID = OBJECT_ID('TableName')
    GO
    -- Update the stats of table
    UPDATE STATISTICS TableName WITH FULLSCAN
    GO
    -- Check Statistics Last Updated Datetime
    SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
    FROM sys.indexes
    WHERE OBJECT_ID = OBJECT_ID('TableName')
    GO

    We can clearly see that even though the nonclustered index is disabled, its statistics are also updated. If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time statistics are updated for the system, the statistics for disabled indexes are also updated.

    -- Clean up
    DROP TABLE [TableName]
    GO

    The complete script is given below for easy reference.

    USE tempdb
    GO
    -- Drop Table if Exists
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
    DROP TABLE [dbo].[TableName]
    GO
    -- Create Table
    CREATE TABLE [dbo].[TableName](
    [ID] [int] NOT NULL,
    [FirstCol] [varchar](50) NULL
    )
    GO
    -- Insert Some data
    INSERT INTO TableName
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Third'
    UNION ALL
    SELECT 4, 'Fourth'
    UNION ALL
    SELECT 5, 'Five'
    GO
    -- Create Clustered Index
    ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
    GO
    -- Create Nonclustered Index
    CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
    GO
    -- Check that all the indexes are enabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    -- Update the stats of table
    UPDATE STATISTICS TableName WITH FULLSCAN
    GO
    -- Check Statistics Last Updated Datetime
    SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
    FROM sys.indexes
    WHERE OBJECT_ID = OBJECT_ID('TableName')
    GO
    -- Disable Indexes
    -- Disable Nonclustered Index
    ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Disable Clustered Index
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Check that all the indexes are disabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    -- Update the stats of table
    UPDATE STATISTICS TableName WITH FULLSCAN
    GO
    /*
    -- Above operation should throw the following error
    Msg 1974, Level 16, State 1, Line 1
    Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
    */
    -- Now let us rebuild clustered index only
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
    GO
    -- Check the status of all the indexes
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    -- Check Statistics Last Updated Datetime
    SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
    FROM sys.indexes
    WHERE OBJECT_ID = OBJECT_ID('TableName')
    GO
    -- Update the stats of table
    UPDATE STATISTICS TableName WITH FULLSCAN
    GO
    -- Check Statistics Last Updated Datetime
    SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
    FROM sys.indexes
    WHERE OBJECT_ID = OBJECT_ID('TableName')
    GO
    -- Clean up
    DROP TABLE [TableName]
    GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • The Dark Knight and Team Fortress 2 Mashup Movie Trailer [Video]

    - by Asian Angel
    If you are a Batman fan then you will find this video mashup interesting to watch. YouTube user TrueOneMoreUser has created a unique Dark Knight trailer using characters from Team Fortress 2 and select scenes from The Dark Knight movie itself.

    Team Fortress 2 – The Demo Knight [via Geeks are Sexy]

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When migrating your application onto Azure, one of the biggest concerns is the external files. Previously, we knew exactly which machine and folder our application (website or web service) was located in, so we could use MapPath or some other method to read and write external files, for example images, text files, or XML files. But things change when we deploy on Azure. Azure is not a server or a single machine; it's a set of virtual server machines running under the Azure OS. Even worse, your application might be moved between these machines, so it's impossible to read or write external files on Azure in the traditional way. To resolve this issue, Windows Azure provides another storage service for us: Blob.

    Unlike the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are composed of blocks, each of which is identified by a block ID; each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages. The maximum size for a page blob is 1 TB.

    In the managed library, the Azure SDK allows us to communicate with blobs through the classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob, and CloudPageBlob. Similar to the table service managed library, the CloudBlobClient allows us to reach the blob service by passing our storage account information, and it is also responsible for creating the blob container if it does not exist. Then, from the CloudBlobContainer, we can save or load block blobs and page blobs through the CloudBlockBlob and CloudPageBlob classes.

    Let's improve our example from the previous posts and add a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with two parameters: email and image. I created the storage account from the config file, and I also added validation to ensure that the email passed in is valid.

    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var accountContext = new DynamicDataContext<Account>(storageAccount);

    // validation
    var accountNumber = accountContext.Load()
        .Where(a => a.Email == email)
        .ToList()
        .Count;
    if (accountNumber <= 0)
    {
        throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
    }

    Then there are three steps for saving the image into the blob service. First, as with the table service, I created the container with a unique name and create it if it does not exist.

    // create the blob container for account logos if not exist
    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    container.CreateIfNotExist();

    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.

    // configure blob container for public access
    BlobContainerPermissions permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);

    At the end I build the blob resource name from the email and a Guid, then save the image to the block blob by using the UploadByteArray method. Finally, I return the URL of this blob back to the client side.

    // save the blob into the blob service
    string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    blob.UploadByteArray(image);

    return blob.Uri.ToString();

    Let's update the client-side application a bit and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If everything is OK it will show the URL of the blob on the server side, so that we can see it through the web browser. Then we can see the logo I've just uploaded through the URL here.

    You may notice that the blob URL is based on the container name and the blob's unique name. In the Azure SDK documentation there's a page on the rules for naming them, but I think the simple rule would be: they must be valid as a URL address. So you cannot name the container with a dot or slash, as that will break the ADO.NET Data Service routing rule. For example, if you named the blob container Account.Logo, it would throw an exception that says 400 Bad Request.

    Summary

    In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not support the file system, we have to migrate our code for reading and writing files to the blob service before deploying to Azure. To reduce this effort, Microsoft provided a new approach named Drive, which allows us to read and write NTFS files just like what we did before. It's built on top of the blob service but is more appropriate for file access. I will discuss more about it in the next post.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • VMware releases VMware Go Pro, its cloud virtualization solution for SMBs, a gateway to VMware vSphere

    VMware has just announced the release of its cloud virtualization solution for small and midsized businesses: VMware Go Pro. VMware Go Pro's main objective is to ease the virtualization efforts of small and medium-sized companies by offering them an ergonomic tool that simplifies the management of information systems and aims to improve productivity. The tool includes a single central console to federate and simplify the administration of physical and virtual infrastructures. A console that makes it possible to "free teams from recurring tasks so that they can devote…

    Read the article

  • Yelp Like Adjective Rating System

    - by clifgray
    I am building a website that has users list their outdoor adventures (skydiving, surfing, base jumping, etc.) and lets other people comment on them. I want to have a rating system like Yelp's, which has "Useful, Funny, or Cool", but with different adjectives. I have thought of a few, such as Daring, Adventurous, and Unique, but I wanted to get some feedback on what a few other good adjectives would be. Also, does anyone have experience with other such systems, or advice for better systems? Primarily I just want the user to have somewhat more descriptive voting options than up and down or 1 through 5.

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM protects the cached rights on the local machine; if that cache has become invalid in any way, this error is thrown.

    Offline rights and security

    First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution; it demonstrates our methodology of ensuring that solutions address both security and usability, and puts the balance of these two in your control.

    Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed, however, and can be as low as under an hour or as long as years. It is also possible to switch off the ability to access content offline, which can be useful when content is very sensitive and requires a tight leash.

    So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the user's rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops, and allows users' rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution.

    - A user doesn't have to make any decision when going offline; they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work, because people forget to do this and will therefore be unable to legitimately access their content offline.
    - If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period; the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods: because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place.
    - If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important; consider the scenario where a senior executive downloads all their email but doesn't open any of it, disconnects the laptop, and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized, their experience is a good one and the document opens.

    More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams.

    So what about problems accessing the offline rights database?

    So onto the core issue... when these rights are cached to your machine, they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Well, what you do not want to happen is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop, and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache.

    So what happens if these files are corrupted in some way? That's when you will see the error, Unable to Connect to Offline database. The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database.

    How do you solve the problem?

    The resolution, however, is simple. You just delete all of the offline database files on the machine, and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached to more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one, because the server still thinks the old cache is valid. So be aware: it is good practice to increase the server limit from the default of 1 to, say, 3 or 4. This is done using the Enterprise Manager instance of IRM.

    So to delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat

    Note that this uses pskill to stop irmBackground.exe from running. This is part of the IRM Desktop and holds open a lock to the offline database. Either kill this from Task Manager or use pskill as part of the script.
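
    For those who cannot use the .bat file, here is a rough C# equivalent of the procedure it performs (a sketch under stated assumptions: the offline database folder varies by installation, so it is passed as a command-line argument rather than hard-coded here):

    using System;
    using System.Diagnostics;
    using System.IO;

    class DeleteOfflineDbs
    {
        static void Main(string[] args)
        {
            // Stop irmBackground.exe, which holds a lock on the offline database
            // (the .bat file uses pskill for the same purpose).
            foreach (var p in Process.GetProcessesByName("irmBackground"))
            {
                p.Kill();
                p.WaitForExit();
            }

            // Delete the cached offline database files; they are recreated with
            // working encryption the next time the Oracle IRM Desktop starts.
            string offlineDbFolder = args[0];
            foreach (var file in Directory.GetFiles(offlineDbFolder))
            {
                File.Delete(file);
            }
        }
    }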

    Read the article

  • Create Awesome Map-Based Wallpapers for Your Desktop with ‘Map –> Image’

    - by Asian Angel
    Are you tired of using the same old types of wallpapers on your desktop? Then add something fresh and unique to your desktop with custom-created map wallpapers from ‘Map –> Image’. When you first visit the website it will show the default location of San Francisco (home of the developers). To get started, simply enter your location in the search blank in the upper left corner and click the Go Button. Your chosen location will appear in a basic black and white format as shown here.

    Read the article

  • Kile doesn’t bring Okular to front

    - by Tobi
    Like @user49890 in Emacs-AUCTeX-Okular, I have a similar problem with Kile and Okular. I configured both for SyncTeX, and it works fine when Okular is closed and Kile quick-builds the PDF; Okular is then opened in front of Kile. But if Okular is already open with its window behind Kile’s window, it stays hidden when Kile says [ForwardPDF] test.pdf (okular). Is this a bug, or did I make a wrong setting? (I’m using the predefined configurations of Okular and Kile, though.)

    Edit: The problem remains whether I use Okular’s --unique option or not. And using ViewPDF instead of ForwardPDF makes no difference either.

    Read the article

  • Week 16: Integrate This - Introducing Oracle Enterprise Manager 11g

    - by sandra.haan
    Spring in New York City is a wonderful time of year, but if you're out walking around in Central Park it means you missed the most exciting thing happening in the city today: Oracle's announcement of the launch of Enterprise Manager 11g at the Guggenheim. You can catch up on what you missed here and listen in as Judson talks about the partner opportunity with Enterprise Manager 11g. Learn how Oracle Enterprise Manager 11g can help you drive agility and efficiency through its unique, integrated IT management capabilities, and check out the Enterprise Manager Knowledge Zone to get engaged with OPN. Learn more and get the full scoop from today's press release. Until the next time, The OPN Communications Team

    Read the article

  • All articles with infinite scroll, or one per page for usability and SEO?

    - by Rana Prathap
    Imagine a hypothetical website where all user-generated content consists of single-page articles. Here are some ways of structuring such a website:

    - Putting more than one article on one page and creating an infinite scroll or a pager. The fact that the articles may not be on the same topic makes this a less search-engine-friendly way.
    - Giving a unique page to each article and putting up a list of links/titles on the front page, with or without a teaser and a thumbnail. This might make the home page look unprofessional.
    - Keeping the front page clean, without direct links to internal pages.

    Which method would be a search-engine-friendly and user-friendly way to do this?

    Read the article

  • SQLBits - Unicode Porn

    - by Most Valuable Yak (Rob Volk)
    We've just finished up a fantastic event at SQLBits X in London!  If you've never been to SQLBits and you can make it to the UK, I highly recommend it.  If you didn't attend, here's what you missed. Meanwhile, for those who attended the Lightning Talk sessions and were disappointed that I ran out of time, here's the last part that you would have seen:

    /*    How to Lose Friends and Irritate People...With Unicode!
          Rob Volk
          SQLBits X - London - March 31, 2012 */

    -- some sexy SQL
    DECLARE @oohbaby TABLE(i INT NOT NULL UNIQUE, uni_char AS NCHAR(i), hex AS CAST(i AS BINARY(2)))

    INSERT @oohbaby VALUES(664),(1022),(1023),(1120),(1150),(8857),(11609),(42420),(42427)

    -- change results font to larger size, some only work in grid font
    SELECT * FROM @oohbaby

    SELECT NCHAR(1022) + NCHAR(1023) AS Page3Girl

    It's probably better that you run this yourself, in the privacy of your own home/office, you know *wink* *wink* *nudge* *nudge* *say no more*

    Read the article

  • Introduction to Oracle’s New StorageTek SL150 Modular Tape Library

    - by Cinzia Mascanzoni
    Join the product announcement webcast on Thursday, July 12, 2012 at 3pm CET (2pm GMT). This webcast will help you understand Oracle's new StorageTek SL150 Modular Tape Library, the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley from Tape Product Management will introduce you to the latest addition to the Oracle Tape Storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points, and a competitive overview of StorageTek. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. Register NOW!

    Read the article

  • Oracle Supply Chain builds momentum in the Press

    - by [email protected]
    SCM coverage in early '10 was dominated by major product announcements. The release of Oracle Global Trade Management and Oracle Transportation Management 6.1 garnered ten unique articles. SearchOracle.com and Supply Chain Management Review primarily focused on the compliance aspect of the announcement, while Managing Automation concentrated on the new trade management capabilities. Elsewhere, there was a lot of interest in the new 'Green Dashboard', as reported by Modern Materials Handling, Environmental Leader, and TMCnet. Other SCM news included the announced integration of Oracle Hyperion Planning and Demantra S&OP, as reported by Database Trends and Applications and Treasury & Risk.

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >