Search Results

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark uses upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.

    The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80% on data loading, by 32% on default aggregation, and by 2x on usage-based aggregation. The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database. The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads. The Sun Storage F5100 Flash Array provides more than a 20% improvement out of the box compared to a mid-size fiber channel disk array for default aggregation and usage-based aggregation. The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.

    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

    System                                 Data Size (millions of items)   Database Load (min)   Default Aggregation (min)   Usage Based Aggregation (min)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz      1000                            149                   398*                        55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz    1000                            269                   526                         115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz     400                             120                   448                         18

    * 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8.

    Configuration Summary

    Hardware configuration: 1 x SPARC T4-2 with 2 x 2.85 GHz SPARC T4 processors, 128 GB memory, and 2 x 300 GB 10000 RPM SAS internal disks.

    Storage configuration: 1 x Sun Storage F5100 Flash Array (40 x 24 GB flash modules) on a SAS HBA with 2 SAS channels; data storage scheme striped (RAID 0) using Oracle Solaris ZFS.

    Software configuration: Oracle Solaris 10 8/11, Oracle Essbase Installer v 11.1.2.2.100, Oracle Essbase Client v 11.1.2.2.100, Oracle Essbase v 11.1.2.2.100, Oracle Essbase Administration Services (64-bit), Oracle Database 11g Release 2 (11.2.0.3), and HP's Mercury Interactive QuickTest Professional 9.5.0.

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce the benchmark results. The benchmark test results include: Database Load, the time elapsed to build a database including outline and data load; Default Aggregation, the time elapsed to build the aggregation; and Usage Based Aggregation, the time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.

    Summary of the data used for this benchmark: 40 flat files, each 1.2 GB in size (49.4 GB in total); 10 million rows per file, 1 billion rows total; 28 columns of data per row; a database outline with 15 dimensions (five of them attribute dimensions); a Customer dimension with 13.3 million members; and 3 rule files.

    Key Points and Best Practices

    The Sun Storage F5100 Flash Array was used to accelerate application performance. Setting the data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%. The factors influencing aggregation materialization performance are the aggregate storage cache and the number of threads (CALCPARALLEL) used for parallel view materialization; the optimal values for this workload on the SPARC T4-2 server were an aggregate storage cache of 32 GB and CALCPARALLEL set to 16.

    See Also: Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com); Oracle Essbase (oracle.com, OTN); SPARC T4-2 Server (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

  • Announcing Oracle Knowledge 8.5: Even Superheroes Need Upgrades

    - by Richard Lefebvre
    It's no secret that we like Iron Man here at Oracle. We've certainly got stuff in common: one of the world's largest technology companies and one of the world's strongest technology-driven superheroes. If you've seen the recent Iron Man movies, you might have even noticed some of our servers sitting in Tony Stark's lab. Heck, our CEO made a cameo appearance in one of the movies. Yeah, we're fans. Especially as Iron Man is a regular guy with some amazing technology – like us. But like all great things, even superheroes need upgrades, whether it's their suit, their car or their space station.

    Oracle certainly has its share of advanced technology. For example, Oracle acquired InQuira in 2011 after years of watching the company advance the science of Knowledge Management. And it was some extremely super technology. At that time, Forrester's Kate Leggett wrote about it in 'Standalone Knowledge Management Is Dead With Oracle's Announcement To Acquire InQuira', saying 'Knowledge, accessible via web self-service or agent UIs, is a critical customer service component for industries fielding repetitive questions about policies, procedures, products, and solutions.' One short sentence that amounts to a very tall order. Since the acquisition our KM scientists have been hard at work in their labs.

    Today Oracle announced its first major knowledge management release since its acquisition of InQuira: Oracle Knowledge 8.5. We've put a massively upgraded supersuit on our KM solution because we still have bad guys to fight. And we are very proud to say that we went way beyond our original plans. So what, exactly, did we do in Oracle Knowledge 8.5? We did what any high-tech super-scientist would do. We made Oracle Knowledge smarter, stronger and faster.

    First, we gave Oracle Knowledge a stronger heart: certification on Oracle technologies, including Oracle WebLogic Server, Oracle Business Intelligence, Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud, plus huge scaling and performance improvements.

    Then we gave it a better reach: improved iConnect functionality that delivers contextualized knowledge directly into CRM applications; better content acquisition support across disparate sources; enhanced language support, including natural language search support for 16 languages; and enhanced keyword search for 23 authoring languages, as well as enhanced out-of-the-box industry ontologies covering 14 languages.

    And finally we made Oracle Knowledge ridiculously smarter: improved Natural Language Search and a new Contextual Answer Delivery that understands the true intent of each inquiry to deliver the best possible answers; AnswerFlow for Guided Navigation & Answer Delivery, a new application for guided troubleshooting and answer delivery; Knowledge Analytics standardized on Oracle's Business Intelligence Enterprise Edition; Knowledge Analytics Dashboards that optimize search and content creation through targeted, actionable insights; and a new three-level language model ("Global - Language - Locale") that provides an improved search experience for organizations with a global footprint.

    We believe that Oracle Knowledge 8.5 is the most sophisticated KM solution in existence today and we've worked very hard to help it fulfill the promise of KM: empowering customers and employees with deep insights wherever they need them. We hope you agree it's a suit worth wearing.

    We are continuing to invest in Knowledge Management as it continues to be especially relevant today with the enterprise push for peer collaboration, crowd-sourced wisdom, agile innovation, social interaction channels, applied real-time analytics, and personalization. In fact, we believe that Knowledge Management is a critical part of the Customer Experience portfolio for success. From empowering employees, to empowering customers, to gaining insights from interactions across all channels, businesses today cannot efficiently scale their efforts, strengthen their customer relationships or achieve their growth goals without a solid Knowledge Management foundation to build from. And like every good superhero saga, we're not even close to being finished. Next we are taking Oracle Knowledge into the Cloud. Yes, we're thinking what you're thinking: ROCKET BOOTS! Stay tuned for the next adventure…

    By Nav Chakravarti, Vice-President, Product Management, CRM Knowledge, and previously the CTO of InQuira, a knowledge management company acquired by Oracle in 2011.

    Read the article

  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change.

    One of the methods that can be used in this situation is to use an older database backup that has the records or database object structure you want to revert to. For this method, you have to have an adequate full database backup, and a tool that will help you with comparison and synchronization is preferred.

    In this article, we will focus on another method: rolling back the changes. This can be done by using an option in SQL Server Management Studio, T-SQL, or ApexSQL Log. The first two solutions have been described in this article. The disadvantages of these methods are that you have to know when exactly the change you want to revert happened, and that all transactions executed on the database in that time range are rolled back – the ones you want to undo and the ones you don't.

    How to easily roll back SQL Server database changes using ApexSQL Log

    The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don't need to worry about all the other database changes that you don't want to roll back. ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that enable you to easily roll back only specific data changes.

    First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves the best results when the database is in the full recovery model and you have a chain of subsequent transaction log backups, back to the moment when the change occurred. In this example, we will use only the online transaction log.

    In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by the operation type and table name, as well as by transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description. You can select only the tables affected by the changes you want to roll back. However, if you're not certain which tables were affected, you can leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back.

    When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions. The results contain information about the transaction, as well as who made it and when.

    For UPDATEs, ApexSQL Log shows both old and new values, so you can easily see what has happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the DELETE statement selected in the screenshot above, the undo script is:

    INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
    VALUES (297, 8010, '20050901 00:00:00.000')

    When it comes to rolling back database changes, ApexSQL Log has a big advantage, as it rolls back only specific transactions, while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: ApexSQL
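    To make the trade-off above concrete, here is a minimal sketch of the point-in-time approach that the first two options boil down to. The database name and backup paths are hypothetical, and it assumes a full backup plus an unbroken log backup chain; notice that it rolls the entire database back to a chosen moment, which is exactly why every transaction after that point is lost, not just the one you want to undo.

    -- Illustrative only: restore the whole database to a point in time.
    USE master;

    RESTORE DATABASE SalesDb
        FROM DISK = N'D:\Backup\SalesDb_full.bak'
        WITH NORECOVERY, REPLACE;

    RESTORE LOG SalesDb
        FROM DISK = N'D:\Backup\SalesDb_log.trn'
        WITH STOPAT = '2013-11-20 14:30:00',  -- the moment just before the unwanted change
             RECOVERY;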

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my ruby and rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don't understand the rails built-in testing support but I will get there.

    Interesting Things I Learned Tonight

    RubyMine IDE
    Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don't. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in Visual Studio and Notepad++ if you place the cursor at the start of a line and press the left arrow, the cursor is sent to the end of the previous line. In RubyMine nothing happens.

    Haml
    Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml.

    Multiple CSS Classes
    To define a div with more than one css class, haml lets you chain them together with a '.', such as:

    .span-6.search_result
      contents of the div go here

    Indent Consistency
    I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from Notepad++ to RubyMine my haml views ended up with some tab indenting and some space indenting. For the view to render, all of the indents within a view must be consistent.

    Sorting Arrays
    I guessed that ruby would be able to sort an array alphabetically by a property of the elements, so my first attempt was:

    Application.all.sort {|app| app.name}

    which does not work. You have to supply a comparer (much like .NET). The correct sort is:

    Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}

    MongoMapper Find by Id
    Since document databases are just fancy key-value stores, it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems the MongoMapper author did not bother to document it. To search by id, simply pass the id to the find method:

    Application.find('4c19e8facfbfb01794000002')

    Rails And CoffeeScript
    I am a big fan of CoffeeScript, so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic's strategy. Unfortunately, I did not get past step 1: install Node.js. I am doing my development on Windows and node is unix only. I looked around for a solution but eventually had to concede defeat… for now.

    Quicksearch
    The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jquery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:

    $('#app_search').quicksearch('.search_result');

    Summary
    I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.
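    A small aside on the sorting snippet above: when the goal is simply "order by a property, case-insensitively", Ruby's sort_by is usually a tidier fit than supplying a comparison block. This is only an illustrative sketch reusing the post's hypothetical Application model:

    # Same result as the comparer version, expressed with sort_by
    sorted = Application.all.sort_by { |app| app.name.downcase }

    # Descending order, if that is ever needed
    sorted_desc = Application.all.sort_by { |app| app.name.downcase }.reverse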

    Read the article

  • Building on someone else's DefaultButton Silverlight work...

    - by KyleBurns
    This week I was handed a "simple" requirement – have a search screen execute its search when the user presses the Enter key instead of having to move hands from keyboard to mouse and click Search. That is a reasonable request that has been met for years both in Windows and web apps. I did a quick scan for code to pilfer and found Patrick Cauldwell's blog posting "A 'Default Button' In Silverlight". This posting was a great start and I'm glad that the basic work had been done for me, but I ran into one issue – when using bound textboxes (I'm a die-hard MVVM enthusiast when it comes to Silverlight development), the search was being executed before the textbox I was in when the Enter key was pressed had updated its bindings. With a little bit of reflection work, I think I have found a good generic solution that builds upon Patrick's to make it more binding-friendly. Also, I wanted to set the DefaultButton at a higher level than on each TextBox (or other control for that matter), so mine is intended to be set somewhere such as the LayoutRoot or other high-level control and will apply to all controls beneath it in the control tree. I haven't tested this on controls that treat the Enter key specially themselves.

    The real change from Patrick's solution here is that in the KeyUp event, I grab the source of the KeyUp event (in my case the textbox containing search criteria) and loop through the static fields on the element's type looking for DependencyProperty instances. When I find a DependencyProperty, I grab the value and query for bindings. Each time I find a binding, UpdateSource is called to make sure anything bound to any property of the field has the opportunity to update before the action represented by the DefaultButton is executed.

    Here's the code (lightly tidied up: the accessor signatures now use the Button type that the attached property is registered with, and the field loop is a plain foreach):

    public class DefaultButtonService
    {
        public static DependencyProperty DefaultButtonProperty =
            DependencyProperty.RegisterAttached("DefaultButton", typeof(Button), typeof(DefaultButtonService),
                new PropertyMetadata(null, DefaultButtonChanged));

        private static void DefaultButtonChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var uiElement = d as UIElement;
            var button = e.NewValue as Button;
            if (uiElement != null && button != null)
            {
                uiElement.KeyUp += (sender, arg) =>
                {
                    if (arg.Key == Key.Enter)
                    {
                        // Push any pending binding updates from the control that raised the event
                        var element = arg.OriginalSource as FrameworkElement;
                        if (element != null)
                        {
                            UpdateBindings(element);
                        }

                        if (button.IsEnabled)
                        {
                            button.Focus();
                            var peer = new ButtonAutomationPeer(button);
                            var invokeProv = peer.GetPattern(PatternInterface.Invoke) as IInvokeProvider;
                            if (invokeProv != null)
                                invokeProv.Invoke();
                            arg.Handled = true;
                        }
                    }
                };
            }
        }

        public static Button GetDefaultButton(UIElement obj)
        {
            return (Button)obj.GetValue(DefaultButtonProperty);
        }

        public static void SetDefaultButton(DependencyObject obj, Button button)
        {
            obj.SetValue(DefaultButtonProperty, button);
        }

        public static void UpdateBindings(FrameworkElement element)
        {
            foreach (var field in element.GetType().GetFields(BindingFlags.Public | BindingFlags.Static))
            {
                if (field.FieldType.IsAssignableFrom(typeof(DependencyProperty)))
                {
                    try
                    {
                        var dp = field.GetValue(null) as DependencyProperty;
                        if (dp != null)
                        {
                            var binding = element.GetBindingExpression(dp);
                            if (binding != null)
                            {
                                binding.UpdateSource();
                            }
                        }
                    }
                    // ReSharper disable EmptyGeneralCatchClause
                    catch (Exception)
                    // ReSharper restore EmptyGeneralCatchClause
                    {
                        // swallow exceptions
                    }
                }
            }
        }
    }
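    The post does not show how the attached property gets wired up, so here is a minimal, hypothetical usage sketch in code-behind. The control names (LayoutRoot, SearchButton) and the SearchPage class are assumptions for illustration, not part of the original post:

    // Hypothetical wiring: pressing Enter anywhere under LayoutRoot now "clicks"
    // SearchButton, after pending TextBox bindings have been pushed to the view model.
    public partial class SearchPage : UserControl
    {
        public SearchPage()
        {
            InitializeComponent();
            DefaultButtonService.SetDefaultButton(LayoutRoot, SearchButton);
        }
    }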

    Read the article

  • Oracle Announces Release of PeopleSoft HCM 9.1 Feature Pack 2

    - by Jay Zuckert
    Big things sometimes come in small packages. Today Oracle announced the availability of PeopleSoft HCM 9.1 Feature Pack 2, which delivers a new HR self-service user experience that fundamentally changes the way managers and employees interact with the HCM system. Earlier this year we reviewed a number of new concept designs with our Customer Advisory Boards. With the accelerated feature pack development cycle we have adopted, these innovations are now available to all 9.1 customers without the need for an upgrade. There are no new products that need to be licensed for the capabilities below. For more details on Feature Pack 2, please see the Oracle press release.

    Included in Feature Pack 2 is a new search-based, menu-free navigation that allows managers to search for employees by name and take actions directly from the secure search results. For example, a manager can now simply type in part of an employee's first or last name and receive meaningful results from documents related to performance, compensation, learning, recruiting, career planning and more. Delivered actions can be initiated directly from these search results, and the actions are securely tied to HCM security and user role.

    The feature pack also includes new pages that will enable managers to be more productive by aggregating key employee data into a single page. The new Manager Dashboard and Talent Summary provide a consolidated view of data related to a manager's team and individual team members, respectively. The Manager Dashboard displays information relevant to their direct reports, including team learning, objective alignment, alerts, and pending approvals requiring their attention. The Talent Summary provides managers with an aggregated view of talent management-related data for an individual employee, including performance history, salary history, succession options, total rewards, and competencies. The information displayed in both the Manager Dashboard and Talent Summary is configurable by system administrators and can be personalized by each of your managers.

    Other Feature Pack 2 enhancements allow organizations to administer Matrix or Dotted-Line Relationship Management, which addresses the challenge of tracking and maintaining project-based organizations that cut across the enterprise and geographic regions. From within the Company Directory and Org Viewer organization charts, managers now have access to manager self-service transactions from related actions. More than 70 manager and employee self-service transactions have been tied into the related action framework accessible from Org Viewer, Manager Dashboard, Talent Summary and Secure Enterprise Search (SES) results. In addition to making it easier to access manager self-service transactions, the feature pack delivers streamlined transaction pages, making everyday tasks such as promoting an employee faster and more efficient.

    With the delivery of PeopleSoft HCM 9.1 Feature Pack 2, Oracle continues to deliver on its commitment to our PeopleSoft customers. With this feature pack, HCM 9.1 customers will be able to deploy the newest functionality quickly, without a major release upgrade, and realize added value from their existing PeopleSoft investment. For customers newly deploying 9.1, a new download with all of Feature Pack 2 will be available early next year. This will also include recertified upgrade paths from 8.8, 8.9 and 9.0 for customers in the upgrade process.

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?

    For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine.

    My code might then look something like the following:

    class Transformer {
        void Process(InputSource input) {
            try {
                var inRecords = _parser.Parse(input.Stream);
                var outRecords = _pipeline.Transform(inRecords);
            } catch (Exception ex) {
                var inner = new ProcessException(input, ex);
                _logger.Error("Unable to parse source " + input.Name, inner);
                throw inner;
            }
        }
    }

    class Pipeline {
        IEnumerable<Result> Transform(IEnumerable<Record> records) {
            // NOTE: no try/catch as I have no useful information to provide
            // at this point in the process
            var results = _algorithm.Process(records);
            // examine and do useful things with results
            return results;
        }
    }

    class Algorithm {
        IEnumerable<Result> Process(IEnumerable<Record> records) {
            var results = new List<Result>();
            foreach (var engine in Engines) {
                foreach (var record in records) {
                    try {
                        engine.Process(record);
                    } catch (Exception ex) {
                        var inner = new EngineProcessingException(engine, record, ex);
                        _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                        throw inner;
                    }
                }
            }
        }
    }

    class Engine {
        Result Process(Record record) {
            for (int i = 0; i < record.SubRecords.Count; ++i) {
                try {
                    Validate(record.SubRecords[i]);
                } catch (Exception ex) {
                    var inner = new RecordValidationException(record, i, ex);
                    _logger.Error("Validation of subrecord {0} failed for record {1}", i, record);
                }
            }
        }
    }

    There are a few important things to notice: a single error at the deepest level causes three log entries (ugly? DOS?); thrown exceptions contain all important and useful information; and logging only happens when failure to do so would cause loss of useful information at a lower level.

    Thoughts and concerns: I don't like having so many log entries for each error. I don't want to lose important, useful data; the exceptions contain all the important information, but the stacktrace is typically the only thing displayed besides the message. I can log at different levels (e.g., warning, informational). The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced). The information available at higher levels should not be passed to the lower levels.

    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
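    For contrast with the pattern above – and purely as an illustrative sketch mirroring the question's own types, not the poster's actual code or a recommendation – the alternative that the concerns hint at ("wrap with context at every level, log only once at the outermost boundary") would look roughly like this at the top level, with the inner levels throwing their wrapped exceptions without calling the logger:

    class Transformer {
        void Process(InputSource input) {
            try {
                var inRecords = _parser.Parse(input.Stream);
                var outRecords = _pipeline.Transform(inRecords);
            } catch (Exception ex) {
                // Single log entry at the boundary; the wrapped exception chain
                // (e.g. EngineProcessingException -> RecordValidationException)
                // still carries the lower-level context for the log sink to render.
                _logger.Error("Processing failed for source " + input.Name, ex);
                throw new ProcessException(input, ex);
            }
        }
    }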

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as its primary business communication tool. In case you missed it, here's a recap of the Q&A.

    Joe Lichtefeld, ResCare

    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed, while trying to limit the impact to the contributing members.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors more timely than in our old process of all requests being updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.

    Wayne Boerger & Doug Thompson, TEAM Informatics

    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors to many repositories, such as SharePoint, databases, file systems, LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has, and really, for each need within a customer environment.

    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about the filtering on the search results, you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.

    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WC Content central repository and then the connector can/will manage it.

    Q: Using the Connector, are there any limitations around where in Sites that synced content can be used?
    A: There are no limitations about where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action! ResCare Solves Content Lifecycle Challenges with Oracle WebCenter from Oracle WebCenter

    Read the article

  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I've yet to have a totally smooth experience with PowerShell, and it was late on Friday when I crashed into this problem. I haven't investigated whether this is a generally well understood circumstance and, if it is, then I apologise for repeating everything.

    Scenario: I wanted to scan a number of servers for many properties, including existing logins, and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up PowerShellISE and started typing. The script doesn't come easily to me, but I follow the logic of SMO and the properties and methods available with the language, so it seemed something I should be able to master.

    Version #1 of my script. And the results it returns when executed against my home laptop server. These results looked good, and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server.

    Let's just review my logic for each step of the code at the top. Lines 1 to 7 just set up our variables and write out the header message. Line 8 is our first loop, to go through each login on the server. Line 10 is an inner loop that will assess each role name that each login has been assigned. Line 11 is a test to see if each role has the name 'sysadmin'. Line 13 writes out the login name with a bright format as it is a sysadmin login. Line 17 writes out the login name with no formatting.

    It is quite possible that here someone with more PowerShell experience than me will be shouting at their screen pointing at the error I made, but to me this made total sense. Until I altered the code – I altered lines 6 and 7 of the code above to be:

    $c = $Svr.Logins.Count
    write-host "There are $c Logins on the server"

    This changed my output to look like this: This started alarm bells ringing – there are clearly not 13 logins listed. So, let's see where things are going wrong; edit the script so it looks like this. I've highlighted the changes to make. Running this code shows me these results. Our $n variable should count up by one for each login returned, and we are clearly missing some logins. I referenced this list back to Management Studio for my server and see the logins as below, where there are clearly 13 logins.

    We see a login called Annette in SSMS but not in the script results, so I opened that up and looked at its properties, and its server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses out shows they too are only members of the public role. Right now I can't work out whether there is a good reason for this and if it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory for this.

    How to get the full list of logins. Clearly I needed to get a full list of the logins, so I set about reviewing my code to see if there was a better way to iterate through the roles for each login. This is the code that I came up with, and I think it is doing everything that I need it to. It gives me the expected results like this: So it seems that the ListMembers() method is the trouble maker in my first versions of the code. I would have expected that ListMembers should return logins that are only members of the public role; certainly Technet makes no reference to them being left out in its Login.ListMembers details. Suffice to say, it's a lesson learned and I will approach using it with caution in future circumstances.
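    The author's scripts were shown as screenshots and do not survive in this excerpt, so as a purely illustrative stand-in (not the original code), here is a minimal SMO sketch of the corrected approach described above: ask each login directly whether it holds the sysadmin role, rather than walking role membership lists, so logins that belong only to public still appear. The server name is an assumption.

    # Illustrative only - not the script from the original post.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

    $Svr = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"   # assumed server name

    Write-Host ("There are {0} Logins on the server" -f $Svr.Logins.Count)

    foreach ($login in $Svr.Logins) {
        if ($login.IsMember("sysadmin")) {
            # Highlight sysadmin logins
            Write-Host "$($login.Name)  (sysadmin)" -ForegroundColor Red
        }
        else {
            Write-Host $login.Name
        }
    }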

    Read the article

  • Test your internet connection - Emtel Fixed Broadband

    Already at the beginning of April, I had a phone conversation with my representative at Emtel Ltd. about some upcoming issues due to the ongoing construction work in my neighbourhood. Unfortunately, they finally raised the house two levels above ours, and of course this has to have a negative impact on the visibility between the WiMAX outdoor unit on the roof and the aimed access point at Medine. So, today I had a technical team here to do a site survey and to come up with potential solutions. Short version: it doesn't look good after all.

    The site survey

    Well, the two technicians did their work properly, even re-arranged the antenna to check the connection with another end point down at La Preneuse. But no improvements. Looks like we are out of luck since the construction next door hasn't finished yet, and at the moment it even looks like they are planning to put at least one more level on top. I really wonder about the sanity of the responsible bodies at the local district council. But that's another story. Anyway, the outdoor unit was once again pointed towards Medine and properly fixed with new cable guides (air from the sea and rust...). Both of them did a good job and fine-tuned the reception signal to a mere 3 over 9, compared to the original 7 over 9 I had before the daily terror started. The site survey has been done, and now it's up to Emtel to come up with (better) solutions. Well, I wouldn't mind having an unlimited, symmetric 3G/UMTS or even LTE connection. Let's see what they can do...

    Testing the connection

    There are several online sites available which offer to check certain aspects of your internet connection. Personally, I'm used to speedtest.net and it works very well. I think it is good and necessary to check your connection from time to time, and only a couple of days ago I posted the following on Emtel's wall at Facebook (21.05.2013 - 14:06 hrs): "Dear Emtel, could you eventually provide an answer on the miserable results of SpeedTest? I chose Rose Hill (Hosted by Emtel Ltd.) as testing endpoint..." Sadly, no response to this. Seems that the marketing department is not willing to deal with customers on Facebook. Okay, over at speedtest.net you can use their Flash-based test suite to check your connection to quite a number of servers of different providers world-wide. It's actually very interesting to see the results for different end points and to compare them to each other.

    The results

    Following are the results for Rose Hill (hosted by Emtel) and Frankfurt, Germany (hosted by Vodafone DE): Speedtest.net result of 30.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Fixed Broadband); Speedtest.net result of 30.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Fixed Broadband). Luckily, the results are quite similar in terms of connection speed, which is good. I'm currently on a WiMAX tariff called 'Classic Browsing 2', or Fixed Broadband as they call it now, which provides a symmetric line of 768 Kbps (or roughly 0.75 Mbps). In terms of downloads or uploads this means that I would be able to transfer files in either direction at approximately 96 KB/s. Frankly speaking, thanks to compression, my choice of browser and operating system, I usually exceed this value and get download rates of up to 120 KB/s - not too bad after all. Only the ping times are a little bit of a concern. Due to the difference in distance, or better said based on the number of hops between the endpoints, they indicate the amount of time that it takes to send a packet from your machine to the remote server and get a response back. A lower value is better, and usually the ping is less than 300 ms between Mauritius and Europe.

    The alternatives in Mauritius

    Not sure whether I should note this down, because for my requirements there are no alternatives to Emtel WiMAX at the moment. It would be great to have your opinion on the situation of internet connectivity in Mauritius. Are there really alternatives? And if so, what are the conditions?

    Read the article

  • Representing Mauritius in the 2013 Bench Games

    Only by chance I came across an interesting option for professionals and enthusiasts in IT, and quite honestly I can't even remember where I caught attention of Brainbench and their 2013 Bench Games event. But having access to 600+ free exams in a friendly international intellectual competition doesn't happen every day. So, it was actually a no-brainer to sign up and browse through the various categories. Most interestingly, Brainbench is not only IT-related. They offer a vast variety of fields in their Test Center, like Languages and Communication, Office Skills, Management, Aptitude, etc., and it can be a little bit messy how things are organised. Anyway, while browsing through their test offers I added a couple of exams to 'My Plan' which I would give a shot afterwards.

    Self-assessments

    Actually, I took the tests based on two major aspects: 'fun factor' and 'how good would I be in general'... Usually, you have to pay for any kind of exams, and given this unique chance by Brainbench, simply training on this kind of test was already worth the time. Frankly speaking, the tests are very close to the ones you would be asked to do at Prometric or Pearson Vue, i.e. Microsoft exams, etc.: go through a set of multiple choice questions in a given time frame. Most of the tests I did during the Bench Games were based on 40 questions, each with a maximum of 3 minutes to answer. Ergo, one test in a maximum of 2 hours - that sounds feasible, doesn't it?

    The Measure of Achievement

    While the 2013 Bench Games are considered a worldwide friendly competition of knowledge, I was really eager to get other Mauritians attracted. Using various social media networks and community activities, it all looked quite well at the beginning. Mauritius was listed on rank #19 of Most Certified Citizens and rank #10 of Most Master Level Certified Nation - not bad, not bad... Until... the next update of the Bench Games Leaderboard. The downwards trend seemed to be unstoppable and I couldn't understand why my results didn't show up on the Individual Leader Board. First of all, I passed exams that were not even listed and second, I had better results on some exams listed. After some further information from the organiser it turned out that my test transcript wasn't available to the public. Only then are results considered and counted in the competition. During that time, I actually managed to hold 3 test results on the Individuals... Other participants were merciless, eh, more successful than me, and produced better test results than I did. But still I managed to stay on the final score board: an 'exotic' combination of exam, test result, country and person itself, representing Mauritius and the Visual FoxPro community in that fun event. And although I have mainly developed in Visual FoxPro 9.0 SP2 and C# using .NET Framework from 2.0 to 4.5 for a couple of years now, I still managed to pass on Master Level. Hm, actually my Microsoft Certified Professional (MCP) exams date back to June 2004 - more than 9 years ago...

    Look who got lucky...

    As described above, I did a couple of exams as time allowed and without any preparation, but still I received the following mail notification: "Thank you for recently participating in our Bench Games event. We wanted to inform you that you obtained a top score on our test(s) during this event, and as a result, will receive a free annual Brainbench subscription. Your annual subscription will give you access to all our tests just like Bench Games, but for an entire year plus additional benefits!" -- Leader Board Notification from Brainbench

    Even fun activities get rewarded sometimes. Thanks to @Brainbench_com for the free annual subscription based on my passed 2013 Bench Games Master Level exam. It would be interesting to know the total figures, especially to see how many citizens of Mauritius took part in this year's Bench Games. Anyway, I'm looking forward to being able to participate in other challenges like this in the future.

    Read the article

  • RadGrid OnNeedDataSource when the returned datasource is empty, I get a "Cannot find any bindable properties in an item from the datasource"

    - by Matt
    RadGrid OnNeedDataSource: when the returned datasource is empty (not null), I get a "Cannot find any bindable properties in an item from the datasource" error.

    This is how I have my RadGrid defined in the ASP markup:

    <telerik:RadGrid runat="server" ID="RadGridSearchResults" AllowFilteringByColumn="false"
        ShowStatusBar="true" AllowPaging="True" AllowSorting="true" VirtualItemCount="10000"
        AllowCustomPaging="True" OnNeedDataSource="RadGridSearchResults_NeedDataSource"
        Skin="Default" GridLines="None" ShowGroupPanel="false" GroupLoadMode="Client">
        <MasterTableView Width="100%">
            <NoRecordsTemplate>
                <asp:Label ID="LabelNoRecords" runat="server" Text="No Results Found for your Query"/>
            </NoRecordsTemplate>
        </MasterTableView>
        <PagerStyle Mode="NextPrevAndNumeric" />
        <FilterMenu EnableTheming="True">
            <CollapseAnimation Duration="200" Type="OutQuint" />
        </FilterMenu>
    </telerik:RadGrid>

    Here is my OnNeedDataSource:

    protected void RadGridSearchResults_NeedDataSource(object source, GridNeedDataSourceEventArgs e)
    {
        RadGridSearchResults.DataSource = GetSearchResults();
    }

    And here is my GetSearchResults():

    private DataTable GetSearchResults()
    {
        DataTable dataTableResults = new DataTable();
        // Get my data results -- when I get no results, I have a DataTable with 0 rows
        return dataTableResults;
    }

    This works great when I have results in my DataSet, and other tables of mine set up similarly work with the NoRecordsTemplate tag when results are empty. Any clue?

    Read the article

  • Netflix, jQuery, JSONP, and OData

    - by Stephen Walther
    At the last MIX conference, Netflix announced that they are exposing their catalog of movie information using the OData protocol. This is great news! This means that you can take advantage of all of the advanced OData querying features against a live database of Netflix movies. In this blog entry, I'll demonstrate how you can use Netflix, jQuery, JSONP, and OData to create a simple movie lookup form. The form enables you to enter a movie title, or part of a movie title, and display a list of matching movies. For example, Figure 1 illustrates the movies displayed when you enter the value robot into the lookup form.

    Using the Netflix OData Catalog API

    You can learn about the Netflix OData Catalog API at the following website: http://developer.netflix.com/docs/oData_Catalog. The nice thing about this website is that it provides plenty of samples. It also has a good general reference for OData. For example, the website includes a list of OData filter operators and functions.

    The Netflix Catalog API exposes 4 top-level resources: Titles – a database of movie information including interesting movie properties such as synopsis, BoxArt, and Cast; People – a database of people information including interesting information such as Awards, TitlesDirected, and TitlesActedIn; Languages – enables you to get title information in different languages; and Genres – enables you to get title information for specific movie genres.

    OData is REST based. This means that you can perform queries by putting together the right URL. For example, if you want to get a list of the movies that were released after 2010 and that had an average rating greater than 4, then you can enter the following URL in the address bar of your browser:

    http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear gt 2010 and AverageRating gt 4

    Entering this URL returns the movies in Figure 2.

    Creating the Movie Lookup Form

    The complete code for the Movie Lookup form is contained in Listing 1.

    Listing 1 – MovieLookup.htm

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Netflix with jQuery</title>
        <style type="text/css">
            #movieTemplateContainer div
            {
                width: 400px;
                padding: 10px;
                margin: 10px;
                border: black solid 1px;
            }
        </style>
        <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>
    </head>
    <body>

        <label>Search Movies:</label>
        <input id="movieName" size="50" />
        <button id="btnLookup">Lookup</button>

        <div id="movieTemplateContainer"></div>

        <script id="movieTemplate" type="text/html">
            <div>
                <img src="<%=BoxArtSmallUrl %>" />
                <strong><%=Name%></strong>
                <p>
                    <%=Synopsis %>
                </p>
            </div>
        </script>

        <script type="text/javascript">

            $("#btnLookup").click(function () {
                // Build OData query
                var movieName = $("#movieName").val();
                var query = "http://odata.netflix.com/Catalog"   // netflix base url
                    + "/Titles"                                   // top-level resource
                    + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name
                    + "&$callback=callback"                       // jsonp request
                    + "&$format=json";                            // json request

                // Make JSONP call to Netflix
                $.ajax({
                    dataType: "jsonp",
                    url: query,
                    jsonpCallback: "callback",
                    success: callback
                });
            });

            function callback(result) {
                // unwrap result
                var movies = result["d"]["results"];

                // show movies in template
                var showMovie = tmpl("movieTemplate");
                var html = "";
                for (var i = 0; i < movies.length; i++) {
                    // flatten movie
                    movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
                    // render with template
                    html += showMovie(movies[i]);
                }
                $("#movieTemplateContainer").html(html);
            }

        </script>

    </body>
    </html>

    The HTML page in Listing 1 includes two JavaScript libraries:

    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>

    The first script tag retrieves jQuery from the Microsoft Ajax CDN. You can learn more about the Microsoft Ajax CDN by visiting the following website: http://www.asp.net/ajaxLibrary/cdn.ashx. The second script tag is used to reference Resig's micro-templating library. Because I want to use a template to display each movie, I need this library: http://ejohn.org/blog/javascript-micro-templating/

    When you enter a value into the Search Movies input field and click the button, the following JavaScript code is executed:

    // Build OData query
    var movieName = $("#movieName").val();
    var query = "http://odata.netflix.com/Catalog"   // netflix base url
        + "/Titles"                                   // top-level resource
        + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name
        + "&$callback=callback"                       // jsonp request
        + "&$format=json";                            // json request

    // Make JSONP call to Netflix
    $.ajax({
        dataType: "jsonp",
        url: query,
        jsonpCallback: "callback",
        success: callback
    });

    This code is used to build a query that will be executed against the Netflix Catalog API. For example, if you enter the search phrase King Kong then the following URL is created:

    http://odata.netflix.com/Catalog/Titles?$filter=substringof('King%20Kong',Name)&$callback=callback&$format=json

    This query includes the following parameters:

    $filter – You assign a filter expression to this parameter to filter the movie results.
    $callback – You assign the name of a JavaScript callback method to this parameter. OData calls this method to return the movie results.
    $format – You assign either the value json or xml to this parameter to specify the format of the movie results.

    Notice that all of the OData parameters -- $filter, $callback, $format -- start with a dollar sign $. The Movie Lookup form uses JSONP to retrieve data across the Internet. Because WCF Data Services supports JSONP, and Netflix uses WCF Data Services to expose movies using the OData protocol, you can use JSONP when interacting with the Netflix Catalog API. To learn more about using JSONP with OData, see Pablo Castro's blog: http://blogs.msdn.com/pablo/archive/2009/02/25/adding-support-for-jsonp-and-url-controlled-format-to-ado-net-data-services.aspx

    The actual JSONP call is performed by calling the $.ajax() method. When this call successfully completes, the JavaScript callback() method is called. The callback() method looks like this:

    function callback(result) {
        // unwrap result
        var movies = result["d"]["results"];

        // show movies in template
        var showMovie = tmpl("movieTemplate");
        var html = "";
        for (var i = 0; i < movies.length; i++) {
            // flatten movie
            movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
            // render with template
            html += showMovie(movies[i]);
        }
        $("#movieTemplateContainer").html(html);
    }

    The movie results from Netflix are passed to the callback method. The callback method takes advantage of Resig's micro-templating library to display each of the movie results. A template used to display each movie is passed to the tmpl() method. The movie template looks like this:

    <script id="movieTemplate" type="text/html">
        <div>
            <img src="<%=BoxArtSmallUrl %>" />
            <strong><%=Name%></strong>
            <p>
                <%=Synopsis %>
            </p>
        </div>
    </script>

    This template looks like a server-side ASP.NET template. However, the template is rendered in the client (browser) instead of the server.

    Summary

    The goal of this blog entry was to demonstrate how well jQuery works with OData. We managed to use a number of interesting open-source libraries and open protocols while building the Movie Lookup form, including jQuery, JSONP, JSON, and OData.
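    As a closing aside, the same URL-building pattern extends to other standard OData query options. The snippet below is a small, hypothetical variation of the query built above: it orders the matches by average rating and caps the result size. The $orderby and $top options are standard OData; the AverageRating field is the same one used in the browser example earlier in the post.

    // Hypothetical variation: the five best-rated matches first
    var query = "http://odata.netflix.com/Catalog"
        + "/Titles"
        + "?$filter=substringof('" + escape(movieName) + "',Name)"
        + "&$orderby=AverageRating desc"   // sort by rating, descending
        + "&$top=5"                        // only the first five results
        + "&$callback=callback"
        + "&$format=json";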

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read it later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • NSFetchedResultsController crashing on performFetch: when using a cache

    - by Oliver
    I make use of NSFetchedResultsController to display a bunch of objects, which are sectioned using dates. On a fresh install, it all works perfectly and the objects are displayed in the table view. However, it seems that when the app is relaunched I get a crash. I specify a cache when initialising the NSFetchedResultsController, and when I don't it works perfectly. Here is how I create my NSFetchedResultsController: - (NSFetchedResultsController *)results { // If we are not nil, stop here if (results != nil) return results; // Create the fetch request, entity and sort descriptors NSFetchRequest *fetch = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event" inManagedObjectContext:self.managedObjectContext]; NSSortDescriptor *descriptor = [[NSSortDescriptor alloc] initWithKey:@"utc_start" ascending:YES]; NSArray *descriptors = [[NSArray alloc] initWithObjects:descriptor, nil]; // Set properties on the fetch [fetch setEntity:entity]; [fetch setSortDescriptors:descriptors]; // Create a fresh fetched results controller NSFetchedResultsController *fetched = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.managedObjectContext sectionNameKeyPath:@"day" cacheName:@"Events"]; fetched.delegate = self; self.results = fetched; // Release objects and return our controller [fetched release]; [fetch release]; [descriptor release]; [descriptors release]; return results; } These are the messages I get when the app crashes: FATAL ERROR: The persistent cache of section information does not match the current configuration. You have illegally mutated the NSFetchedResultsController's fetch request, its predicate, or its sort descriptor without either disabling caching or using +deleteCacheWithName: *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'FATAL ERROR: The persistent cache of section information does not match the current configuration. You have illegally mutated the NSFetchedResultsController's fetch request, its predicate, or its sort descriptor without either disabling caching or using +deleteCacheWithName:' I really have no clue as to why it's saying that, as I don't believe I'm doing anything special that would cause this. The only potential issue is the section header (day), which I construct like this when creating a new object: // Set the new format [formatter setDateFormat:@"dd MMMM"]; // Set the day of the event [event setValue:[formatter stringFromDate:[event valueForKey:@"utc_start"]] forKey:@"day"]; Like I mentioned, all of this works fine if there is no cache involved. Any help appreciated!

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism, however, special care needs to be taken into account to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here: // Buggy code! Do not use! double min = double.MaxValue; Parallel.ForEach(collection, item => { double value = item.PerformComputation(); min = System.Math.Min(min, value); }); This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like: // Safe, but slow double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach(collection, item => { double value = item.PerformComputation(); lock(syncObject) min = System.Math.Min(min, value); }); This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation. 
This leads to two observations I'll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead, and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as: public static ParallelLoopResult ForEach<TSource, TLocal>( IEnumerable<TSource> source, Func<TLocal> localInit, Func<TSource, ParallelLoopState, TLocal, TLocal> body, Action<TLocal> localFinally ) The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate. 
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation. Now that I've explained how this works, let's look at the code: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObject) min = System.Math.Min(min, localState); } ); Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking. This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
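As a footnote to that Interlocked point, here is a minimal sketch (not from the original article) of the same local-state pattern applied to a sum, where the per-thread merge uses Interlocked.Add instead of a lock; the input array and its size are invented purely for illustration.

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class InterlockedSumSketch
{
    static void Main()
    {
        int[] data = Enumerable.Range(1, 1000000).ToArray();
        long total = 0;

        Parallel.For<long>(
            0, data.Length,
            // localInit: every thread starts with its own sum of zero
            () => 0L,
            // body: accumulate into the thread-local sum; no locking here
            (i, loopState, localSum) => localSum + data[i],
            // localFinally: merge each thread's sum into the shared total
            // atomically, once per thread, instead of taking a lock
            localSum => Interlocked.Add(ref total, localSum));

        Console.WriteLine(total); // 500000500000
    }
}

Because each thread only touches total once, in localFinally, the atomic add is the only shared write and contention is limited to a handful of operations per run.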

    Read the article

  • SharePoint: Can't Connect to a Configuration Database When Using Configuration Wizard

    - by Denis
Hello everyone. I wonder if you could help me with the following problem: We have a SharePoint farm consisting of two servers with an NLB (a load balancer), a database server and an index server (4 servers in total). The issues initially appeared when we were trying to change Search settings via Shared services provider and an error appeared with a message I can't even remember now. To fix that problem, we decided to restart Search services on the Index server via Central Administration… and the process got stuck with a status of "stopping" for several days. We found a few solutions on the forums, but nothing helped. Then we tried to exclude the Index server from the farm, but yet again, Search services would not stop. However, after a week we noticed that the services finally stopped. That was a prologue. So, after the Search services had finally stopped, we decided to return the Index server back to the farm via the SharePoint Configuration wizard. When we click on the "Search database instances" button (I don't know the exact name in English, we have a Russian version), the correct database name and user automatically appear in the relevant fields. Then we supply a password for the user and click "next" two times and the configuration begins. Unfortunately, at the second stage we receive the error "Could not connect to a configuration database" with the exception in the log file: "Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation." Now we can't figure out why the configuration wizard can't connect to the configuration database. Any suggestions will be really appreciated.

    Read the article

  • Server Error in '/' Application. - The resource cannot be Found.

    - by Bigced_21
    I am new to ASP.NET MVC 2. I do not understand why I am receiving this error. Is there something missing that i'm not referencing correctly. I'm trying to create a simple jquery autocomplete online search textbox and view the details of the person that i select using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Mvc.Ajax; using DOC_Kools.Models; namespace DOC_Kools.Controllers { public class HomeController : Controller { private KOOLSEntities _dataModel = new KOOLSEntities(); // // GET: /Home/ public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.NET MVC!"; return View(); } // // GET: /Home/ public ActionResult getAjaxResult(string q) { string searchResult = string.Empty; var offenders = (from o in _dataModel.OffenderSet where o.LastName.Contains(q) orderby o.LastName select o).Take(10); foreach (Offender o in offenders) { searchResult += string.Format("{0}|r\n", o.LastName); } return Content(searchResult); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Search(string searchTerm) { if (searchTerm == string.Empty) { return View(); } else { // if the search contains only one result return detials // otherwise a list var offenders = from o in _dataModel.OffenderSet where o.LastName.Contains(searchTerm) orderby o.LastName select o; if (offenders.Count() == 0) { return View("not found"); } if (offenders.Count() > 1) { return View("List", offenders); } else { return RedirectToAction("Details", new { id = offenders.First().SPN }); } } } // // GET: /Home/Details/5 public ActionResult Details(int id) { return View(); } // // GET: /Home/Create public ActionResult Create() { return View(); } // // POST: /Home/Create [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(FormCollection collection) { try { // TODO: Add insert logic here return RedirectToAction("Index"); } catch { return View(); } } // // GET: /Home/Edit/5 public ActionResult Edit(int id) { return View(); } // // POST: /Home/Edit/5 [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection collection) { try { // TODO: Add update logic here return RedirectToAction("Index"); } catch { return View(); } } public ActionResult About() { return View(); } } } using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Routing; namespace DOC_Kools { // Note: For instructions on enabling IIS6 or IIS7 classic mode, // visit http://go.microsoft.com/?LinkId=9394801 public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); routes.MapRoute( "OffenderSearch", "Offenders/Search/{searchTerm}", new { controller = "Home", action = "Index", searchTerm = "" } ); routes.MapRoute( "OffenderAjaxSearch", "Offenders/getAjaxResult/", new { controller = "Home", action = "getAjaxResult" } ); } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); } } } <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<DOC_Kools.Models.Offender>" %> $(document).ready(function() { $("#searchTerm").autocomplete("/Offenders/getAjaxResult/"); }); Home Page <%= Html.Encode(ViewData["Message"]) % <h2>Look for an 
offender</h2> <form action="/Offenders/Search" method="post" id="searchForm"> <input type="text" name="searchTerm" id="searchTerm" value="" size="10" maxlength="30" /> <input type="submit" value="Search" /> </form> <br /> What do I have to do in order for the textbox search to display on the index page? What else do I have to do for the autocomplete to function correctly? I have the autocomplete.js & jquery.js added to the index.aspx view. Any help will be appreciated so that I can get this working. Thanks!
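A hedged observation rather than a verified fix: ASP.NET MVC matches routes in the order they are registered and stops at the first match, so with the catch-all "Default" route registered first, a post to /Offenders/Search resolves to a non-existent OffendersController and produces exactly this "resource cannot be found" error. A sketch of RegisterRoutes with the specific routes moved ahead of the default follows; note it also points the search route at the Search action instead of Index, which is an assumption about what the form intends.

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteSketch
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Specific routes must come before the catch-all "Default" route,
        // otherwise /Offenders/Search is handed to a non-existent
        // OffendersController and the request 404s.
        routes.MapRoute(
            "OffenderSearch",
            "Offenders/Search/{searchTerm}",
            new { controller = "Home", action = "Search", searchTerm = "" });

        routes.MapRoute(
            "OffenderAjaxSearch",
            "Offenders/getAjaxResult/",
            new { controller = "Home", action = "getAjaxResult" });

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });
    }
}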

    Read the article

  • AD Password About to Expire check problem with ASP.Net

    - by Vince
    Hello everyone, I am trying to write some code to check the AD password age during a user login and notify them of the 15 remaining days. I am using the ASP.Net code that I found on the Microsoft MSDN site and I managed to add a function that checks the if the account is set to change password at next login. The login and the change password at next login works great but I am having some problems with the check for the password age. This is the VB.Net code for the DLL file: Imports System Imports System.Text Imports System.Collections Imports System.DirectoryServices Imports System.DirectoryServices.AccountManagement Imports System.Reflection 'Needed by the Password Expiration Class Only -Vince Namespace FormsAuth Public Class LdapAuthentication Dim _path As String Dim _filterAttribute As String 'Code added for the password expiration added by Vince Private _domain As DirectoryEntry Private _passwordAge As TimeSpan = TimeSpan.MinValue Const UF_DONT_EXPIRE_PASSWD As Integer = &H10000 'Function added by Vince Public Sub New() Dim root As New DirectoryEntry("LDAP://rootDSE") root.AuthenticationType = AuthenticationTypes.Secure _domain = New DirectoryEntry("LDAP://" & root.Properties("defaultNamingContext")(0).ToString()) _domain.AuthenticationType = AuthenticationTypes.Secure End Sub 'Function added by Vince Public ReadOnly Property PasswordAge() As TimeSpan Get If _passwordAge = TimeSpan.MinValue Then Dim ldate As Long = LongFromLargeInteger(_domain.Properties("maxPwdAge")(0)) _passwordAge = TimeSpan.FromTicks(ldate) End If Return _passwordAge End Get End Property Public Sub New(ByVal path As String) _path = path End Sub 'Function added by Vince Public Function DoesUserHaveToChangePassword(ByVal userName As String) As Boolean Dim ctx As PrincipalContext = New PrincipalContext(System.DirectoryServices.AccountManagement.ContextType.Domain) Dim up = UserPrincipal.FindByIdentity(ctx, userName) Return (Not up.LastPasswordSet.HasValue) 'returns true if last password set has no value. End Function Public Function IsAuthenticated(ByVal domain As String, ByVal username As String, ByVal pwd As String) As Boolean Dim domainAndUsername As String = domain & "\" & username Dim entry As DirectoryEntry = New DirectoryEntry(_path, domainAndUsername, pwd) Try 'Bind to the native AdsObject to force authentication. Dim obj As Object = entry.NativeObject Dim search As DirectorySearcher = New DirectorySearcher(entry) search.Filter = "(SAMAccountName=" & username & ")" search.PropertiesToLoad.Add("cn") Dim result As SearchResult = search.FindOne() If (result Is Nothing) Then Return False End If 'Update the new path to the user in the directory. _path = result.Path _filterAttribute = CType(result.Properties("cn")(0), String) Catch ex As Exception Throw New Exception("Error authenticating user. 
" & ex.Message) End Try Return True End Function Public Function GetGroups() As String Dim search As DirectorySearcher = New DirectorySearcher(_path) search.Filter = "(cn=" & _filterAttribute & ")" search.PropertiesToLoad.Add("memberOf") Dim groupNames As StringBuilder = New StringBuilder() Try Dim result As SearchResult = search.FindOne() Dim propertyCount As Integer = result.Properties("memberOf").Count Dim dn As String Dim equalsIndex, commaIndex Dim propertyCounter As Integer For propertyCounter = 0 To propertyCount - 1 dn = CType(result.Properties("memberOf")(propertyCounter), String) equalsIndex = dn.IndexOf("=", 1) commaIndex = dn.IndexOf(",", 1) If (equalsIndex = -1) Then Return Nothing End If groupNames.Append(dn.Substring((equalsIndex + 1), (commaIndex - equalsIndex) - 1)) groupNames.Append("|") Next Catch ex As Exception Throw New Exception("Error obtaining group names. " & ex.Message) End Try Return groupNames.ToString() End Function 'Function added by Vince Public Function WhenExpires(ByVal username As String) As TimeSpan Dim ds As New DirectorySearcher(_domain) ds.Filter = [String].Format("(&(objectClass=user)(objectCategory=person)(sAMAccountName={0}))", username) Dim sr As SearchResult = FindOne(ds) Dim user As DirectoryEntry = sr.GetDirectoryEntry() Dim flags As Integer = CInt(user.Properties("userAccountControl").Value) If Convert.ToBoolean(flags And UF_DONT_EXPIRE_PASSWD) Then 'password never expires Return TimeSpan.MaxValue End If 'get when they last set their password Dim pwdLastSet As DateTime = DateTime.FromFileTime(LongFromLargeInteger(user.Properties("pwdLastSet").Value)) ' return pwdLastSet.Add(PasswordAge).Subtract(DateTime.Now); If pwdLastSet.Subtract(PasswordAge).CompareTo(DateTime.Now) > 0 Then Return pwdLastSet.Subtract(PasswordAge).Subtract(DateTime.Now) Else Return TimeSpan.MinValue 'already expired End If End Function 'Function added by Vince Private Function LongFromLargeInteger(ByVal largeInteger As Object) As Long Dim type As System.Type = largeInteger.[GetType]() Dim highPart As Integer = CInt(type.InvokeMember("HighPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Dim lowPart As Integer = CInt(type.InvokeMember("LowPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Return CLng(highPart) << 32 Or CUInt(lowPart) End Function 'Function added by Vince Private Function FindOne(ByVal searcher As DirectorySearcher) As SearchResult Dim sr As SearchResult = Nothing Dim src As SearchResultCollection = searcher.FindAll() If src.Count > 0 Then sr = src(0) End If src.Dispose() Return sr End Function End Class End Namespace And this is the Login.aspx page: sub Login_Click(sender as object,e as EventArgs) Dim adPath As String = "LDAP://DC=xxx,DC=com" 'Path to your LDAP directory server Dim adAuth As LdapAuthentication = New LdapAuthentication(adPath) Try If (True = adAuth.DoesUserHaveToChangePassword(txtUsername.Text)) Then Response.Redirect("passchange.htm") ElseIf (True = adAuth.IsAuthenticated(txtDomain.Text, txtUsername.Text, txtPassword.Text)) Then Dim groups As String = adAuth.GetGroups() 'Create the ticket, and add the groups. Dim isCookiePersistent As Boolean = chkPersist.Checked Dim authTicket As FormsAuthenticationTicket = New FormsAuthenticationTicket(1, _ txtUsername.Text, DateTime.Now, DateTime.Now.AddMinutes(60), isCookiePersistent, groups) 'Encrypt the ticket. Dim encryptedTicket As String = FormsAuthentication.Encrypt(authTicket) 'Create a cookie, and then add the encrypted ticket to the cookie as data. 
Dim authCookie As HttpCookie = New HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket) If (isCookiePersistent = True) Then authCookie.Expires = authTicket.Expiration End If 'Add the cookie to the outgoing cookies collection. Response.Cookies.Add(authCookie) 'Retrieve the password life Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) 'You can redirect now. If (passAge.Days = 90) Then errorLabel.Text = "Your password will expire in " & DateTime.Now.Subtract(t) 'errorLabel.Text = "This is" 'System.Threading.Thread.Sleep(5000) Response.Redirect("http://somepage.aspx") Else Response.Redirect(FormsAuthentication.GetRedirectUrl(txtUsername.Text, False)) End If Else errorLabel.Text = "Authentication did not succeed. Check user name and password." End If Catch ex As Exception errorLabel.Text = "Error authenticating. " & ex.Message End Try End Sub ` Every time I have this Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) enabled, I receive "Arithmetic operation resulted in an overflow." during the login and won't continue. What am I doing wrong? How can I correct this? Please help!! Thank you very much for any help in advance. Vince

    Read the article

  • Resque Runtime Error at /workers: wrong number of arguments for 'exists' command

    - by Superflux
    I'm having a runtime errror when i'm looking at the "workers" tab on resque-web (localhost). Everything else works. Edit: when this error occurs, i also have some (3 or 4) unknown workers 'not working'. I think they are responsible for the error but i don't understand how they got here Can you help me on this ? Did i do something wrong ? Config: Resque 1.8.5 as a gem in a rails 2.3.8 app on Snow Leopard redis 1.0.7 / rack 1.1 / sinatra 1.0 / vegas 0.1.7 file: client.rb location: format_error_reply line: 558 BACKTRACE: * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_error_reply * 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end 556. 557. def format_error_reply(line) 558. raise "-" + line.strip 559. end 560. 561. def format_status_reply(line) 562. line.strip 563. end 564. 565. def format_integer_reply(line) * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_reply * 541. 542. def reconnect 543. disconnect && connect_to_server 544. end 545. 546. def format_reply(reply_type, line) 547. case reply_type 548. when MINUS then format_error_reply(line) 549. when PLUS then format_status_reply(line) 550. when COLON then format_integer_reply(line) 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in read_reply * 478. disconnect 479. 480. raise Errno::EAGAIN, "Timeout reading from the socket" 481. end 482. 483. raise Errno::ECONNRESET, "Connection lost" unless reply_type 484. 485. format_reply(reply_type, @sock.gets) 486. end 487. 488. 489. if "".respond_to?(:bytesize) 490. def get_size(string) 491. string.bytesize 492. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in map * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. 
set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end 447. 448. return pipeline ? results : results[0] 449. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in synchronize * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in maybe_lock * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 432. end 433. # When in Pub/Sub mode we don't read replies synchronously. 434. if @pubsub 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in call_command * 336. # try to reconnect just one time, otherwise let the error araise. 337. def call_command(argv) 338. log(argv.inspect, :debug) 339. 340. connect_to_server unless connected? 341. 342. begin 343. raw_call_command(argv.dup) 344. rescue Errno::ECONNRESET, Errno::EPIPE, Errno::ECONNABORTED 345. if reconnect 346. raw_call_command(argv.dup) 347. else 348. raise Errno::ECONNRESET 349. end 350. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in method_missing * 385. connect_to(@host, @port) 386. call_command([:auth, @password]) if @password 387. call_command([:select, @db]) if @db != 0 388. @sock 389. end 390. 391. def method_missing(*argv) 392. call_command(argv) 393. end 394. 395. def raw_call_command(argvp) 396. if argvp[0].is_a?(Array) 397. argvv = argvp 398. pipeline = true 399. else * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in send * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in method_missing * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/worker.rb in state * 416. def idle? 417. state == :idle 418. end 419. 420. # Returns a symbol representing the current worker state, 421. # which can be either :working or :idle 422. def state 423. 
redis.exists("worker:#{self}") ? :working : :idle 424. end 425. 426. # Is this worker the same as another worker? 427. def ==(other) 428. to_s == other.to_s 429. end 430. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * 46. <tr> 47. <th>&nbsp;</th> 48. <th>Where</th> 49. <th>Queues</th> 50. <th>Processing</th> 51. </tr> 52. <% for worker in (workers = resque.workers.sort_by { |w| w.to_s }) %> 53. <tr class="<%=state = worker.state%>"> 54. <td class='icon'><img src="<%=u state %>.png" alt="<%= state %>" title="<%= state %>"></td> 55. 56. <% host, pid, queues = worker.to_s.split(':') %> 57. <td class='where'><a href="<%=u "workers/#{worker}"%>"><%= host %>:<%= pid %></a></td> 58. <td class='queues'><%= queues.split(',').map { |q| '<a class="queue-tag" href="' + u("/queues/#{q}") + '">' + q + '</a>'}.join('') %></td> 59. 60. <td class='process'> * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in each * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in send * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in evaluate * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in erb * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in show * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in GET /workers * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in each * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in dispatch! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! 
* /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/showexceptions.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in synchronize * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/content_length.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/chunked.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in process * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in each * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in run * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in run! * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in start * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in new * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in nil * /usr/bin/resque-web in load

    Read the article

  • jQuery autocomplete disabled makes autocomplete partially transparent, not disabled

    - by Ryan Giglio
I'm using the jQuery UI's "autocomplete" function on a search on my site. When you change a radio button from "area search" to "name search" I want it to disable the autocomplete, and re-enable it when you switch back. However, when you disable the autocomplete it doesn't hide the dropdown, it just dims it to 20% opacity or so. Here's my javascript: var allFields = new Array(<?php echo $allFields ?>); $(document).ready(function() { if ($("input[name='searchType']:checked").val() == 'areaCode') { $("#siteSearch").autocomplete({ source: allFields, minLength: 2 }); } $("input[name='searchType']").change(function(){ if ($("input[name='searchType']:checked").val() == 'areaCode') { $( "#siteSearch" ).autocomplete( "option", "disabled", false ); alert("enabled"); } else { $( "#siteSearch" ).autocomplete( "option", "disabled", true ); alert("disabled"); } }); }); You can see it happening at http://crewinyourcode.com First you have to choose an area code to search, and then you can see the issue.

    Read the article

  • Invoke an AsyncController Action from within another Controller Action?

    - by Luis
    Hi, I'd like to accomplish the following: class SearchController : AsyncController { public ActionResult Index(string query) { if(!isCached(query)) { // here I want to asynchronously invoke the Search action } else { ViewData["results"] = Cache.Get("results"); } return View(); } public void SearchAsync() { // some work Cache.Add("results", result); } } I'm planning to make an AJAX 'ping' from the client in order to know when the results are available, and then display them. But I don't know how to invoke the asynchronous Action in an asynchronous way! Thank you very much. Luis
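Not an authoritative answer, but one way to get the behaviour described (start the search when the result is not cached, return the view immediately, and let the AJAX ping ask whether the results are ready) is to queue the work on the thread pool and expose a small status action for the ping, rather than using the SearchAsync/SearchCompleted pattern of AsyncController, which keeps the request open instead of firing and forgetting. A rough sketch under those assumptions; RunSearch and the cache key are purely illustrative.

using System.Collections.Generic;
using System.Threading;
using System.Web;
using System.Web.Mvc;

public class SearchController : Controller
{
    public ActionResult Index(string query)
    {
        string cacheKey = "results:" + query;
        if (HttpRuntime.Cache[cacheKey] == null)
        {
            // Kick off the search on a worker thread and return the view
            // right away; the client polls Status until results arrive.
            ThreadPool.QueueUserWorkItem(state =>
            {
                IList<string> results = RunSearch(query); // hypothetical helper
                HttpRuntime.Cache[cacheKey] = results;
            });
        }
        else
        {
            ViewData["results"] = HttpRuntime.Cache[cacheKey];
        }
        return View();
    }

    // The AJAX "ping": reports whether the cached results exist yet.
    public JsonResult Status(string query)
    {
        bool ready = HttpRuntime.Cache["results:" + query] != null;
        return Json(new { ready = ready }, JsonRequestBehavior.AllowGet);
    }

    private static IList<string> RunSearch(string query)
    {
        // Placeholder for the real (slow) search work.
        return new List<string> { "result for " + query };
    }
}

Once Status reports ready, the client can reload the page (or request a partial view) to display the cached results.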

    Read the article

  • Autofilter with multi-columns in Excel VBA

    - by tlpd
    I need to use VBA to filter some information in excel. As I have an excel with 20 columns, now want to use AutoFilter function to search in some columns if it contains one value (Ex: ID010). what i want is it'll display all rows that have at least one column contains ID010. Currently, i use the below code to search. However, it could not find any data because all the criteria seem to tie together using AND operator ' Search range, [argIn]---> search value With [D5:M65536] .AutoFilter Field:=4, Criteria1:=argIn .AutoFilter Field:=5, Criteria1:=argIn .AutoFilter Field:=6, Criteria1:=argIn .AutoFilter Field:=7, Criteria1:=argIn .AutoFilter Field:=8, Criteria1:=argIn .AutoFilter Field:=9, Criteria1:=argIn .AutoFilter Field:=10, Criteria1:=argIn .AutoFilter Field:=11, Criteria1:=argIn .AutoFilter Field:=12, Criteria1:=argIn .AutoFilter Field:=13, Criteria1:=argIn End With I wonder if anyone could give me some hints or examples how to handle this issue. Thank you in advance.

    Read the article

< Previous Page | 242 243 244 245 246 247 248 249 250 251 252 253  | Next Page >