Search Results

Search found 11857 results on 475 pages for 'navigation properties'.


  • Assign highest priority to my local repository

    - by Anwar Shah
    Original question was: "How to assign highest priority to local repository without using sources.list file"

    I have set up a local repository with packages I downloaded. I use it to avoid downloading the same packages over the Internet when I need to reinstall my Ubuntu. It is a basic repository, created with `apt-ftparchive packages . > Packages`. I made this a trusted repository to avoid the "unauthenticated repository" warning. (With an untrusted repository, apt or synaptic will download the same packages over the Internet instead, because the local copies are not trusted.) I have been using this local repository for at least a year.

    But I always have to put my local repository line at the top of the sources.list file to use it. This is annoying, since I must open a terminal and do some typing in it every time I reinstall Ubuntu, even though there is a better tool, software-properties-gtk. I cannot use that tool since it places the source line at the end of sources.list. And the real problem is that apt and synaptic always download a package from the source that is listed first, without checking whether the package is already available in the local repository. So I have no choice but to place the local source at the top of sources.list from the terminal (I don't actually hate the terminal, but I need a solution).

    I have tried this method, but it does not help me. My preference file in /etc/apt/preferences.d/local-pin-900 is this:

    ```
    Package: *
    Pin: release o=Local,n=ubuntu-local
    Pin-Priority: 900
    ```

    My Release file is this:

    ```
    Origin: Local
    Label: Local-Ubuntu
    Description: Local Ubuntu Repository
    Codename: ubuntu-local
    MD5Sum:
     ed43222856d18f389c637ac3d7dd6f85 1043412 Packages
     d41d8cd98f00b204e9800998ecf8427e 0 Sources
    ```

    When I enable the apt preference, apt-cache policy correctly shows the preference, i.e. it shows that the local repository has the highest priority. But when I run sudo apt-get install <package-name>, apt tries to download it from the Internet. When I place my local repo at the top, it installs from the local repository. So my question is: 'Is it possible to force apt to use the local repository when the package is available in the local repository, without explicitly placing "the local source" at the top of my repository list (e.g. the sources.list file)?'

    Edit: the output of apt-cache policy $package_name is as follows:

    ```
    nautilus-wipe:
      Installed: (none)
      Candidate: 0.1.1-2
      Version table:
         0.1.1-2 0
            500 http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages
            900 file:/media/Main/Linux-Software/Ubuntu/Precise/ Packages
    ```

    It shows that my local repository has the higher preference, though it is not the one that comes first in the sources.list file. Here is the output of apt-get install nautilus-wipe:

    ```
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      nautilus-wipe
    0 upgraded, 1 newly installed, 0 to remove and 131 not upgraded.
    Need to get 30.7 kB of archives.
    After this operation, 150 kB of additional disk space will be used.
    'http://archive.ubuntu.com/ubuntu/pool/universe/n/nautilus-wipe/nautilus-wipe_0.1.1-2_i386.deb' nautilus-wipe_0.1.1-2_i386.deb 30730 MD5Sum:7d497b8dfcefe1c0b51a45f3b0466994
    ```

    It is still trying to get the file from the Internet, though I think it should be happy with the local one.

    Read the article

  • Lazy Evaluation – Why being lazy in F# blows my mind!

    - by MarkPearl
    First of all – a shout out to Peter Adams – from the feedback I have gotten from him on the last few F# posts I have done, my mind has just been expanded. I did a blog post a few days ago about infinite sequences – I didn't really understand what was going on with it, and I still don't really get it – but I am getting closer. In Peter's last comment he made mention of lazy evaluation. I am ashamed to say that up till then I had never heard of lazy evaluation – how can evaluation be lazy? I mean, I know about lazy loading and that makes sense… but surely something is either evaluated or not! Well… a bit of reading today and I have been enlightened to a point – if you do know of any good articles explaining lazy evaluation, please send them to me.

    So what is lazy evaluation and why is it useful? Lazy evaluation is a process whereby the system only computes the values needed and "ignores" the computations not needed. I'm going out on a limb here, but with this explanation in hand, imagine the following C# code…

    ```csharp
    public int CalculatedVal()
    {
        int Val1 = 0;
        int Val2 = 0;

        for (int Count = 0; Count < 1000000; Count++)
        {
            Val1++;
        }

        return Val2;
    }
    ```

    Normally, even though Val1 is never needed, the system would loop 1000000 times and add 1 to the current value of Val1. Imagine if the system realized this and so just skipped this segment of code and instead did the following…

    ```csharp
    public int CalculatedVal()
    {
        int Val2 = 0;
        return Val2;
    }
    ```

    A massive saving in computation and wasted effort. Now I am pretty sure it isn't as simple as this, but I think this is the basic idea. For a more detailed explanation of lazy evaluation in C#, Pedram Rezei has a wonderful post on lazy evaluation that makes some C# comparisons. I am not going to take any thunder from him by repeating everything he said, since I think he did such a good job of explaining it himself. What I am interested in, though, is how in F# you tell something to use lazy evaluation, and how you know whether something will be eager or lazy by looking at it. I found this post useful. From reading around, F# by default uses eager evaluation unless explicitly told to use lazy evaluation. One exception to this is sequences, which are lazy by default.

    Now reading about lazy evaluation has helped me understand more about F# coding… From my understanding, because of F#'s declarative nature, most of the code you write declares properties and rules – very little code is actually saying "do this right now" – but when it comes to a "do this" section, it then evaluates and optimizes the code and applies the rules. So props to lazy evaluation and its optimizations…
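    To see the same ideas in F# itself, here is a minimal sketch of my own (not from the original post) showing the explicit lazy keyword and the default laziness of sequences:

    ```fsharp
    // Explicit laziness: the printfn does not run until the value is forced.
    let expensive = lazy (printfn "computing..."; 40 + 2)

    printfn "nothing computed yet"
    printfn "value = %d" (expensive.Force())   // prints "computing..." then 42
    printfn "value = %d" (expensive.Force())   // cached result: 42, no recomputation

    // Sequences are lazy by default: only the elements actually requested are produced.
    let firstFiveSquares =
        Seq.initInfinite id
        |> Seq.map (fun n -> n * n)
        |> Seq.take 5
        |> Seq.toList                          // [0; 1; 4; 9; 16]
    ```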

    Read the article

  • ASP.NET MVC CRUD Validation

    - by Ricardo Peres
    One thing I didn't refer to in my previous post on ASP.NET MVC CRUD with AJAX was how to retrieve model validation information on the client. We want to send any model validation errors to the client in the JSON object that contains the ProductId, RowVersion and Success properties; specifically, if there are any errors, we will add an extra Errors collection property. Here's how:

    ```csharp
    [HttpPost]
    [AjaxOnly]
    [Authorize]
    public JsonResult Edit(Product product)
    {
        if (this.ModelState.IsValid == true)
        {
            using (ProductContext ctx = new ProductContext())
            {
                Boolean success = false;

                ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

                try
                {
                    success = (ctx.SaveChanges() == 1);
                }
                catch (DbUpdateConcurrencyException)
                {
                    ctx.Entry(product).Reload();
                }

                return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
            }
        }
        else
        {
            Dictionary<String, String> errors = new Dictionary<String, String>();

            foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
            {
                String key = keyValue.Key;
                ModelState modelState = keyValue.Value;

                foreach (ModelError error in modelState.Errors)
                {
                    errors[key] = error.ErrorMessage;
                }
            }

            return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty, Errors = errors }));
        }
    }
    ```

    As for the view, we need to change the onSuccess JavaScript handler on the Single view slightly:

    ```javascript
    function onSuccess(ctx)
    {
        if (typeof (ctx.Success) != 'undefined')
        {
            $('input#ProductId').val(ctx.ProductId);
            $('input#RowVersion').val(ctx.RowVersion);

            if (ctx.Success == false)
            {
                var errors = '';

                if (typeof (ctx.Errors) != 'undefined')
                {
                    for (var key in ctx.Errors)
                    {
                        errors += key + ': ' + ctx.Errors[key] + '\n';
                    }

                    window.alert('An error occurred while updating the entity: the model contained the following errors.\n\n' + errors);
                }
                else
                {
                    window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                }
            }
            else
            {
                window.alert('Saved successfully');
            }
        }
        else
        {
            if (window.confirm('Not logged in. Login now?') == true)
            {
                document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
            }
        }
    }
    ```

    The logic is as follows:

    - If the Edit action method is called for a new entity (the ProductId is 0) and it is valid, the entity is saved, and the JSON result contains a Success flag set to true, a ProductId property with the database-generated primary key and a RowVersion with the server-generated ROWVERSION;
    - If the model is not valid, the JSON result will contain the Success flag set to false and the Errors collection populated with all the model validation errors;
    - If the entity already exists in the database (ProductId not 0) and the model is valid, but the stored ROWVERSION is different from the one on the view, the result will set the Success property to false and will return the current (as loaded from the database) value of the ROWVERSION in the RowVersion property.

    On a future post I will talk about the possibilities that exist for performing model validation, stay tuned!

    Read the article

  • Create Custom Windows Key Keyboard Shortcuts in Windows

    - by Asian Angel
    Nearly everyone uses keyboard shortcuts of some sort on their Windows system, but what if you could create new ones for your favorite apps or folders? You might just be amazed at how simple it can be, with just a few clicks and no programming, using WinKey.

    WinKey in Action

    During the installation process you will see a window that gives you a good basic idea of just what can be accomplished with this wonderful little app. As soon as the installation process has finished you will see the "Main App Window". It provides a simple, straightforward listing of all the keyboard shortcuts that it is currently managing. Note: WinKey will automatically add an entry to the "Startup Listing" in your "Start Menu" during installation. To see the regular built-in Windows keyboard shortcuts that it is managing, click "Standard Shortcuts" to select it and then click on "Properties". For those who are curious, WinKey does have a "System Tray Icon" that can be disabled if desired.

    Now onto creating those new keyboard shortcuts… For our example we decided to create a keyboard shortcut for an app rather than a folder. To create a shortcut for an app, click on the small "Paper Icon". Once you have done that, browse to the appropriate folder and select the exe file. The second step is choosing which keyboard shortcut you would like to associate with that particular app. You can use the drop-down list to choose from a listing of available keyboard combinations. For our example we chose "Windows Key + A". The final step is choosing the "Run Mode". There are three options available in the drop-down list… choose the one that best suits your needs. All that is left to do at this point is click "OK" to finish the process. And just like that, your new keyboard shortcut is now listed in the "Main App Window". Time to try out your new keyboard shortcut! One quick use of our new keyboard shortcut and Iron Browser opened right up. WinKey really does make creating new keyboard shortcuts as simple as possible.

    Conclusion

    If you have been wanting to create new keyboard shortcuts for your favorite apps and folders then it really does not get any simpler than with WinKey. This is definitely a recommended app for anyone who loves "get it done" software.

    Links

    Download WinKey at Softpedia

    Read the article

  • SQLAuthority News – Follow up on – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    Last month I had a fantastic time with lots of puzzles and brain teasers; the amount of participation I received on the blog is indeed inspiring to write more. One of the blog posts was about how to replace a column name in all the stored procedures. The article had a very interesting conversation as a follow up. Please read the original article, Replace a Column Name in Multiple Stored Procedure all together, before reading this blog further, as they are connected.

    Let us start with a few of the interesting comments. SQL Server expert Imran Mohammed had a wonderful and excellent first note. I suggest all of you read it. Imran stresses Data Modelling and Logical as well as Physical Design. Developers must create a logical design and get approval on naming conventions, data types, references, constraints, indexes, etc. He further suggested that one should not cut steps but must follow all the industry standards and guidelines. He extended my blog post with the following note:

    "Extending Pinal's answer, what you can do is go to database properties, all tasks, script objects, in the scripting wizard select all the stored procedures for which you want to change the column name, export the query to a new window and then do find and replace, all in one window, and execute the script. But make sure you check what you are replacing; sometimes column names are also used in table names, for example Table Name: Product and Column Name: ProductId, ProductName."

    Thanks Imran, great points!

    Gatej Alexandru suggested that it is not a good idea to DROP or CREATE but rather to use ALTER, as quite possibly there may be permission issues as well. Very good point; let me see if I can write a blog post about it.

    Vinay Kumar and SQLStudent144 proposed another method to achieve the same. I am combining their solutions and writing them here.

    Step 1. Press Ctrl+T or change to "Results to Text" mode.

    Step 2. Execute the command below, where schema.objectname is the object or table you are searching for:

    ```sql
    SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.' + referencing_entity_name + ']'
    FROM sys.dm_sql_referencing_entities('schema.objectname', 'OBJECT')
    ```

    Step 3. Now copy the result and paste it in a new window. Again press Ctrl+T or change to "Results to Text" mode.

    Step 4. Copy the result and paste it in a new window. Execute the query.

    Step 5. Copy the result and paste it in a new window.

    Step 6. Now find your search text in the script, make the necessary changes and execute this script. Do not forget to remove the code generated in the result set which is not relevant to the T-SQL script.

    Digitqr suggests we can do this for other objects besides stored procedures as well. Iosif suggests using the SQL Search tool from Red Gate. I guess this sums it up well. We have an alternative perspective on our original issue of replacing the column name in multiple stored procedures.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
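    As a quick illustration of the idea behind these steps (my own sketch, not part of the original discussion; the column name is a placeholder), you can first locate every stored procedure whose definition mentions the column you are about to rename:

    ```sql
    -- Find stored procedures whose definition mentions the old column name.
    -- 'OldColumnName' is a placeholder - adjust it (and the filter) to your database.
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
           OBJECT_NAME(m.object_id)        AS ProcedureName
    FROM   sys.sql_modules AS m
    JOIN   sys.objects     AS o ON o.object_id = m.object_id
    WHERE  o.type = 'P'
      AND  m.definition LIKE '%OldColumnName%';
    ```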

    Read the article

  • concurrency::accelerator_view

    - by Daniel Moth
    Overview

    We saw previously that accelerator represents a target for our C++ AMP computation or memory allocation and that there is a notion of a default accelerator. We ended that post by introducing how one can obtain accelerator_view objects from an accelerator object through the accelerator class's default_view property and the create_view method.

    The accelerator_view objects can be thought of as handles to an accelerator. You can also construct an accelerator_view given another accelerator_view (through the copy constructor or the assignment operator overload). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator_view objects to determine if they refer to the same underlying accelerator. We'll see later that when we use concurrency::array objects, the allocation of data takes place on an accelerator at array construction time, so there is a constructor overload that accepts an accelerator_view object. We'll also see later that a new concurrency::parallel_for_each function overload can take an accelerator_view object, so it knows on what target to execute the computation (represented by a lambda that the parallel_for_each also accepts). Beyond normal usage, accelerator_view is a quality of service concept that offers isolation to multiple "consumers" of an accelerator. If in your code you are accessing the accelerator from multiple threads (or, in general, from different parts of your app), then you'll want to create separate accelerator_view objects for each thread.

    flush, wait, and queuing_mode

    When you create an accelerator_view via the create_view method of the accelerator, you pass in an option of immediate or deferred, which are the two members of the queuing_mode enum. At any point you can access this value from the queuing_mode property of the accelerator_view. When the queuing_mode value is immediate (which is the default), any commands sent to the device such as kernel invocations and data transfers (e.g. parallel_for_each and copy, as we'll see in future posts) will get submitted as soon as the runtime sees fit (that is the definition of immediate). When the value of queuing_mode is deferred, the commands will be batched up. To send all buffered commands to the device for execution, there is a non-blocking flush method that you can call. If you wish to block until all the commands have been sent, there is a wait method you can call. Deferring is a more advanced scenario aimed at performance gains when you are submitting many device commands and you want to avoid the tiny overhead of flushing/submitting each command separately.

    Querying information

    Just like accelerator, accelerator_view exposes the is_debug and version properties. In fact, you can always access the accelerator object from the accelerator property on the accelerator_view class to access the accelerator interface we looked at previously.

    Interop with D3D (aka DX)

    In a later post I'll show an example of an app that uses C++ AMP to compute data that is used in pixel shaders. In those scenarios, you can benefit by integrating C++ AMP into your graphics pipeline, and one of the building blocks for that is being able to use the same device context from both the compute kernel and the other shaders. You can do that by going from accelerator_view to device context (and vice versa), through part of our interop API in amp.h: get_device and create_accelerator_view. More on those in a later post.
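    A small C++ AMP sketch of these concepts (my own illustration, not from the post; it follows the preview-era API described above, so the exact enum and member spellings may differ in later releases):

    ```cpp
    #include <amp.h>
    #include <iostream>
    using namespace concurrency;

    int main()
    {
        accelerator acc;                               // the default accelerator
        accelerator_view av = acc.default_view;        // a handle to that accelerator

        // Per the post, create_view accepts a queuing_mode of immediate or deferred.
        accelerator_view deferred_av = acc.create_view(queuing_mode::deferred);

        std::wcout << acc.description << std::endl;            // query information
        std::wcout << (av == acc.default_view) << std::endl;   // views are comparable

        // With a deferred view, queued commands (parallel_for_each, copy, ...) are
        // batched; flush() submits them without blocking, wait() blocks until done.
        deferred_av.flush();
        deferred_av.wait();
        return 0;
    }
    ```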

    Read the article

  • Part 15: Fail a build based on the exit code of a console application

    In this series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    When you have a console application or a batch file that has errors, the exit code is set to a value other than 0. You would expect that the build would see this and report an error. This is not true, however. First we set up the scenario. Add a ConsoleApplication project to the solution you are building. In the Main function, set the ExitCode to 1:

    ```csharp
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is an error in the script.");
            Environment.ExitCode = 1;
        }
    }
    ```

    Check in the code. You can choose to include this console application in the build, or you can decide to add the exe to source control. Now modify the Build Process Template CustomTemplate.xaml:

    - Add an argument ErrornousScript
    - Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items"
    - Add a Sequence activity to the template
    - In the Sequence, add a ConvertWorkspaceItem and an InvokeProcess activity (see Part 14: Execute a PowerShell script for more detailed steps)
    - In the FileName property of the InvokeProcess, use the ErrornousScript so the console application will be called

    Modify the build definition and make sure that the ErrornousScript is executing the exe that sets the ExitCode to 1. You have now set up a build definition that will execute the erroneous console application. When you run it, you will see that the build succeeds. This is not what you want! To solve this, you can make use of the Result property on the InvokeProcess activity. So let's change our Build Process Template:

    - Add new variables (scoped to the sequence where you run the console application) called ExitCode (type = Int32) and ErrorMessage
    - Click on the InvokeProcess activity and change the Result property to ExitCode
    - In the Handle Standard Output of the InvokeProcess, add a Sequence activity
    - In the Sequence activity, add an Assign primitive. Set the following properties: To = ErrorMessage, Value = If(Not String.IsNullOrEmpty(ErrorMessage), Environment.NewLine + ErrorMessage, "") + stdOutput
    - And add the default BuildMessage to the sequence that outputs the stdOutput
    - Beneath the InvokeProcess activity, add an If activity with the condition ExitCode <> 0
    - In the Then section, add a Throw activity and set the Exception property to New Exception(ErrorMessage)

    When you now check in the Build Process Template and run the build, the build fails. And that is exactly what we want. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
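    For reference, the check that the modified template performs can be sketched in plain C# (my illustration, not part of the build template; the file path is a placeholder):

    ```csharp
    using System;
    using System.Diagnostics;

    class ExitCodeCheck
    {
        static void Main()
        {
            // Run the console application and capture its output and exit code -
            // roughly what the InvokeProcess activity does inside the build workflow.
            var startInfo = new ProcessStartInfo
            {
                FileName = @"C:\Tools\ErrornousScript.exe",   // placeholder path
                RedirectStandardOutput = true,
                UseShellExecute = false
            };

            using (var process = Process.Start(startInfo))
            {
                string errorMessage = process.StandardOutput.ReadToEnd();
                process.WaitForExit();

                if (process.ExitCode != 0)
                {
                    // Equivalent to the If + Throw activities: a non-zero exit code fails the build.
                    throw new Exception(errorMessage);
                }
            }
        }
    }
    ```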

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long since had to context switch between two IDEs, Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in. This is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints of Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution as follows: Then choose your existing development database and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v.3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control). .and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto loaded) and select Import SQL Source Control project Locate your working folder and click Import. This creates a Red Gate database project under your solution: From here you can modify your development database, and manage your changes in source control. To associate your development database with the project, right click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and Visual Studio project in sync is as easy as clicking on a button. One you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion on SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website. Technorati Tags: SQL Server

    Read the article

  • SQL SERVER – Identity Fields – Contest Win Joes 2 Pros Combo (USD 198) – Day 2 of 5

    - by pinaldave
    August 2011 we ran a contest where every day we give away one book for an entire month. The contest had extreme success. Lots of people participated and lots of give away. I have received lots of questions if we are doing something similar this month. Absolutely, instead of running a contest a month long we are doing something more interesting. We are giving away USD 198 worth gift every day for this week. We are giving away Joes 2 Pros 5 Volumes (BOOK) SQL 2008 Development Certification Training Kit every day. One copy in India and One in USA. Total 2 of the giveaway (worth USD 198). All the gifts are sponsored from the Koenig Training Solution and Joes 2 Pros. The books are available here Amazon | Flipkart | Indiaplaza How to Win: Read the Question Read the Hints Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India residents only) 2 Winners will be randomly selected announced on August 20th. Question of the Day: Which of the following statement is incorrect? a) Identity value can be negative. b) Identity value can have negative interval. c) Identity value can be of datatype VARCHAR d) Identity value can have increment interval larger than 1 Query Hints: BIG HINT POST A simple way to determine if a table contains an identity field is to use the SSMS Object Explorer Design Interface. Navigate to the table, then right-click it and choose Design from the pop-up window. When your design tab opens, select the first field in the table to view its list of properties in the lower pane of the tab (In this case the field is ProductID). Look to see if the Identity Specification property in the lower pane is set to either yes or no. SQL Server will allow you to utilize IDENTITY_INSERT with just one table at a time. After you’ve completed the needed work, it’s very important to reset the IDENTITY_INSERT back to OFF. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 2. SQL Joes 2 Pros Development Series – Output Clause in Simple Examples SQL Joes 2 Pros Development Series – Ranking Functions – Advanced NTILE in Detail SQL Joes 2 Pros Development Series – Ranking Functions – RANK( ), DENSE_RANK( ), and ROW_NUMBER( ) SQL Joes 2 Pros Development Series – Advanced Aggregates with the Over Clause SQL Joes 2 Pros Development Series – Aggregates with the Over Clause SQL Joes 2 Pros Development Series – Overriding Identity Fields – Tricks and Tips of Identity Fields SQL Joes 2 Pros Development Series – Many to Many Relationships Next Step: Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India) Bonus Winner Leave a comment with your favorite article from the “additional hints” section and you may be eligible for surprise gift. There is no country restriction for this Bonus Contest. Do mention why you liked it any particular blog post and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
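    To make the hints above concrete, here is a small T-SQL sketch of my own (not part of the contest post; the table names are invented) showing how the identity seed and increment can be configured, and how IDENTITY_INSERT is switched on and off:

    ```sql
    -- Identity seed and increment are both configurable; they can be negative
    -- and the increment can be larger than 1. (Table names here are examples only.)
    CREATE TABLE dbo.CountDown (Id INT IDENTITY(100, -5) PRIMARY KEY, Label VARCHAR(20));
    CREATE TABLE dbo.BigSteps  (Id INT IDENTITY(1, 10)   PRIMARY KEY, Label VARCHAR(20));

    INSERT INTO dbo.CountDown (Label) VALUES ('first'), ('second');  -- Ids: 100, 95
    INSERT INTO dbo.BigSteps  (Label) VALUES ('first'), ('second');  -- Ids: 1, 11

    -- To insert an explicit value into an identity column, enable IDENTITY_INSERT
    -- for that one table and remember to turn it back off afterwards.
    SET IDENTITY_INSERT dbo.BigSteps ON;
    INSERT INTO dbo.BigSteps (Id, Label) VALUES (500, 'manual');
    SET IDENTITY_INSERT dbo.BigSteps OFF;
    ```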

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    Panelists talk social marketing at the Oracle Social Media Summit On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two day event featured an impressive collection of social media luminaries including: David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group) among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues who have been returning great social media results even before their companies were acquired by Oracle. I was live tweeting the event from @OracleProfit which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said or thoughts I had at the time and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event and a few key topics have emerged: David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point. Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi. If a social strategy is comprised of nothing more than cutting/pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture. Walter Benjamin, perplexed by auraless Twitter messages Not to get too far afield, but 20th century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: Aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he percieved between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork--the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the "sphere of authenticity is outside the technical" so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the "aura" of a work and its absence in a reproduction." So make sure you put aura into your social interactions. Don't just mechanically reproduce them. 
Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction--that editor who reviews the analytics and responds to user feedback--interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day). At the end of the day, experimentation is key. Just like trying to find that right joke to tell at the beginning of your presentation or that good opening like at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it, otherwise your audience can tell. And finally:

    Read the article

  • Does my use of the strategy pattern violate the fundamental MVC pattern in iOS?

    - by Goodsquirrel
    I'm about to use the 'strategy' pattern in my iOS app, but I feel like my approach somehow violates the fundamental MVC pattern. My app displays visual "stories", and a Story consists (i.e. has @properties) of one Photo and one or more VisualEvent objects to represent e.g. animated circles or moving arrows on the photo. Each VisualEvent object therefore has an eventType @property, which might be e.g. kEventTypeCircle or kEventTypeArrow. All events have things in common, like a startTime @property, but differ in the way they are drawn on the StoryPlayerView.

    Currently I'm trying to follow the MVC pattern and have a StoryPlayer object (my controller) that knows about both the model objects (like Story and all kinds of visual events) and the view object StoryPlayerView. To choose the right drawing code for each of the different visual event types, my StoryPlayer is using a switch statement.

    ```objc
    @implementation StoryPlayer

    // (...)

    - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:(StoryPlayerView *)storyPlayerView
    {
        switch (event.eventType) {
            case kEventTypeCircle:
                [self showCircleEvent:event onStoryPlayerView:storyPlayerView];
                break;

            case kEventTypeArrow:
                [self showArrowDrawingEvent:event onStoryPlayerView:storyPlayerView];
                break;

            // (...)
        }
    }
    ```

    But switch statements for type checking are bad design, aren't they? According to Uncle Bob they lead to tight coupling and can and should almost always be replaced by polymorphism. Having read about the "Strategy" pattern in Head First Design Patterns, I felt this was a great way to get rid of my switch statement. So I changed the design like this: all specialized visual event types are now subclasses of an abstract VisualEvent class that has a showOnStoryPlayerView: method.

    ```objc
    @interface VisualEvent : NSObject

    - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView; // abstract

    @end
    ```

    Each and every concrete subclass implements a concrete, specialized version of this drawing behavior method.

    ```objc
    @implementation CircleVisualEvent

    - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView
    {
        [storyPlayerView drawCircleAtPoint:self.position
                                     color:self.color
                                 lineWidth:self.lineWidth
                                    radius:self.radius];
    }

    @end
    ```

    The StoryPlayer now simply calls the same method on all types of events.

    ```objc
    @implementation StoryPlayer

    - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:(StoryPlayerView *)storyPlayerView
    {
        [event showOnStoryPlayerView:storyPlayerView];
    }

    @end
    ```

    The result seems to be great: I got rid of the switch statement, and if I ever have to add new types of VisualEvents in the future, I simply create new subclasses of VisualEvent. And I won't have to change anything in StoryPlayer. But of course this approach violates the MVC pattern, since now my model has to know about and depend on my view! Now my controller talks to my model and my model talks to the view, calling methods on StoryPlayerView like drawCircleAtPoint:color:lineWidth:radius:. But this kind of call should be controller code, not model code, right?? It seems to me like I made things worse. I'm confused! Am I completely missing the point of the strategy pattern? Is there a better way to get rid of the switch statement without breaking model-view separation?

    Read the article

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things that is keeping us busy is the Web Camps we are delivering across 5 cities.  If you are a reader of this blog, and also attended one of these web camps, there is a good chance that you have seen me since I was there in all the places, so far.  The topics that we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5.  Whenever I talk about SQL CE, the immediate response is that, people are wow that Microsoft has shipped a FREE compact edition database, which is an embedded database that can be x-copy deployed.  If you think, well didn’t Microsoft ship SQL Express which is FREE?  The difference is that, SQL Express runs as a service in the machine (if you open SQL Configuration Manager, you can notice that SQL Express is running as a service along with your SQL Server Engine (if you have installed ).  This makes it that, even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and it needs to run as a service.  Many hosters don’t allow such services to run on their space. SQL CE comes as a x-Copy deploy-able database with just a few DLLs required to run it on the machine and they don’t even need to be installed in GAC on the production machine.  In fact, if you have Visual Studio 2010 SP1 installed, you can use the “Add Deployable Dependencies” option in Project-Properties and it would detect that SQL CE is something you would probably want to add as a deploy-able dependency for your project.  With that, it bundles the required DLLs as a part of the “_bin_deployableAssemblies” folder.  So your project can be x-Copy deployed and just works fine. However, SQL CE has the limit of 4GB storage space.  Real world applications often require more than just 4GB of data storage and it often turns out that people would like to use SQL CE for development/ramp up stages but would like to migrate to full fledged SQL Server after a while.  So, its only natural that the question arises “How do I move my SQL CE database to SQL Server”  And honestly, it doesn’t come across as a straight forward support.  I was talking to Ambrish Mishra (PM in SQL CE Team, Hyderabad) since I got this question in almost all the places where we talked about SQL CE.   He was kind enough to demonstrate how this can be accomplished using Web Matrix.  Open Web Matrix (Web Matrix can be installed for free from www.microsoft.com/web) and click on “Site from Template” Click on the “Bakery” template (since by default it uses a SQL CE database and has all the required sample data) and click “Ok”. In the project, you can navigate to the Database tab and will be able to find that the Bakery site uses a SQL CE database “bakery.sdf” Select the “bakery.sdf” and you will be able to see the “Migrate” button on the top right Once you click on the “Migrate” button, you will notice that the popup wizard opens up and by default is configured for SQL Express.  You can edit the same to point to your local SQL Server instance, or a remote server. Upon filling in the Server Name, Username and Password, when you click “Ok”, couple of things happen.  1. The database is migrated to SQL Server (local or remote – subject to permissions on remote server).   You can open up SQL Server Management Studio and connect to the server to verify that the “bakery” database exists under “Databases” node. 2. 
You can also notice that in Web Matrix, when you navigate to the “Files” tab and open up the web.config file, connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too ) And there it is, your SQL Server Compact Edition database, now migrated to SQL Server!! In a future post, I would explain the steps involved when using Visual Studio. Cheers !!!
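    To make the connection-string swap concrete, this is roughly what the web.config change looks like (my illustration, not copied from the article; the connection name, server, and database values are placeholders):

    ```xml
    <!-- Before migration: the site uses the SQL Server Compact .sdf file. -->
    <connectionStrings>
      <add name="bakery"
           connectionString="Data Source=|DataDirectory|\bakery.sdf"
           providerName="System.Data.SqlServerCe.4.0" />
    </connectionStrings>

    <!-- After migration: the same connection name points at the SQL Server instance. -->
    <connectionStrings>
      <add name="bakery"
           connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=bakery;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>
    ```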

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction

    Database administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases.

    It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are implemented. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy that verifies these settings on all databases and, in cases of non-compliance, how to bring them back into compliance.

    Database settings to enforce on each user database:

    - Auto Close and Auto Shrink properties of the database set to False
    - Auto Create Statistics and Auto Update Statistics set to True
    - Compatibility Level of all user databases set to 100
    - Page Verify set to CHECKSUM
    - Recovery Model of all user databases set to Full
    - Restrict Access set to MULTI_USER

    Configure a Policy to Verify Database Settings

    1. Connect to a SQL Server 2008 instance using SQL Server Management Studio.
    2. In Object Explorer, click Management > Policy Management and you will be able to see Policies, Conditions & Facets as child nodes.
    3. Right click Policies and then select New Policy… from the drop-down list to open the Create New Policy popup window.
    4. In the Create New Policy popup window, provide the name of the policy as "Implementing and Verify Database Settings for Production Databases" and then click the drop-down list under Check Condition. Click the New Condition… option to open the Create New Condition window.
    5. In the Create New Condition popup window, provide the name of the condition as "Verify and Change Database Settings". In the Facet drop-down list, choose the Facet Database Options. Under Expression, select the Field value @AutoClose, choose the Operator '=', and choose the Value False. Now that you have successfully added the first field, you can go ahead and add the rest of the fields in the same way. Once you have added all of the Database Options fields listed above, click OK to save the changes and return to the parent Create New Policy – Implementing and Verify Database Settings for Production Databases window, where you will see that the newly created condition "Verify and Change Database Settings" is selected by default.

    Continues…
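    For reference, the same settings can be applied to an individual database with plain T-SQL (a sketch of my own, outside the article's policy-based approach; the database name is a placeholder):

    ```sql
    -- Apply the recommended settings to one database directly (database name is an example).
    ALTER DATABASE [SalesDB] SET AUTO_CLOSE OFF;
    ALTER DATABASE [SalesDB] SET AUTO_SHRINK OFF;
    ALTER DATABASE [SalesDB] SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE [SalesDB] SET AUTO_UPDATE_STATISTICS ON;
    ALTER DATABASE [SalesDB] SET COMPATIBILITY_LEVEL = 100;
    ALTER DATABASE [SalesDB] SET PAGE_VERIFY CHECKSUM;
    ALTER DATABASE [SalesDB] SET RECOVERY FULL;
    ALTER DATABASE [SalesDB] SET MULTI_USER;
    ```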

    Read the article

  • Building a project in VS that depends on a static and dynamic library

    - by fg nu
    Noob noobin'. I would appreciate some very careful handholding in setting up an example in Visual Studio 2010 Professional where I am trying to build a project which links:

    - a previously built static library, for which the VS project folder is "C:\libjohnpaul\"
    - a previously built dynamic library, for which the VS project folder is "C:\libgeorgeringo\"

    These are listed as Recipes 1.11, 1.12 and 1.13 in the C++ Cookbook. The project fails to build for me with unresolved externals (see details below), and I can't figure out why.

    Project 1: Static Library

    The following are the header and source files that were compiled in this project. I was able to compile this project fine in VS2010, to the static library "libjohnpaul.lib", which lives in the folder "C:/libjohnpaul/Release/".

    ```cpp
    // libjohnpaul/john.hpp
    #ifndef JOHN_HPP_INCLUDED
    #define JOHN_HPP_INCLUDED

    void john( ); // Prints "John, "

    #endif // JOHN_HPP_INCLUDED

    // libjohnpaul/john.cpp
    #include <iostream>
    #include "john.hpp"

    void john( )
    {
        std::cout << "John, ";
    }

    // libjohnpaul/paul.hpp
    #ifndef PAUL_HPP_INCLUDED
    #define PAUL_HPP_INCLUDED

    void paul( ); // Prints " Paul, "

    #endif // PAUL_HPP_INCLUDED

    // libjohnpaul/paul.cpp
    #include <iostream>
    #include "paul.hpp"

    void paul( )
    {
        std::cout << "Paul, ";
    }

    // libjohnpaul/johnpaul.hpp
    #ifndef JOHNPAUL_HPP_INCLUDED
    #define JOHNPAUL_HPP_INCLUDED

    void johnpaul( ); // Prints "John, Paul, "

    #endif // JOHNPAUL_HPP_INCLUDED

    // libjohnpaul/johnpaul.cpp
    #include "john.hpp"
    #include "paul.hpp"
    #include "johnpaul.hpp"

    void johnpaul( )
    {
        john( );
        paul( );
    }
    ```

    Project 2: Dynamic Library

    Here are the header and source files for the second project, which also compiled fine with VS2010; the "libgeorgeringo.dll" file lives in the directory "C:\libgeorgeringo\Debug".

    ```cpp
    // libgeorgeringo/george.hpp
    #ifndef GEORGE_HPP_INCLUDED
    #define GEORGE_HPP_INCLUDED

    void george( ); // Prints "George, "

    #endif // GEORGE_HPP_INCLUDED

    // libgeorgeringo/george.cpp
    #include <iostream>
    #include "george.hpp"

    void george( )
    {
        std::cout << "George, ";
    }

    // libgeorgeringo/ringo.hpp
    #ifndef RINGO_HPP_INCLUDED
    #define RINGO_HPP_INCLUDED

    void ringo( ); // Prints "and Ringo\n"

    #endif // RINGO_HPP_INCLUDED

    // libgeorgeringo/ringo.cpp
    #include <iostream>
    #include "ringo.hpp"

    void ringo( )
    {
        std::cout << "and Ringo\n";
    }

    // libgeorgeringo/georgeringo.hpp
    #ifndef GEORGERINGO_HPP_INCLUDED
    #define GEORGERINGO_HPP_INCLUDED

    // define GEORGERINGO_DLL when building libgeorgeringo.dll
    #if defined(_WIN32) && !defined(__GNUC__)
    #  ifdef GEORGERINGO_DLL
    #    define GEORGERINGO_DECL __declspec(dllexport)
    #  else
    #    define GEORGERINGO_DECL __declspec(dllimport)
    #  endif
    #endif // WIN32

    #ifndef GEORGERINGO_DECL
    #  define GEORGERINGO_DECL
    #endif

    // Prints "George, and Ringo\n"
    #ifdef __MWERKS__
    #  pragma export on
    #endif

    GEORGERINGO_DECL void georgeringo( );

    #ifdef __MWERKS__
    #  pragma export off
    #endif

    #endif // GEORGERINGO_HPP_INCLUDED

    // libgeorgeringo/georgeringo.cpp
    #include "george.hpp"
    #include "ringo.hpp"
    #include "georgeringo.hpp"

    void georgeringo( )
    {
        george( );
        ringo( );
    }
    ```

    Project 3: Executable that depends on the previous libraries

    Lastly, I try to link the previously compiled static and dynamic libraries into one project called "helloBeatlesII", which has the project directory "C:\helloBeatlesII" (note that this directory does not nest the other project directories).

    The linking process that I did is described below: to the "helloBeatlesII" solution, I added the projects "libjohnpaul" and "libgeorgeringo"; then I changed the properties of the "helloBeatlesII" project to additionally point to the include directories of the other two projects on which it depends ("C:\libgeorgeringo\libgeorgeringo" & "C:\libjohnpaul\libjohnpaul"); added "libgeorgeringo" and "libjohnpaul" to the project dependencies of the "helloBeatlesII" project and made sure that the "helloBeatlesII" project was built last.

    Trying to build this project gives me the following unsuccessful build:

    ```
    1>------ Build started: Project: helloBeatlesII, Configuration: Debug Win32 ------
    1>Build started 10/13/2012 5:48:32 PM.
    1>InitializeBuildStatus:
    1>  Touching "Debug\helloBeatlesII.unsuccessfulbuild".
    1>ClCompile:
    1>  helloBeatles.cpp
    1>ManifestResourceCompile:
    1>  All outputs are up-to-date.
    1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl georgeringo(void)" (?georgeringo@@YAXXZ) referenced in function _main
    1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl johnpaul(void)" (?johnpaul@@YAXXZ) referenced in function _main
    1>E:\programming\cpp\vs-projects\cpp-cookbook\helloBeatlesII\Debug\helloBeatlesII.exe : fatal error LNK1120: 2 unresolved externals
    1>
    1>Build FAILED.
    1>
    1>Time Elapsed 00:00:01.34
    ========== Build: 0 succeeded, 1 failed, 2 up-to-date, 0 skipped ==========
    ```

    At this point I decided to call in the cavalry. I am new to VS2010, so in all likelihood I am missing something straightforward.
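    One way to make the library inputs explicit, shown here only as an illustration (it is not from the original question, and whether it resolves these particular LNK2019 errors depends on the project settings), is to name the .lib files directly in the executable's source, assuming the linker's library search path includes the two output folders:

    ```cpp
    // helloBeatles.cpp - assumed layout; library names must match the actual build outputs.
    #pragma comment(lib, "libjohnpaul.lib")     // static library from project 1
    #pragma comment(lib, "libgeorgeringo.lib")  // import library produced alongside the DLL

    #include "johnpaul.hpp"
    #include "georgeringo.hpp"

    int main()
    {
        johnpaul();
        georgeringo();
        return 0;
    }
    ```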

    Read the article

  • jtreg update, March 2012

    - by jjg
    There is a new update for jtreg 4.1, b04, available. The primary changes have been to support faster and more reliable test runs, especially for tests in the jdk/ repository. [ For users inside Oracle, there is preliminary direct support for gathering code coverage data using jcov while running tests, and for generating a coverage report when all the tests have been run. ] jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg/.

    Scratch directories

    On platforms like Windows, if a test leaves a file open when the test is over, that can cause a problem for downstream tests, because the scratch directory cannot be emptied beforehand. This is addressed in agentvm mode by discarding any agents using that scratch directory and starting new agents using a new, empty scratch directory. Successive directories use the suffixes _1, _2, etc. If you see such directories appearing in the work directory, that is an indication that files were left open in the preceding directory in the series.

    Locking support

    Some tests use shared system resources such as fixed port numbers. This causes a problem when running tests concurrently. So, you can now mark a directory such that all the tests within all such directories will be run sequentially, even if you use -concurrency:N on the command line to run the rest of the tests in parallel. This is seen as a short term solution: it is recommended that tests not use shared system resources whenever possible. If you are running multiple instances of jtreg on the same machine at the same time, you can use a new option -lock:file to specify a file to be used for file locking; otherwise, the locking will just be within the JVM used to run jtreg.

    "autovm mode"

    By default, if no options to the contrary are given on the command line, tests will be run in othervm mode. Now, a test suite can be marked so that the default execution mode is "agentvm" mode. In conjunction with this, you can now mark a directory such that all the tests within that directory will be run in "othervm" mode. Conceptually, this is equivalent to putting /othervm on every appropriate action on every test in that directory and any subdirectories. This is seen as a short term solution: it is recommended that tests be adapted to use agentvm mode, or use "@run main/othervm" explicitly.

    Info in test result files

    The user name and jtreg version info are now stored in the properties near the beginning of the .jtr file.

    Build

    The makefiles used to build and test jtreg have been reorganized and simplified. jtreg is now using JT Harness version 4.4.

    Other

    jtreg provides access to GNOME_DESKTOP_SESSION_ID when set. jtreg ensures that shell tests are given an absolute path for the JDK under test. jtreg now honors the "first sentence rule" for the description given by @summary. jtreg saves the default locale before executing a test in samevm or agentvm mode, and restores it afterwards.

    Bug fixes

    jtreg tried to execute a test even if the compilation failed in agentvm mode because of a JVM crash. jtreg did not correctly handle the -compilejdk option.

    Acknowledgements

    Thanks to Alan, Amy, Andrey, Brad, Christine, Dima, Max, Mike, Sherman, Steve and others for their help, suggestions, bug reports and for testing this latest version.
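    As a concrete illustration of the concurrency and locking options above (my example, not from the announcement; paths are placeholders):

    ```sh
    # Run a test directory in agentvm mode with four concurrent tests, using a shared
    # lock file so that other jtreg instances on this machine coordinate access to
    # tests that need exclusive system resources.
    jtreg -agentvm -concurrency:4 -lock:/tmp/jtreg.lock jdk/test/java/util
    ```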

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound with ASP.NET web site map file? Ever wanted to have cool css design stuffs implemented on your ASP.NET data bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers with respect to above questions are ‘yes’, then you will probably happy to try out latest release (v5.0) of Employee Starter Kit, which is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the concept ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. This project template is titled as “Employee Info Starter Kit”, which was initially hosted on Microsoft Code Gallery and been downloaded 1, 50,000+ of copies afterword.  The latest version of this starter kit is hosted in Codeplex. Release Highlights User End Functional Specification The user end functionalities of this starter kit are pretty simple and straight forward that are focused in to perform CRUD operation on employee records as described below. Creating a new employee record Read existing employee records Update an existing employee record Delete existing employee records Architectural Overview Simple 3 layer architecture (presentation, business logic and data access layer) ASP.NET web form based user interface Built-in code generators for logical layers, implemented in Visual Studio default template engine (T4) Built-in Entity Framework entities as business entities (aka: data containers) Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework Domain Model design pattern based Business Logic Layer, implemented in C# Object Model for Cross Cutting Concerns (such as validation, logging, exception management) Minimum System Requirements Visual Studio 2010 (Web Developer Express Edition) or higher Sql Server 2005 (Express Edition) or higher Technology Utilized Programming Languages/Scripts Browser side: JavaScript Web server side: C# Code Generation Template: T-4 Template Frameworks .NET Framework 4.0 JavaScript Framework: jQuery 1.5.1 CSS Framework: 960 grid system .NET Framework Components .NET Entity Framework .NET Optional/Named Parameters (new in .net 4.0) .NET Tuple (new in .net 4.0) .NET Extension Method .NET Lambda Expressions .NET Anonymous Type .NET Query Expressions .NET Automatically Implemented Properties .NET LINQ .NET Partial Classes and Methods .NET Generic Type .NET Nullable Type ASP.NET Meta Description and Keyword Support (new in .net 4.0) ASP.NET Routing (new in .net 4.0) ASP.NET Grid View (CSS support for sorting - (new in .net 4.0)) ASP.NET Repeater ASP.NET Form View ASP.NET Login View ASP.NET Site Map Path ASP.NET Skin ASP.NET Theme ASP.NET Master Page ASP.NET Object Data Source ASP.NET Role Based Security Getting Started Guide To see Employee Info Starter Kit in action is pretty easy! Download the latest version. Extract the file. From the extracted folder click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010 Hit Ctrl+F5! 
The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!
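To give a rough idea of the Data Mapper based data access layer described above, a minimal hand-written sketch on top of Entity Framework 4.0 might look like the following; type names such as EmployeeContext and EmployeeMapper are illustrative only, since the starter kit itself generates this layer with its T4 templates:

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative EF 4.0 data mapper; the real starter kit generates this layer via T4.
public class EmployeeMapper : IDisposable
{
    private readonly EmployeeContext context = new EmployeeContext(); // assumed EF ObjectContext

    public IList<Employee> GetAll()
    {
        return context.Employees.ToList();
    }

    public Employee GetById(int employeeId)
    {
        return context.Employees.SingleOrDefault(e => e.EmployeeId == employeeId);
    }

    public void Add(Employee employee)
    {
        context.Employees.AddObject(employee);   // ObjectSet<T> API in EF 4.0
        context.SaveChanges();
    }

    public void Delete(Employee employee)
    {
        context.Employees.DeleteObject(employee);
        context.SaveChanges();
    }

    public void Dispose()
    {
        context.Dispose();
    }
}

The business logic layer then works against a mapper like this (or an interface extracted from it), which keeps the Entity Framework plumbing out of the page code-behind and makes the layer easy to stub in tests.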

    Read the article

  • MvcExtensions - ActionFilter

    - by kazimanzurrashid
One of the things that people often complain about is dependency injection in action filters. Since the standard way of applying action filters is to decorate either the controller or the action methods, there is no way you can inject dependencies into the action filter constructors. There are quite a few posts on this subject which show property injection with a custom action invoker, but all of them suffer from the same small bug (you will find that BuildUp is called more than once if the filter implements multiple interfaces, e.g. both IActionFilter and IResultFilter). MvcExtensions supports both property injection and a fluent filter configuration API. There are a number of benefits of this fluent filter configuration API over regular attribute-based filter decoration. You can pass your dependencies in the constructor rather than through properties. Let's say you want to create an action filter which will update the user's last activity date; you can create a filter like the following:

public class UpdateUserLastActivityAttribute : FilterAttribute, IResultFilter
{
    public UpdateUserLastActivityAttribute(IUserService userService)
    {
        Check.Argument.IsNotNull(userService, "userService");
        UserService = userService;
    }

    public IUserService UserService { get; private set; }

    public void OnResultExecuting(ResultExecutingContext filterContext)
    {
        // Do nothing, just sleep.
    }

    public void OnResultExecuted(ResultExecutedContext filterContext)
    {
        Check.Argument.IsNotNull(filterContext, "filterContext");

        string userName = filterContext.HttpContext.User.Identity.IsAuthenticated ? filterContext.HttpContext.User.Identity.Name : null;

        if (!string.IsNullOrEmpty(userName))
        {
            UserService.UpdateLastActivity(userName);
        }
    }
}

As you can see, it is nothing different from a regular filter except that we are passing the dependency in the constructor. Next, we have to configure for which controller/action methods this filter will execute:

public class ConfigureFilters : ConfigureFiltersBase
{
    protected override void Configure(IFilterRegistry registry)
    {
        registry.Register<HomeController, UpdateUserLastActivityAttribute>();
    }
}

You can register more than one filter for the same controller/action methods:

registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>();

You can register the filters for a specific action method instead of the whole controller:

registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(c => c.Index());

You can even set various properties of the filter:

registry.Register<ControlPanelController, CustomAuthorizeAttribute>(attribute => { attribute.AllowedRole = Role.Administrator; });

The fluent filter registration also reduces the number of base controllers in your application. It is very common that we create a base controller, decorate it with action filters and then create concrete controller(s) so that the base controller's action filters are also executed in the concrete controllers.
You can do the same with a single-line statement with the fluent filter registration. Registering the filters for all controllers:

registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type)));

Registering filters for selected controllers:

registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type) && (type.Name.StartsWith("Home") || type.Name.StartsWith("Post"))));

You can also use the built-in filters in the fluent registration, for example:

registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; });

With the fluent filter configuration you can even apply filters to controllers whose source code is not available to you (maybe the controller is part of a third-party component). That's it for today; in the next post we will discuss the model binding support in MvcExtensions. So stay tuned.
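As a footnote on the BuildUp issue mentioned at the beginning of this entry, a custom action invoker can avoid the double build-up by de-duplicating the filter instances before injecting them. The following is only a rough sketch (it is not the MvcExtensions implementation, and Unity is used purely as an example container):

using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Web.Mvc;
using Microsoft.Practices.Unity;

public class BuildUpActionInvoker : ControllerActionInvoker
{
    private readonly IUnityContainer container;

    public BuildUpActionInvoker(IUnityContainer container)
    {
        this.container = container;
    }

    protected override FilterInfo GetFilters(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
    {
        FilterInfo filters = base.GetFilters(controllerContext, actionDescriptor);

        // A filter that implements several filter interfaces shows up in more than one
        // of these lists, so de-duplicate by reference before calling BuildUp.
        IEnumerable<object> uniqueFilters = filters.ActionFilters.Cast<object>()
            .Concat(filters.AuthorizationFilters.Cast<object>())
            .Concat(filters.ExceptionFilters.Cast<object>())
            .Concat(filters.ResultFilters.Cast<object>())
            .Distinct(new ReferenceComparer());

        foreach (object filter in uniqueFilters)
        {
            container.BuildUp(filter.GetType(), filter); // property injection happens exactly once per instance
        }

        return filters;
    }

    private sealed class ReferenceComparer : IEqualityComparer<object>
    {
        bool IEqualityComparer<object>.Equals(object x, object y)
        {
            return ReferenceEquals(x, y);
        }

        int IEqualityComparer<object>.GetHashCode(object obj)
        {
            return RuntimeHelpers.GetHashCode(obj);
        }
    }
}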

    Read the article

  • Weblogic 10.3.4 (PS3) nodemanager won't start?

    - by angelo.santagata
Hi all, well I'm back from Australia, and one of the things that happened while I was away was that Oracle announced the PS3 release of its SOA & WebCenter products. Now I normally use pre-installed images, but I always like to install the products at least once; that way I get to see the installation caveats. Here's one. Installation on Windows 7 64-bit, 64-bit JVM, generic WebLogic Server installer. All worked fine, EXCEPT I can't start the node manager; I get the following error:

<08-Feb-2011 17:16:48> <INFO> <Loading domains file: D:\products\wls1034\WLSERV~1.3\common\NODEMA~1\nodemanager.domains> <08-Feb-2011 17:16:48> <SEVERE> <Fatal error in node manager server> weblogic.nodemanager.common.ConfigException: Native version is enabled but nodemanager native library could not be loaded     at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:249)     at weblogic.nodemanager.server.NMServerConfig.<init>(NMServerConfig.java:190)     at weblogic.nodemanager.server.NMServer.init(NMServer.java:182)     at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:148)     at weblogic.nodemanager.server.NMServer.main(NMServer.java:390)     at weblogic.NodeManager.main(NodeManager.java:31) Caused by: java.lang.UnsatisfiedLinkError: D:\products\wls1034\wlserver_10.3\server\native\win\32\nodemanager.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform     at java.lang.ClassLoader$NativeLibrary.load(Native Method)     at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)     at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)     at java.lang.Runtime.loadLibrary0(Runtime.java:823)     at java.lang.System.loadLibrary(System.java:1028)     at weblogic.nodemanager.util.WindowsProcessControl.<init>(WindowsProcessControl.java:17)     at weblogic.nodemanager.util.ProcessControlFactory.getProcessControl(ProcessControlFactory.java:24)     at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:247)     ... 5 more

OK, it appears that the node manager has gotten confused and thinks this is a 32-bit install of WebLogic Server whereas it is the 64-bit install. It might have been something I did, or didn't do, during installation (e.g. -d64 on the JVM command line); however, the workaround is pretty easy. 1. Create a file called nodemanager.properties in %WL_HOME%\common\nodemanager (on my machine it was D:\products\wls1034\wlserver_10.3\common\nodemanager). 2. Add the following line to it: NativeVersionEnabled=false 3. And start it up! This will force it not to use .DLL files and to use emulation/non-native methods instead.

    Read the article

  • JustMock is here !!

    - by mehfuzh
As announced earlier by Hristo Kosev on the Telerik blogs, we have started giving out JustMock builds today. This is the first of the early builds before the official Q2 release, and we are pretty excited to get your feedback. It's pretty early to say anything about it; it actually depends on your feedback. To add a few words, with JustMock we tried to build a mocking tool with as simple and intuitive a syntax as possible, excluding more and more noise and avoiding any smell it could add to your code [we are still trying every day], and we want to make the tool even better with your help. JustMock can be used to mock virtually anything. Moreover, we left an option open so that its features can be reduced / elevated with just a single click. We tried to make a strong API and keep things as fluent and guided as possible so that you never have the chance to get derailed. Our syntax is AAA (Arrange - Act - Assert); we don't believe in the Record - Replay model, which some of the smarter mocking tools are planning to remove from their coming releases or don't even have [it's always fun to learn from each other]. Overall, more signals equals more complexity; reminds me of 37signals :-). Currently, here are the things you can do with JustMock (I will cover them more in depth in the coming days):

Proxied mode: mock interfaces and classes with virtuals; mock properties, including indexers; set raise event for specific calls; use matchers to control mock arguments; assert specific occurrences of mocked calls; assert using matchers; do recursive mocks; do sequential mocking (the same method with an argument returns different values or performs different tasks); do strict mocking (by default; I prefer loose, so that I can use the mocks as stubs).

Elevated mode: mock static calls; mock final classes; mock sealed classes; mock extension methods; partially mock a class member directly using Mock.Arrange; mock mscorlib (we will support more and more members in the coming days; currently we support FileInfo, File and DateTime).

These are a few; you need to take a look at the test project that is provided with the build to find more [along with the document]. Also, one of the features that I will be using for my next OSS projects is the ability to run it separately in proxied mode, which makes it easy to redistribute and do some personal development in a more DI-oriented model, with the option to elevate as I go.

I've surely forgotten tons of other features to mention that I will cover in time, but don't forget the URL: www.telerik.com/justmock

Finally, a little mock code:

var lvMock = Mock.Create<ILoveJustMock>();

// set your goal
Mock.Arrange(() => lvMock.Response(Arg.Any<string>())).Returns((int result) => result);

// perform
string ret = lvMock.Echo("Yes");

Assert.Equal(ret, "Yes");

// make sure everything is fine
Mock.Assert(() => lvMock.Echo("Yes"), Occurs.Once());

Hope that helps you get started; if not, I will cover it :-).
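For example, a tiny elevated-mode snippet that arranges the static DateTime.Now (one of the mscorlib members listed above) could look roughly like this; treat it as a sketch of the idea rather than exact shipped syntax:

using System;
using Telerik.JustMock;
using Xunit;

public class ElevatedModeSketch
{
    [Fact]
    public void Should_return_arranged_value_for_datetime_now()
    {
        DateTime fixedNow = new DateTime(2010, 4, 12);

        // elevated mode: arrange a static mscorlib member
        Mock.Arrange(() => DateTime.Now).Returns(fixedNow);

        // the production code under test would read DateTime.Now here
        Assert.Equal(fixedNow, DateTime.Now);

        // verify the arranged member was actually hit
        Mock.Assert(() => DateTime.Now, Occurs.Once());
    }
}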

    Read the article

  • How to Create and Manage Contact Groups in Outlook 2010

    - by Mysticgeek
If you find you’re sending emails to the same people all the time during the day, it’s tedious entering their addresses individually. Today we take a look at creating Contact Groups to make the process a lot easier.

Create Contact Groups: Open Outlook and click on New Items \ More Items \ Contact Group. This opens the Contact Group window. Give your group a name, click on Add Members, and select the people you want to add from your Outlook Contacts or Address Book, or create new ones. If you select from your address book you can scroll through and add the contacts you want. If you have a large number of contacts you might want to search for them or use Advanced Find. If you want to add a new email contact to your group, you’ll just need to enter their display name and email address, then click OK. If you want the new member added to your Contacts list, make sure Add to Contacts is checked. After you have the contacts you want in the group, click Save & Close. Now when you compose a message you should be able to type in the name of the Contact Group you created. If you want to make sure you have everyone included in the group, click on the plus icon to expand the contacts. You will get a dialog box telling you the members of the group will be shown and you cannot collapse it again. Check the box so you don’t see the message again, then click OK. Then the members of the group will appear in the To field. Of course you can enter a Contact Group into the CC or Bcc fields as well.

Add or Remove Members in a Contact Group: After expanding the group you might notice some contacts aren’t included, or there is an old contact you don’t want in the group anymore. Click on the To button, then right-click on the Contact Group and select Properties. Now you can go ahead and add members, or highlight a member and remove them. When finished, click Save & Close. If you need to send emails to several of the same people, creating Contact Groups is a great way to save time by not entering them individually. If you work for a large company, creating Contact Groups by department is a must!
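If you’d rather script this than click through the dialogs, a rough C# sketch against the Outlook interop API might look like the following; the exact details can vary by Outlook version, and the names and addresses are just placeholders:

using Outlook = Microsoft.Office.Interop.Outlook;

class ContactGroupDemo
{
    static void Main()
    {
        var app = new Outlook.Application();

        // Outlook stores Contact Groups as distribution list items.
        var group = (Outlook.DistListItem)app.CreateItem(Outlook.OlItemType.olDistributionListItem);
        group.DLName = "Project Team";

        // Build a temporary recipients collection, resolve it, then add the members to the group.
        var temp = (Outlook.MailItem)app.CreateItem(Outlook.OlItemType.olMailItem);
        temp.Recipients.Add("alice@example.com");
        temp.Recipients.Add("bob@example.com");
        temp.Recipients.ResolveAll();

        group.AddMembers(temp.Recipients);
        group.Save();
    }
}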

    Read the article

  • Silverlight Cream for February 02, 2011 -- #1039

    - by Dave Campbell
    In this Issue: Tony Champion, Gill Cleeren, Alex van Beek, Michael James, Ollie Riches, Peter Kuhn, Mike Ormond, WindowsPhoneGeek(-2-), Daniel N. Egan, Loek Van Den Ouweland, and Paul Thurott. Above the Fold: Silverlight: "Using the AutoCompleteBox" Peter Kuhn WP7: "Windows Phone Image Button" Loek Van Den Ouweland Training: "New WP7 Virtual Labs" Daniel N. Egan Shoutouts: SilverlightShow has their top 5 most popular news articles up: SilverlightShow for Jan 24-30, 2011 Rudi Grobler posted answers he gives to questions about Silverlight - Where do I start? Brian Noyes starts a series of Webinars at SilverlightShow this morning at 10am PDT: Free Silverlight Show Webinar: Querying and Updating Data From Silverlight Clients with WCF RIA Services Join your fellow geeks at Gangplank in Chandler Arizona this Saturday as Scott Cate and AZGroups brings you Azure Boot Camp – Feb 5th 2011 From SilverlightCream.com: Deploying Silverlight with WCF Services Tony Champion takes a step out of his norm (Pivot) and has a post up about deploying WCF Services with your SL app, and how to take the pain out of that without pulling out your hair. Getting ready for Microsoft Silverlight Exam 70-506 (Part 3) Gill Cleeren's part 3 of getting ready for the Silverlight Exam is up at SilverlightShow... with links to the first two parts. There's so much good information linked off these... thanks Gill and 'The Show'! A guide through WCF RIA Services attributes Alex van Beek has a post up you will probably want to bookmark unless you're not using WCF RIA... do you know all the attributes by heart? ... how about an excellent explanation of 10 of them? Using DeferredLoadListBox in a Pivot Control Michael James discusses using the DeferredLoadListBox, and then also using it with the Pivot control... but not without some pain points which he defines and gives the workaround for. WP7: Know your data Ollie Riches' latest is about Data and WP7 ... specifically 'knowing' what data you're needing/using to avoid the 90MB memory limit... He gives a set of steps to follow to measure your data model to avoid getting in trouble. Using the AutoCompleteBox Peter Kuhn takes a great look at the AutoCompleteBox... the basics, and then well beyond with custom data, item templates, custom filters, asynchronous filtering, and a behavior for MVVM async filtering. OData and Windows Phone 7 Part 2 Mike Ormond has part 2 of his OData/WP7 post up... lashing up the images to go along with the code this time out... nice looking app. WP7 RoundToggleButton and RoundButton in depth WindowsPhoneGeek is checking out the RoundToggleButton and RoundButton controls from the Coding4fun Toolkit in detail... of course where to get them, and then the setup, demo project included. All about Dependency Properties in Silverlight for WP7 WindowsPhoneGeek's latest post is a good dependency-property discussion related to WP7 development, but if you're just learning, it's a good place to learn about the subject. New WP7 Virtual Labs Daniel N. Egan posted links to 6 new WP7 Virtual Labs released on 1/25. Windows Phone Image Button Loek Van Den Ouweland has a style up on his blog that gives you an imageButton for your WP7 apps, and a sweet little video showing how it's done in Expression Blend too. Yet another free Windows Phone book for developers Paul Thurott found a link to another Free eBook for WP7 development. This one is by Puja Pramudya and is an English translation of the original, and is an introductory text, but hey... it's free... give it a look! 
Stay in the 'Light!

    Read the article

  • Design suggestions needed to create a MathBuilder framework

    - by Darf Zon
Let me explain what I'm trying to create. I'm creating a framework; the idea is to provide base classes to generate a math problem. Why do I need this framework? Because I realized that every time I create a new math problem I always do the same steps:

Configuration settings, such as number ranges. For example, if I'm developing multiplications, the beginner level only generates the first number between 2 and 5, while in the advanced level the first number will be between 6 and 9, for example.

A generate-problem method. Every time I need to invoke a method like this to generate the problem. It receives the configuration settings, generates the numbers according to them, and builds the object with the respective data.

Validate the problem. Sometimes the generated problem is not valid. For example, suppose I'm creating fractions in their most simplified form; if I receive 2/4, the program should detect that this is not valid and must generate another one, like 1/4.

Load the view. All of them have a custom view (please see the images below).

All of the problems must know how to CHECK if the user's result is correct. All of these problems have answers. Some of them just require one answer, others may require more than one, so I need a way to keep the flexibility for the developer to handle all the answers he wants to use. At the beginning I started using PRISM. The idea was to generate a module for each math problem and load it in the main system. I guess those are the most important parts of the idea. Let me show some problems which I created in a standalone WPF program. Here I have a math problem about areas. When I generate the problem I hand the object to the view and it draws it. In the beginner level, I set in the configuration settings to load only square types, but in the advanced level it can load triangles and squares randomly. In this other one, I generate a binary problem like addition, subtraction, multiplication or division. The above only generates a single problem. The idea of this is to show a test or quiz, I mean to get a worksheet (this is what I call a collection of problems) where the user can answer it. I hope you get the idea with my ugly drawing. How do I load these math problems? As I said above, I started using PRISM, and each module contains one kind of math problem. This is a snapshot of my first demo. Below it shows the modules loaded, and in the center the respective configurations or levels. Until this moment, I have had no idea how to start creating this software. I just know that I need a question/problem class, a response class and a user class, but I get lost about what properties they should contain. Please give me a little hand with this framework. I put much effort into this question, so if anything isn't clear, let me know and I'll clarify it.
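To make the idea more concrete, here is a minimal sketch of how I imagine the base classes could look (all names here are just placeholders for illustration):

using System;
using System.Collections.Generic;

// Placeholder settings object: each problem type reads the ranges it needs from here.
public class ProblemSettings
{
    public int MinOperand { get; set; }
    public int MaxOperand { get; set; }
}

// Base class: generation, validation and answer checking in one place.
public abstract class MathProblem
{
    protected MathProblem(ProblemSettings settings)
    {
        Settings = settings;
    }

    public ProblemSettings Settings { get; private set; }

    // Template method: regenerate until the produced instance is valid (e.g. no 2/4 fractions).
    public void Generate(Random random)
    {
        do
        {
            GenerateCore(random);
        }
        while (!IsValid());
    }

    protected abstract void GenerateCore(Random random);

    protected virtual bool IsValid()
    {
        return true;
    }

    // Some problems need one answer, others several, so the check takes a list.
    public abstract bool CheckAnswers(IList<string> userAnswers);
}

// Example concrete problem.
public class MultiplicationProblem : MathProblem
{
    public MultiplicationProblem(ProblemSettings settings) : base(settings) { }

    public int Left { get; private set; }
    public int Right { get; private set; }

    protected override void GenerateCore(Random random)
    {
        Left = random.Next(Settings.MinOperand, Settings.MaxOperand + 1);
        Right = random.Next(Settings.MinOperand, Settings.MaxOperand + 1);
    }

    public override bool CheckAnswers(IList<string> userAnswers)
    {
        int value;
        return userAnswers.Count == 1
            && int.TryParse(userAnswers[0], out value)
            && value == Left * Right;
    }
}

A worksheet would then just be a collection of MathProblem instances, and each module could contribute its own MathProblem subclasses plus a matching view.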

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension for an application is required to deploy from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, and so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. Note that such manual changes may not be preserved in all cases; please see below for the full explanation. There are two methods of deployment from HPCM to Essbase from 11.1.2.1: a “replace” deploy method and an “update” deploy method. “Replace” will delete the Essbase application and replace it. If this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved. Notes: If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense. Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100 KB for 32-bit Essbase and 200 KB for 64-bit Essbase. However, calculations may perform better with a larger than recommended block size provided that sufficient memory is available on the Essbase server. Test different configurations to determine the optimal solution for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator's Guide.
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit
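As a quick rule of thumb when reviewing block sizes (this arithmetic is a general block storage Essbase guideline rather than anything HPCM-specific, and the member counts are purely illustrative): the block size is the product of the stored member counts of all dense dimensions, multiplied by 8 bytes per cell. For example, with a dense "Measures" dimension of 40 stored members and a dense target-stage dimension of 300 stored members, the block size is 40 x 300 x 8 = 96,000 bytes, roughly 94 KB and within the 100 KB guideline for 32-bit Essbase; making a third dimension of 12 stored members dense as well would raise it to 40 x 300 x 12 x 8 = 1,152,000 bytes, well beyond either guideline.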

    Read the article

  • Extending Oracle CEP with Predictive Analytics

    - by vikram.shukla(at)oracle.com
Introduction: OCEP is often used as a business rules engine to execute a set of business logic rules via CQL statements, and to make decisions based on the outcome of those rules. There are times when configuring rules manually is sufficient, because an application needs to deal with only a small and well-defined set of static rules. However, in many situations customers don't want to pre-define such rules, for two reasons. First, they are dealing with events with lots of columns, and manually crafting such rules for each column, or a set of columns and combinations thereof, is almost impossible. Second, they are content with probabilistic outcomes and do not care about 100% precision. The former is the case when a user is dealing with data of high dimensionality, the latter when an application can live with "false" positives, as they can be discarded after further inspection, say by a Human Task component in Business Process Management software. The primary goal of this blog post is to show how this can be achieved by combining OCEP with Oracle Data Mining® and leveraging the latter's rich set of algorithms and functionality to do predictive analytics in real time on streaming events. The secondary goal of this post is to show how OCEP can be extended to invoke any arbitrary external computation in an RDBMS from within CEP. This extensibility facility is known as the JDBC cartridge. The rest of the post describes the steps required to achieve this. We use the dataset available at http://blogs.oracle.com/datamining/2010/01/fraud_and_anomaly_detection_made_simple.html to showcase the capabilities; we use it to show how transaction anomalies or fraud can be detected. Building the model: Follow the self-explanatory steps described at the above URL to build the model. It is very simple: it uses built-in Oracle Data Mining PL/SQL packages to cleanse, normalize and build the model out of the dataset. You can also use the graphical Oracle Data Miner® to build the models. To summarize, it involves specifying which algorithms to use (in this case we use Support Vector Machines, as we're trying to find anomalies in a highly dimensional dataset) and building the model on the data in the table for the algorithms specified. For this example, the table was populated in the scott/tiger schema with appropriate privileges. Configuring the Data Source: This is the first step in building a CEP application using such an integration. Our datasource looks as follows in the server config file. It is advisable that you use the Visualizer to add it to the running server dynamically, rather than manually edit the file.
<data-source>         <name>DataMining</name>         <data-source-params>             <jndi-names>                 <element>DataMining</element>             </jndi-names>             <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol>         </data-source-params>         <connection-pool-params>             <credential-mapping-enabled></credential-mapping-enabled>             <test-table-name>SQL SELECT 1 from DUAL</test-table-name>             <initial-capacity>1</initial-capacity>             <max-capacity>15</max-capacity>             <capacity-increment>1</capacity-increment>         </connection-pool-params>         <driver-params>             <use-xa-data-source-interface>true</use-xa-data-source-interface>             <driver-name>oracle.jdbc.OracleDriver</driver-name>             <url>jdbc:oracle:thin:@localhost:1522:orcl</url>             <properties>                 <element>                     <value>scott</value>                     <name>user</name>                 </element>                 <element>                     <value>{Salted-3DES}AzFE5dDbO2g=</value>                     <name>password</name>                 </element>                                 <element>                     <name>com.bea.core.datasource.serviceName</name>                     <value>oracle11.2g</value>                 </element>                 <element>                     <name>com.bea.core.datasource.serviceVersion</name>                     <value>11.2.0</value>                 </element>                 <element>                     <name>com.bea.core.datasource.serviceObjectClass</name>                     <value>java.sql.Driver</value>                 </element>             </properties>         </driver-params>     </data-source>   Designing the EPN: The EPN is very simple in this example. We briefly describe each of the components. The adapter ("DataMiningAdapter") reads data from a .csv file and sends it to the CQL processor downstream. The event payload here is same as that of the table in the database (refer to the attached project or do a "desc table-name" from a SQL*PLUS prompt). While this is for convenience in this example, it need not be the case. One can still omit fields in the streaming events, and need not match all columns in the table on which the model was built. Better yet, it does not even need to have the same name as columns in the table, as long as you alias them in the USING clause of the mining function. (Caveat: they still need to draw values from a similar universe or domain, otherwise it constitutes incorrect usage of the model). There are two things in the CQL processor ("DataMiningProc") that make scoring possible on streaming events. 1.      User defined cartridge function Please refer to the OCEP CQL reference manual to find more details about how to define such functions. We include the function below in its entirety for illustration. 
<?xml version="1.0" encoding="UTF-8"?> <jdbcctxconfig:config     xmlns:jdbcctxconfig="http://www.bea.com/ns/wlevs/config/application"     xmlns:jc="http://www.oracle.com/ns/ocep/config/jdbc">        <jc:jdbc-ctx>         <name>Oracle11gR2</name>         <data-source>DataMining</data-source>               <function name="prediction2">                                 <param name="CQLMONTH" type="char"/>                      <param name="WEEKOFMONTH" type="int"/>                      <param name="DAYOFWEEK" type="char" />                      <param name="MAKE" type="char" />                      <param name="ACCIDENTAREA"   type="char" />                      <param name="DAYOFWEEKCLAIMED"  type="char" />                      <param name="MONTHCLAIMED" type="char" />                      <param name="WEEKOFMONTHCLAIMED" type="int" />                      <param name="SEX" type="char" />                      <param name="MARITALSTATUS"   type="char" />                      <param name="AGE" type="int" />                      <param name="FAULT" type="char" />                      <param name="POLICYTYPE"   type="char" />                      <param name="VEHICLECATEGORY"  type="char" />                      <param name="VEHICLEPRICE" type="char" />                      <param name="FRAUDFOUND" type="int" />                      <param name="POLICYNUMBER" type="int" />                      <param name="REPNUMBER" type="int" />                      <param name="DEDUCTIBLE"   type="int" />                      <param name="DRIVERRATING"  type="int" />                      <param name="DAYSPOLICYACCIDENT"   type="char" />                      <param name="DAYSPOLICYCLAIM" type="char" />                      <param name="PASTNUMOFCLAIMS" type="char" />                      <param name="AGEOFVEHICLES" type="char" />                      <param name="AGEOFPOLICYHOLDER" type="char" />                      <param name="POLICEREPORTFILED" type="char" />                      <param name="WITNESSPRESNT" type="char" />                      <param name="AGENTTYPE" type="char" />                      <param name="NUMOFSUPP" type="char" />                      <param name="ADDRCHGCLAIM"   type="char" />                      <param name="NUMOFCARS" type="char" />                      <param name="CQLYEAR" type="int" />                      <param name="BASEPOLICY" type="char" />                                     <return-component-type>char</return-component-type>                                                      <sql><![CDATA[             SELECT to_char(PREDICTION_PROBABILITY(CLAIMSMODEL, '0' USING *))               AS probability             FROM (SELECT  :CQLMONTH AS MONTH,                                            :WEEKOFMONTH AS WEEKOFMONTH,                          :DAYOFWEEK AS DAYOFWEEK,                           :MAKE AS MAKE,                           :ACCIDENTAREA AS ACCIDENTAREA,                           :DAYOFWEEKCLAIMED AS DAYOFWEEKCLAIMED,                           :MONTHCLAIMED AS MONTHCLAIMED,                           :WEEKOFMONTHCLAIMED,                             :SEX AS SEX,                           :MARITALSTATUS AS MARITALSTATUS,                            :AGE AS AGE,                           :FAULT AS FAULT,                           :POLICYTYPE AS POLICYTYPE,                            :VEHICLECATEGORY AS VEHICLECATEGORY,                           :VEHICLEPRICE AS VEHICLEPRICE,                           :FRAUDFOUND AS FRAUDFOUND,                           :POLICYNUMBER AS 
POLICYNUMBER,                           :REPNUMBER AS REPNUMBER,                           :DEDUCTIBLE AS DEDUCTIBLE,                            :DRIVERRATING AS DRIVERRATING,                           :DAYSPOLICYACCIDENT AS DAYSPOLICYACCIDENT,                            :DAYSPOLICYCLAIM AS DAYSPOLICYCLAIM,                           :PASTNUMOFCLAIMS AS PASTNUMOFCLAIMS,                           :AGEOFVEHICLES AS AGEOFVEHICLES,                           :AGEOFPOLICYHOLDER AS AGEOFPOLICYHOLDER,                           :POLICEREPORTFILED AS POLICEREPORTFILED,                           :WITNESSPRESNT AS WITNESSPRESENT,                           :AGENTTYPE AS AGENTTYPE,                           :NUMOFSUPP AS NUMOFSUPP,                           :ADDRCHGCLAIM AS ADDRCHGCLAIM,                            :NUMOFCARS AS NUMOFCARS,                           :CQLYEAR AS YEAR,                           :BASEPOLICY AS BASEPOLICY                 FROM dual)                 ]]>         </sql>        </function>     </jc:jdbc-ctx> </jdbcctxconfig:config> 2.      Invoking the function for each event. Once this function is defined, you can invoke it from CQL as follows: <?xml version="1.0" encoding="UTF-8"?> <wlevs:config xmlns:wlevs="http://www.bea.com/ns/wlevs/config/application">   <processor>     <name>DataMiningProc</name>     <rules>        <query id="q1"><![CDATA[                     ISTREAM(SELECT S.CQLMONTH,                                   S.WEEKOFMONTH,                                   S.DAYOFWEEK, S.MAKE,                                   :                                         S.BASEPOLICY,                                    C.F AS probability                                                 FROM                                 StreamDataChannel [NOW] AS S,                                 TABLE(prediction2@Oracle11gR2(S.CQLMONTH,                                      S.WEEKOFMONTH,                                      S.DAYOFWEEK,                                       S.MAKE, ...,                                      S.BASEPOLICY) AS F of char) AS C)                       ]]></query>                 </rules>               </processor>           </wlevs:config>   Finally, the last stage in the EPN prints out the probability of the event being an anomaly. One can also define a threshold in CQL to filter out events that are normal, i.e., below a certain mark as defined by the analyst or designer. Sample Runs: Now let's see how this behaves when events are streamed through CEP. We use only two events for brevity, one normal and other one not. This is one of the "normal" looking events and the probability of it being anomalous is less than 60%. 
Event is: eventType=DataMiningOutEvent object=q1  time=2904821976256 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=300, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.58931702982118561 isTotalOrderGuarantee=true\nAnamoly probability: .58931702982118561 However, the following event is scored as an anomaly with a very high probability of 89%, so there is likely to be something wrong with it. A close look reveals that the value of the "deductible" field (10000) is not "normal". What exactly constitutes normal here? If you run a query on the database to find ALL distinct values for the "deductible" field, it returns the following set: {300, 400, 500, 700} Event is: eventType=DataMiningOutEvent object=q1  time=2598483773496 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=10000, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.89171554529576691 isTotalOrderGuarantee=true\nAnamoly probability: .89171554529576691 Conclusion: By way of this example, we show real-time scoring of events as they flow through CEP, leveraging Oracle Data Mining, and how CEP applications can invoke complex arbitrary external computations (function shipping) in an RDBMS.

    Read the article

  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
Just like for any other developer or DBA, SQL Server Management Studio is my favorite application. At any moment in time I have multiple instances of the application open and I am working in them. Recently, I have come across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog post about it. First, create a read-only SQL file. You can make any file read-only via Right Click >> Properties >> select the Read Only attribute. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small ‘lock’ icon. This small icon indicates that the file is read-only. Now let us attempt to edit the read-only file. It will let us edit the file any way we want; however, when we attempt to save it, it shows the following pop-up. The options in the pop-up are self-explanatory and I like them. The goal of a read-only file is to prevent users from making unintended changes. However, the user should still have complete control over his own file. The user should be aware that the file is read-only, but if he wants to edit the file or save it as a new file, those choices should be presented to him, and the pop-up captures precisely that. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is “Allow editing of read-only files; warn when attempt to save”. In the above scenario it was already checked. Let us uncheck it and repeat the exercise we did earlier. I closed all the earlier windows to avoid confusion. With this option unchecked, when I attempt to even modify the read-only file, it gives me a totally different pop-up screen. It gives me options like “Edit In-Memory”, “Make Writeable”, etc. When you select “Edit In-Memory” it allows you to edit the file, and later you can save it as a new file, just like the earlier scenario we discussed. If you click Make Writeable, it will remove the read-only restriction and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com)
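By the way, if you prefer to create the read-only test file programmatically rather than through Windows Explorer, a tiny C# snippet can do the same thing (the path and query here are just placeholders):

using System.IO;

class MakeReadOnly
{
    static void Main()
    {
        const string path = @"C:\Temp\demo-query.sql";   // hypothetical file path

        File.WriteAllText(path, "SELECT GETDATE();");

        // Add the ReadOnly attribute, equivalent to Right Click >> Properties >> Read Only.
        File.SetAttributes(path, File.GetAttributes(path) | FileAttributes.ReadOnly);
    }
}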

    Read the article
