Search Results

Search found 14131 results on 566 pages for 'note'.


  • Retrieving only the first record or record at a certain index in LINQ

    - by vik20000in
    While working with data it’s not always required that we fetch all the records. Many times we only need to fetch the first record, or a record at some index, in the record set. With LINQ we can get the desired record very easily with the help of the provided element operators.

    Simply get the first record. If you want only the first record in the record set we can use the First method. (Note that this can also be done with the help of the Take method by providing the value one.)

        List<Product> products = GetProductList();

        Product product12 = (
            from prod in products
            where prod.ProductID == 12
            select prod)
            .First();

    We can also very easily put a condition on which first record should be fetched:

        string[] strings = { "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine" };
        string startsWithO = strings.First(s => s[0] == 'o');

    In the above example the result would be “one” because that is the first element starting with “o”. There is also the chance that no value is returned in the result set. When we know of such a possibility we can use the FirstOrDefault() method to return the first record or, in case there are no records, the default value.

        int[] numbers = {};
        int firstNumOrDefault = numbers.FirstOrDefault();

    In case we do not want the first record but the second, the third, or any other later record, we can use the ElementAt() method. To the ElementAt() method we pass the index for which we want the record, and we receive the element at that position.

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

        int fourthLowNum = (
            from num in numbers
            where num > 5
            select num )
            .ElementAt(1);

    Vikram
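    A minimal sketch of the Take-based equivalent mentioned above; the products list and ProductID mirror the example data, and the side-by-side comparison is an illustration rather than part of the original post:

        // First() returns the element itself and throws on an empty sequence.
        // Take(1) returns a one-element sequence, so it still needs unwrapping;
        // FirstOrDefault() makes the empty case safe.
        Product viaFirst = products.First(p => p.ProductID == 12);
        Product viaTake = products.Where(p => p.ProductID == 12).Take(1).FirstOrDefault();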

    Read the article

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation – something getting itself done after the initial programming – is my understanding of the subject. The very next thought was – is it good or evil? The reality is there is no right answer. However, what if we quickly note a few things? Then I would like to request your help to complete this post. We will start with the positive parts of SQL Server where automation happens.

    The Good

    If I start thinking of SQL Server and automation, the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure any task or job, it runs fine (till something goes wrong!). Well, automation has its own advantages. We have all used SQL Agent for so many things – backups, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run in your database server?

    The Ugly

    This part is very interesting, because it can get really ugly(!). During my career I have found so many bad automation agent jobs:

    A client had an agent job that dropped the clean buffers every hour
    A client used database mail to send regular emails instead of only the necessary alert-related emails
    The best one – a client used the new Missing Index and Unused Index scripts in a SQL Agent job to follow their suggestions 100%. Believe me, I have never seen such a badly performing and hard to optimize database. (I ended up dropping all non-clustered indexes on the development server, ran the production workload on the development server again, and then configured it with optimal indexes.)

    Shrinking a database is a performance killer. It should never be automated. SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    The one I hate the most is AutoShrink Database. It has given me a hard time in my career quite a few times. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server

    Automation is necessary, but common sense is a must when creating automation.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
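    Since AutoShrink is called out above as the worst offender, here is a quick way to audit for it. A minimal T-SQL sketch using the standard sys.databases catalog view; the database name in the ALTER statement is a placeholder:

        -- List databases that currently have AUTO_SHRINK enabled
        SELECT name
        FROM sys.databases
        WHERE is_auto_shrink_on = 1;

        -- Turn it off for an affected database (placeholder name)
        ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;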

    Read the article

  • How to add items to the Document Types context menu.

    - by Vizioz Limited
    I am currently working on an extension to Umbraco that needed an extra menu item on the Document Types menu in the Settings section. My first thought was to use the BeforeNodeRender event to add the menu item, but this event only fires on the Content and Media menu trees. See: Codeplex Issue 21623

    The "temporary" solution has been to extend the "loadNodeTypes" class. I say temporary because I assume that the core team may well extend the event functionality to the entire menu tree in the future, which would be much better and would prevent you from having to completely override the menu items. This was suggested by Slace on my "is it possible to add an item to the context menu in trees other than media content" post on the our.umbraco.org site.

    There are three things you need to do:

    1) Override loadNodeTypes

        using System.Collections.Generic;
        using umbraco.interfaces;
        using umbraco.BusinessLogic.Actions;

        namespace Vizioz.xMind2Umbraco
        {
            // Note: Remember these menus might change in future versions of Umbraco
            public class loadNodeTypesMenu : umbraco.loadNodeTypes
            {
                public loadNodeTypesMenu(string application) : base(application) { }

                protected override void CreateRootNodeActions(ref List<IAction> actions)
                {
                    actions.Clear();
                    actions.Add(ActionNew.Instance);
                    actions.Add(xMindImportAction.Instance);
                    actions.Add(ContextMenuSeperator.Instance);
                    actions.Add(ActionImport.Instance);
                    actions.Add(ContextMenuSeperator.Instance);
                    actions.Add(ActionRefresh.Instance);
                }

                protected override void CreateAllowedActions(ref List<IAction> actions)
                {
                    actions.Clear();
                    actions.Add(ActionCopy.Instance);
                    actions.Add(xMindImportAction.Instance);
                    actions.Add(ContextMenuSeperator.Instance);
                    actions.Add(ActionExport.Instance);
                    actions.Add(ContextMenuSeperator.Instance);
                    actions.Add(ActionDelete.Instance);
                }
            }
        }

    2) Create a new Action to be used on the menu

        using umbraco.interfaces;

        namespace Vizioz.xMind2Umbraco
        {
            public class xMindImportAction : IAction
            {
                #region Implementation of IAction

                public static xMindImportAction Instance
                {
                    get { return umbraco.Singleton<xMindImportAction>.Instance; }
                }

                public char Letter { get { return 'p'; } }
                public bool ShowInNotifier { get { return true; } }
                public bool CanBePermissionAssigned { get { return true; } }
                public string Icon { get { return "editor/OPEN.gif"; } }
                public string Alias { get { return "Import from xMind"; } }

                public string JsFunctionName
                {
                    get { return "openModal('dialogs/xMindImport.aspx?id=' + nodeID, 'Publish Management', 550, 480);"; }
                }

                public string JsSource { get { return string.Empty; } }

                #endregion
            }
        }

    3) Update the UmbracoAppTree table; in my case I changed the following:

    TreeHandlerType, from loadNodeTypes to loadNodeTypesMenu
    TreeHandlerAssembly, from umbraco to Vizioz.xMind2Umbraco

    And the result is:
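    For step 3, a hedged SQL sketch of the table change described above; the umbracoAppTree table and column names are inferred from the values listed and should be verified against your Umbraco version's schema before running anything:

        UPDATE umbracoAppTree
        SET    treeHandlerType     = 'loadNodeTypesMenu',
               treeHandlerAssembly = 'Vizioz.xMind2Umbraco'
        WHERE  treeHandlerType     = 'loadNodeTypes'
          AND  treeHandlerAssembly = 'umbraco';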

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example

    - by pinaldave
    For the last several days I have been working on various Denali analytical functions, and it is indeed real fun to refresh the concepts which I studied in school. Earlier I wrote an article where I explained how we can use PERCENTILE_CONT() to find the median, over here: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012. Today I am going to ask a question based on the same blog post. Again, just like last time, the intention of this puzzle is the following:

    Learn a new concept of SQL Server 2012
    Learn a new concept of SQL Server 2012 even if you are on an earlier version of SQL Server

    On another note, SQL Server 2012 RC0 has been announced and is available to download: SQL SERVER – 2012 RC0 Various Resources and Downloads. Now let’s have fun with the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    The above query will give us the following result: The reason we get the median is because we are passing the value 0.5 to the PERCENTILE_CONT() function. Now read the puzzle.

    Puzzle: Run the following T-SQL code:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    Observe the result and you will notice that MedianCont has a different value than before; the reason is that the PERCENTILE_CONT function has the value 0.9 passed in. For the first four rows the value is 775.1. Now run the following T-SQL code:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            PERCENTILE_CONT(0.1) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    Observe the result and you will notice that MedianCont again has a different value; the reason is that the PERCENTILE_CONT function has the value 0.1 passed in. For the first four rows the value is 709.3. Now, in my example I have explained how the median is found using this function. You have to explain using mathematics (in easy words) why the values in the last column are 709.3 and 775.1. Hint: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012

    Rules: Leave a comment with your detailed answer by Nov 25’s blog post. Open world-wide (where Amazon ships books). If you blog about the puzzle’s solution and you win, you win an additional surprise gift as well.

    Prizes: Print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle].

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
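    Without giving the puzzle away, the general mechanics can be sketched with hypothetical data. The four ProductID values below are stand-ins for one SalesOrderID partition, chosen only to show how linear interpolation can produce results like 709.3 and 775.1:

        -- PERCENTILE_CONT(p) over n ordered rows computes the fractional row position
        --   pos = 1 + p * (n - 1)
        -- and linearly interpolates between the two surrounding values.
        -- Hypothetical partition of n = 4 ProductIDs: 709, 710, 773, 776
        --   p = 0.1: pos = 1 + 0.1 * 3 = 1.3  ->  709 + 0.3 * (710 - 709) = 709.3
        --   p = 0.9: pos = 1 + 0.9 * 3 = 3.7  ->  773 + 0.7 * (776 - 773) = 775.1
        --   p = 0.5: pos = 2.5                ->  710 + 0.5 * (773 - 710) = 741.5 (the median)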

    Read the article

  • First impressions of Scala

    - by Scott Weinstein
    I have an idea that it may be possible to predict build success/failure based on commit data. Why Scala? It’s a JVM language, has lots of powerful type features, and it has a linear algebra library which I’ll need later.

    Project definition and build

    Neither Maven nor the Scala build tool (sbt) is completely satisfactory. This Maven archetype (what .Net folks would call a VS project template)

        mvn archetype:generate `
            -DarchetypeGroupId=org.scala-tools.archetypes `
            -DarchetypeArtifactId=scala-archetype-simple `
            -DremoteRepositories=http://scala-tools.org/repo-releases `
            -DgroupId=org.SW -DartifactId=BuildBreakPredictor

    gets you started right away with “hello world” code, unit tests demonstrating a number of different testing approaches, and even a ready-made `.gitignore` file - nice! But the Scala version is behind at v2.8, and more seriously, compiling and testing was painfully slow. So much so that a rapid edit – test – edit cycle was not practical. So Lab49 colleague Steve Levine tells me that I can either adjust my pom to use fsc – the fast scala compiler – or use sbt.

    Sbt has some nice features:
    It’s fast – it uses fsc by default
    It has a continuous mode, so `> ~test` will compile and run your unit tests each time you save a file
    It can consume (and produce) Maven 2 dependencies
    The build definition file can be much shorter than the equivalent pom (about 1/5 the size, as repos and dependencies can be declared on a single line; see the sketch below)

    And some real limitations:
    Limited support for 3rd party integration – for instance, out of the box TeamCity doesn’t speak sbt, nor does IntelliJ IDEA
    Steeper learning curve for build steps outside the default

    Side note: If a language has a fast compiler, why keep the slow compiler around? Even worse, why make it the default? I chose sbt, for the faster development speed it offers.

    Syntax

    Scala APIs really like to use punctuation – sometimes this works well, as in the following

        map1 |+| map2

    The `|+|` defines a merge operator which does addition on the `values` of the maps. It’s less useful here:

        http(baseUrl / url >- parseJson[BuildStatus])

    Sure, you can probably guess what `>-` does from the context, but how about `>~` or `>+`?

    Language features

    I’m still learning, so not much to say just yet. However, case classes are quite useful, implicits scare me, and type constructors have lots of power.

    Community

    A number of projects, such as https://github.com/scalala and https://github.com/scalaz/scalaz, are split between github and google code – github for the src, and google code for the docs. Not sure I understand the motivation here.
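    A minimal sketch of the single-line dependency style mentioned above, assuming an sbt 0.10+-style build.sbt (where blank lines between settings were required); the library and version numbers are illustrative only:

        name := "BuildBreakPredictor"

        scalaVersion := "2.9.1"

        // repos and dependencies each declared on a single line
        resolvers += "Scala Tools" at "http://scala-tools.org/repo-releases"

        libraryDependencies += "org.scalatest" %% "scalatest" % "1.6.1" % "test"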

    Read the article

  • Add a Cache Clearing Button to Firefox

    - by Asian Angel
    While emptying your browser’s cache may not be something that you need to worry about often, or at all, there are times when clearing it can be helpful. The Empty Cache Button extension gives you instant on-demand cache clearing in Firefox.

    Some reasons why you might want or need to clear your browser’s cache:
    Clear out older (or out-of-date) versions of images, etc. from your favorite websites
    Free up disk space
    Clearing the cache may help fix browser behavior issues
    Help protect privacy (i.e. images, etc. displayed within a personal account)

    Before: For our example we loaded three webpages in order to add content to our browser’s cache. Using the “CacheViewer” we were able to easily see the contents of our browser’s cache after the webpages finished loading. What if you need to clear your cache immediately without restarting your browser (if the options are set to empty the cache on browser exit)? Note: CacheViewer is available via a separate extension and can be found here.

    Empty Cache Button in Action: Once you install the extension, all that you need to do is right-click on any of your browser’s toolbars and select “Customise”. Drag the “Toolbar Button” to an appropriate location in your browser’s UI and you are ready to go. To clear your browser’s cache simply click the button… that is all there is to it. When the cache is empty you will see this small message window appear in the lower right corner of your “Desktop”. Opening up the “CacheViewer” again shows that everything has been cleared out. Terrific!

    Conclusion: If you ever find yourself needing to clear your browser’s cache immediately, the Empty Cache Button extension provides an easy way to do so without restarting your browser (if the options are set to empty the cache on browser exit).

    Links: Download the Empty Cache Button extension (Mozilla Add-ons)

    Read the article

  • How to Customize Your How-To Geek RSS Feeds (We’re Changing Things)

    - by The Geek
    If you’re an RSS subscriber, you’ll soon notice that we’re making a few changes. Why? It’s time to simplify our system, while providing you a little more control over which articles you want to see. The point, of course, is that people like different things, and that’s OK. What’s not so great is getting complaints—Linux users are always whining about Windows posts, and Windows users are whining when we write Linux posts. It’s also worth pointing out that if you aren’t interested in a post, you don’t have to click on it to read it. This is probably fairly obvious to reasonable people.

    The New Feeds

    Here’s the new set of feeds you can subscribe to. We’ll probably add more fine-grained feeds in the future, as we get some more things straightened out.
    Everything we publish (news, how-tos, features)
    Just the Feature Articles (the absolute best stuff)
    Just News (ETC) Posts
    Just Windows Articles
    Just Linux Articles
    Just Apple Articles
    Just Desktop Fun Articles

    You can obviously subscribe to one or many of them if you feel like it.

    The Once Daily Summary Feed!

    If you’d rather get all your How-To Geek in a single dose each day, you can subscribe to the summary feed, which is pretty much the same as our daily email newsletter. You can subscribe to this summary feed by clicking here.

    Note: we’re working on a lot of backend changes to hopefully make things a little better for you, the reader. One of the things we’ve consistently had feedback on is the comment system, which we’ll tackle a little later. Also, if you suddenly saw a barrage of posts earlier… oops! Our mistake.

    Read the article

  • How to internally rewrite a page when requested from a specific HTTP_HOST

    - by Andy
    Hi all, I have a Drupal site, site.com, and our client has a campaign that they're promoting for which they've bought a new domain name, campaign.com. I'd like it so that a request for campaign.com internally rewrites to a particular page of the Drupal site. Note Drupal uses an .htaccess file in the document root. The normal Drupal rewrite is

        # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

    I added the following before the normal rewrite.

        # Custom URLS (eg. microsites) go here
        RewriteCond %{HTTP_HOST} =campaign.com
        RewriteCond %{REQUEST_URI} =/
        RewriteRule ^ index.php?q=node/22 [L]

    Unfortunately it doesn't work, it just shows the homepage. Turning on the rewrite log I get this.

        1.  [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/ -
        2.  [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] applying pattern '^' to uri ''
        3.  [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] rewrite '' - 'index.php?q=node/22'
        4.  [rid#2da8ea8/initial] (3) split uri=index.php?q=node/22 - uri=index.php, args=q=node/22
        5.  [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] add per-dir prefix: index.php - D:/wamp/www/index.php
        6.  [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] strip document_root prefix: D:/wamp/www/index.php - /index.php
        7.  [rid#2da8ea8/initial] (1) [perdir D:/wamp/www/] internal redirect with /index.php [INTERNAL REDIRECT]
        8.  [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php - index.php
        9.  [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^' to uri 'index.php'
        10. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php - index.php
        11. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^(.*)$' to uri 'index.php'
        12. [rid#2da7770/initial/redir#1] (1) [perdir D:/wamp/www/] pass through D:/wamp/www/index.php

    I'm not used to mod_rewrite, so I might be missing something, but comparing the logs from a call to http://site.com/node/3 and from http://campaign.com/ I can't see any meaningful difference. Specifically uri and args on line 4 seem correct, the internal redirect on line 7 seems right, and the pass through on line 12 seems right (because the file index.php exists). But for some reason it seems the query string's been discarded/ignored around the time of the internal redirect. I'm completely stumped. Also, if anyone could provide a reference on understanding the rewrite log, that might help. It'd be great if there's a way to track the query string through the internal redirect. FWIW I'm using WampServer 2.1 with Apache 2.2.17.
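    Not a verified answer, but one variant commonly suggested for per-directory (.htaccess) context is to match the stripped path explicitly rather than using a bare '^'; treat this purely as a hedged sketch to experiment with:

        # In per-dir context the leading slash is stripped, so a request for "/" arrives as ""
        RewriteCond %{HTTP_HOST} =campaign.com
        RewriteRule ^$ index.php?q=node/22 [L]

    If the homepage still appears, it may also be worth checking on the PHP side whether Drupal's front-page handling or a multisite configuration for campaign.com overrides the incoming q parameter, since the rewrite log above shows q=node/22 surviving up to the internal redirect.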

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible.  Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory Profiler – gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

    Notice: As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program.  I have not let this affect my opinion of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product.

    Introduction

    The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no-brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading and installing the program, you fill out a form with your usual contact information.  Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate’s friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

    ANTS CLR Profiler

    The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profiler for the CLR and Memory, but also Visual Studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under the ANTS menu). Starting the CLR Performance Profiler from the start menu yields this window. If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different DPI scales.
    The profiler options give you the ability to profile:

    .NET Executable
    ASP.NET web application (hosted in IIS)
    ASP.NET web application (hosted in IIS Express)
    ASP.NET web application (hosted in Cassini Web Development Server)
    SharePoint web application (hosted in IIS)
    Silverlight 4+ application
    Windows Service
    COM+ server
    XBAP (local XAML browser application)
    Attach to an already running .NET 4 process

    Choosing each option provides a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .NET project types, and make it simple to do so.  In most cases when using this application, I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling after launching, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options the profiler gives us, it’s time to test its abilities and features.

    Horrendous Example Code – Prime Number Generator

    One of my interests besides development is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, to test the abilities of the profiler, I decided I would go ahead and write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool.

    First off, the IPrimes interface (all code is downloadable at the end of the post):

        interface IPrimes
        {
            IEnumerable<int> GetPrimes(int retrieve);
        }

    Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way:

        public class DumbPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                //store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                //if I ask for 1 or two primes, return what was asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                //the next number to look at
                int _analyzing = 4;

                //since I already determined I don't have enough,
                //execute at least once, and until the quantity is sufficed
                do
                {
                    //assume prime until otherwise determined
                    bool isPrime = true;

                    //start dividing at 2
                    //divide until number is reached, or determined not prime
                    for (int i = 2; i < _analyzing && isPrime; i++)
                    {
                        //if (i) goes into _analyzing without a remainder,
                        //_analyzing is NOT prime
                        if (_analyzing % i == 0)
                            isPrime = false;
                    }

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(_analyzing);

                    //increment number to analyze next
                    _analyzing++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    This is the simplest way to get primes in my opinion.
    Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button.

    Starting a new Profiling Session

    So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code’s efficiency by profiling it.  All I have to do is build the solution (surprised that initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade-off.  It may be WAY more inefficient; however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around.

    Analyzing the Profiling Session

    After clicking ‘Stop Profiling’, the process running my application stopped, the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here; I am going to cover each in a section of its own:

    Real Time Performance Counter Bar (top of screen)

    At the top of the screen is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counter’s status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after being reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph, below the time ticks but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.
    Clicking the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected:

    Method Call Tree, ADO.NET Database Calls, File IO – Detail Panel

    Red Gate really hit the mark here, I think. When you select a section of the run in the graph, the call tree populates with a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling method’s time; in other words, working on making them better is not where your efforts should be focused. – Smart!

    Source Panel – Detail Panel

    The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyway! As you can notice, there does seem to be a problem with how ANTS determines which line a call is actually completed on. I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly. In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit; however, I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time.

    Feature – Suggest Method for Optimization

    A neat feature to really help those in need of a pointer is the menu option under Tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of us who aren’t great on sight:

    Process Integration

    I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development and testing, and then, after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, run it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in one method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).
    Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers fall into the two buckets 6k+/-1, and a number is prime if it is not divisible by any other primes before it:

        public class SmartPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                //store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                //if I ask for 1 or two primes, return what was asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                //the next k to look at
                int _k = 1;

                //since I already determined I don't have enough,
                //execute at least once, and until the quantity is sufficed
                do
                {
                    //assume prime until otherwise determined
                    bool isPrime = true;
                    int potentialPrime;

                    //analyze 6k-1
                    //assign the value to potential
                    potentialPrime = 6 * _k - 1;

                    //if any of the found primes divides this, it is NOT a prime number
                    //using PLINQ for a quick boost
                    isPrime = !_foundPrimes.AsParallel()
                        .Any(prime => potentialPrime % prime == 0);

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(potentialPrime);

                    if (_foundPrimes.Count() == retrieve)
                        break;

                    //analyze 6k+1
                    //assign the value to potential
                    potentialPrime = 6 * _k + 1;

                    //if any of the found primes divides this, it is NOT a prime number
                    //using PLINQ for a quick boost
                    isPrime = !_foundPrimes.AsParallel()
                        .Any(prime => potentialPrime % prime == 0);

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(potentialPrime);

                    //increment k to analyze next
                    _k++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    Now there are definitely more things I can do to make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than one core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments… Odd.

    Profiling Web Applications

    The last thing that I wanted to cover, which means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler for profiling web applications. In my solution, I have a simple MVC4 application set up with one page, a single input form, that will output prime values as my WPF app did. Launching the profiler from Visual Studio as before, nothing is really different in the profiler window; however, I did receive a UAC prompt for a Red Gate helper app to integrate with the web server, without notification. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by GET/POST operations, so it is easier to find out what action/page is giving the largest problems… Pretty cool in my mind!

    Overview

    Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long way to go.
    3 Biggest Pros:
    Ability to easily drill down from the time graph, to method calls, to source code
    Wide variety of counters to choose from when profiling your application
    Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!

    3 Biggest Cons:
    Issue regarding line details in source view
    Nit pick – Processor time vs. Core time
    Nit pick – Lack of full integration with Visual Studio

    Ratings

    Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that that entails, and the lack of better integration with Visual Studio.
    Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus!
    Features (9/10) – Besides the real time performance monitoring and the drill downs that I’ve shown here, ANTS also has great integration with ADO.NET, with the ability to show database queries run by your application in the profiler.  This, with the line-level details, the web request grouping, Reflector integration, and various options to customize your profiling session, I think creates a great set of features!
    Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!
    UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat!
    Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend you try the application and see if it would fit into your process, BUT remember there are still some kinks in it to hopefully be worked out.

    My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.

    Code

    Feel free to download the code I used above – download via DropBox
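    To reproduce the article's rough no-profiler timings, a minimal console harness sketch; DumbPrimes, SmartPrimes, and IPrimes are the types from the article, while the harness itself is assumed:

        using System;
        using System.Diagnostics;

        class Program
        {
            static void Main()
            {
                // Time both implementations on the same 15000-prime workload.
                foreach (IPrimes impl in new IPrimes[] { new DumbPrimes(), new SmartPrimes() })
                {
                    var sw = Stopwatch.StartNew();
                    impl.GetPrimes(15000);
                    sw.Stop();
                    Console.WriteLine("{0}: {1:F2}s", impl.GetType().Name, sw.Elapsed.TotalSeconds);
                }
            }
        }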

    Read the article

  • Build Open JDK 7 on Mac OSX (TOTD #172)

    - by arungupta
    The complete requirements, pre-requisites, and steps to build the OpenJDK 7 port on Mac OS X are described here. The steps are very clearly explained and here are the exact ones I followed on my MacBook Pro running 10.7.2:

    Confirm the version of pre-installed Java:

        > java -version
        java version "1.6.0_26"
        Java(TM) SE Runtime Environment (build 1.6.0_26-b03-383-11A511c)
        Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02-383, mixed mode)

    Download and install Mercurial from mercurial.berkwood.com (the zip bundle for 10.7 is here). It gets installed in the /usr/local/bin directory.

    Get the source code (commands highlighted in bold):

        hg clone http://hg.openjdk.java.net/macosx-port/macosx-port
        destination directory: macosx-port
        requesting all changes
        adding changesets
        adding manifests
        adding file changes
        added 437 changesets with 364 changes to 33 files
        updating to branch default
        31 files updated, 0 files merged, 0 files removed, 0 files unresolved

        cd macosx-port
        chmod 7555 get_source.sh
        ./get_source.sh
        # Repos:  corba jaxp jaxws langtools jdk hotspot
        Starting on corba
        Starting on jaxp
        Starting on jaxws
        Starting on langtools
        Starting on jdk
        Starting on hotspot
        # hg clone http://hg.openjdk.java.net/macosx-port/macosx-port/corba corba
        requesting all changes
        adding changesets
        adding manifests
        adding file changes
        added 396 changesets with 3275 changes to 1379 files
        . . .
        # exit code 0
        # cd ./corba && hg pull -u
        pulling from http://hg.openjdk.java.net/macosx-port/macosx-port/corba
        searching for changes
        no changes found
        # exit code 0
        # cd ./jaxp && hg pull -u
        pulling from http://hg.openjdk.java.net/macosx-port/macosx-port/jaxp
        searching for changes
        no changes found
        # exit code 0

    Install Xcode from the App Store. Include /Developer/usr/bin in PATH. Note: JDK 1.6.0_26 came pre-installed on my laptop and I installed Xcode after that. The compilation went fine and there was no need to re-install Java for Mac OS X as mentioned in the original steps.

    Build the code:

        make ALLOW_DOWNLOADS=true SA_APPLE_BOOT_JAVA=true ALWAYS_PASS_TEST_GAMMA=true ALT_BOOTDIR=`/usr/libexec/java_home -v 1.6` HOTSPOT_BUILD_JOBS=`sysctl -n hw.ncpu`

    The final output is shown as:

        >>>Finished making images @ Sat Nov 19 00:59:04 WET 2011 ...
        ############################################################
        # Leaving jdk for target(s) sanity all docs images
        ############################################################
        # Build time 00:17:42 jdk for target(s) sanity all docs images
        ############################################################

        ########## Build times ##########
        Target all_product_build
        Start 2011-11-19 00:32:40
        End   2011-11-19 00:59:04
        00:01:46 corba
        00:04:07 hotspot
        00:00:51 jaxp
        00:01:21 jaxws
        00:17:42 jdk
        00:00:37 langtools
        00:26:24 TOTAL
        #################################

    Change the directory and verify the version:

        > cd build/macosx-universal/j2sdk-image/1.7.0.jdk/Contents/Home/bin
        > ./java -version
        openjdk version "1.7.0-internal"
        OpenJDK Runtime Environment (build 1.7.0-internal-arungup_2011_11_19_00_32-b00)
        OpenJDK 64-Bit Server VM (build 21.0-b17, mixed mode)

    Now go fix some bugs, file new bugs, or discuss at the macosx-port-dev mailing list.
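    A small shell sketch of the PATH change mentioned above; the /Developer path assumes the Xcode 4-era layout, and /usr/local/bin is where the Mercurial installer above puts hg:

        export PATH=/Developer/usr/bin:/usr/local/bin:$PATH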

    Read the article

  • How to perform feature upgrade in SharePoint2010 part1

    - by ybbest
    Once your custom SharePoint solution has gone into production, any changes made to it require you to perform a feature upgrade. Today, I’d like to show you how to perform a feature upgrade. For the first version of my solution, I deploy a document library with a custom document set content type. You can download the solution here. Once you extract the solution, the first version is in the Original folder. In order to deploy the original solution, you need to run sitecreation.ps1 in the Script folder. Next, I will modify the solution so that it indexes the ApplicationNumber field in the document library I just created in my original solution.

    1. Modify the ApplicationLibrary.Template.xml as highlighted below:

    2. Add the following code to the feature event receiver:

        public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters)
        {
            base.FeatureUpgrading(properties, upgradeActionName, parameters);
            SPWeb web = GetFeatureWeb(properties);
            switch (upgradeActionName)
            {
                case "IndexApplicationNumber":
                    SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName);
                    if (applicationLibrary != null)
                    {
                        SPField queueField = applicationLibrary.Fields["ApplicationNumber"];
                        queueField.Indexed = true;
                        queueField.Update();
                    }
                    break;
            }
        }

    3. Package your solution and run the feature upgrade PowerShell script:

        $wspFolder = "v1.1"
        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $siteUrl = "http://ybbest"
        $featureToCheckGuid = "1b9d84cd-227d-45f1-92d4-a43008aa8fe7"
        $requiredFeatureVersion = "0.0.0.0"
        $siteUrlOfFeatureToBeChecked = "http://ybbest"
        AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta
        & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked
        Write-Host
        AppendLog "All features updated" "Green"

    Note: If you have not versioned your feature explicitly, your feature version will be 0.0.0.0.

    References: Feature upgrade.
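    For the FeatureUpgrading handler above to fire with upgradeActionName "IndexApplicationNumber", the feature template must declare a matching upgrade action. A hedged sketch of what that declaration typically looks like in SharePoint 2010; the version range here is an assumption:

        <UpgradeActions>
          <VersionRange BeginVersion="0.0.0.0" EndVersion="1.0.0.0">
            <CustomUpgradeAction Name="IndexApplicationNumber" />
          </VersionRange>
        </UpgradeActions>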

    Read the article

  • SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

    - by Pinal Dave
    [Notes from Pinal]: When it is about High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when users have to select the most optimal solution for their organization, they are often confused. Most people even know the salient features of the various options, but when they have to pick one single option to use they are often not sure which one. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand. Linchpin People are database coaches and wellness experts for a data driven world. In this 26th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability option for your SQL Server.

    Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible.  To meet these demands we have to design a solution to provide High Availability (HA). Microsoft SQL Server, depending on your edition, provides you with several options.  This could be database mirroring, log shipping, failover clusters, availability groups or replication. Each possible solution comes with pros and cons.  Not any one solution fits all scenarios, so understanding which solution meets which need is important.  As with anything IT related, you need to fully understand your requirements before trying to solution the problem.  When it comes to building an HA solution, you need to understand the risk your organization needs to mitigate the most. I have found that most are concerned about hardware failure and OS failures. Other common concerns are data corruption or storage issues.  For data corruption or storage issues you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication or availability groups with a secondary replica.  Failover clustering and virtualization with shared storage do not provide redundancy of the data. I recently created a chart outlining some pros and cons of each of the technologies that I posted on my blog. I like to use this chart to help illustrate how each technology provides a certain number of benefits.  Each of these solutions carries with it some level of cost and complexity.  As database professionals we should all be familiar with these technologies so we can make the best possible choice for our organization. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Do I suffer from encapsulation overuse?

    - by Florenc
    I have noticed something in my code in various projects that seems like a code smell to me and something bad to do, but I can't deal with it. While trying to write "clean code" I tend to over-use private methods in order to make my code easier to read. The problem is that the code is indeed cleaner, but it's also more difficult to test (yeah, I know I can test private methods...) and in general it seems a bad habit to me. Here's an example of a class that reads some data from a .csv file and returns a group of customers (another object with various fields and attributes).

        public class GroupOfCustomersImporter {

            //... class fields ....

            public GroupOfCustomersImporter(String filePath) {
                this.filePath = filePath;
                customers = new HashSet<Customer>();
                createCSVReader();
                read();
                constructGroupOfCustomers();
            }

            private void createCSVReader() {
                //....
            }

            private void read() {
                //.... Reads the file and initializes the class attributes
            }

            private void readFirstLine(String[] inputLine) {
                //.... Method used by the read() method
            }

            private void readSecondLine(String[] inputLine) {
                //.... Method used by the read() method
            }

            private void readCustomerLine(String[] inputLine) {
                //.... Method used by the read() method
            }

            private void constructGroupOfCustomers() {
                //this.groupOfCustomers = new GroupOfCustomers(**attributes of the class**);
            }

            public GroupOfCustomers getConstructedGroupOfCustomers() {
                return this.groupOfCustomers;
            }
        }

    As you can see, the class has only a constructor, which calls some private methods to get the job done. I know that's not a good practice in general, but I prefer to encapsulate all the functionality in the class instead of making the methods public, in which case a client would have to work this way:

        GroupOfCustomersImporter importer = new GroupOfCustomersImporter(filePath);
        importer.createCSVReader();
        importer.read();
        GroupOfCustomers group = importer.getConstructedGroupOfCustomers();

    I prefer this because I don't want to put useless lines of code on the client's side, bothering the client class with implementation details. So, is this actually a bad habit? If yes, how can I avoid it? Please note that the above is just a simple example. Imagine the same situation happening in something a little bit more complex.
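    One common refactoring that keeps the encapsulation but removes the work-in-constructor problem is to have the constructor only store state and expose a single public method that does the work. A hedged sketch, not from the original post:

        // Clients still make one call; tests can exercise importGroup() directly.
        public class GroupOfCustomersImporter {

            private final String filePath;

            public GroupOfCustomersImporter(String filePath) {
                this.filePath = filePath;
            }

            public GroupOfCustomers importGroup() {
                // createCSVReader(), read(), constructGroupOfCustomers()
                // stay private, exactly as in the original class.
                createCSVReader();
                read();
                return constructGroupOfCustomers();
            }

            private void createCSVReader() { /* ... */ }
            private void read() { /* ... */ }
            private GroupOfCustomers constructGroupOfCustomers() { /* ... */ return null; }
        }

    Usage stays a one-liner: GroupOfCustomers group = new GroupOfCustomersImporter(filePath).importGroup();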

    Read the article

  • Silverlight Cream for June 15, 2010 -- #882

    - by Dave Campbell
    In this Issue: Colin Eberhardt Zoltan Arvai, Marcel du Preez, Mark Tucker, John Papa, Phil Middlemiss, Andy Beaulieu, and Chad Campbell. From SilverlightCream.com: Throttling Silverlight Mouse Events to Keep the UI Responsive Colin Eberhardt sent me this link to his latest at Scott Logic... about how to throttle Silverlight -- no not that, you'd have to go to one of the *other* blogs for that :) ... this is throttling the mouse, particularly the mouse wheel to keep the UI from freezing up ... check out the demos, you'll want to read the code Data Driven Applications with MVVM Part I: The Basics Zoltan Arvai started a series of tutorials on Data-Driven Applications with MVVM at SilverlightShow... this is number 1, and it looks like it's going to be a good series to read. Red-To-Green scale using an IValueConverter Marcel du Preez has an interesting post up at SilverlightShow using an IValueConverter to do a red/yellow/green progress bar ... this is pretty cool. Infragistics XamWebOutlookBar & Caliburn With assistance from Rob Eisenburg, Mark Tucker was able to build a Caliburn sample including the Infragistics XamWebOutlookBar, and he's sharing his experience (and code) with all of us. Printing Tip – Handling User Initiated Dialogs Exceptions John Papa responded to a common printing problem by writing it up in his blog. Note this problem quite often appears during debug, so check it out... John also has a quick tip on an update to the PrintAPI in Silverlight 4. Automatic Rectangle Radius X and Y Phil Middlemiss has another great Blend post up -- this one on rounding off buttons... they look great to me, but he's looking for advice -- how about that Phil? They look great to me :) WP7 Back Button in Games Planning on selling 'stuff' in the Windows Phone Marketplace? Are you familiar with the required use of the Back Button? How about in a game? ... Andy Beaulieu discusses all this and has some code you'll want to use. Windows Phone 7 – Call Phone Number from HyperlinkButton Chad Campbell [no relation :) ] is discussing dialing a number from a hyperlink in WP7 - oh yeah, it's a phone as well :) -- I think I've only seen a number attempt to be called -- hmm... and we're not yet either because we all have emulators, but this is a good intro to the functionality for when we may actually have devices! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Configure clean URLs in Laravel using a rewrite rule to index.php

    - by yannis hristofakis
    Recently I've started learning Laravel; I have no prior experience with frameworks. I'm encountering the following problem: I'm trying to configure the .htaccess file so I can have clean URLs, but the only thing I get is 404 Not Found error pages. I have created a virtual host - you can see the configuration file below - and changed the .htaccess file in the public directory.

    /etc/apache2/sites-available

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName laravel.lar
            DocumentRoot "/home/giannis/Desktop/laravel/public"

            <Directory "/home/giannis/Desktop/laravel/public">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
            </Directory>

            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    .htaccess file: laravel/public

        # Apache configuration file
        # http://httpd.apache.org/docs/current/mod/quickreference.html

        # Note: ".htaccess" files are an overhead for each request. This logic should
        # be placed in your Apache config whenever possible.
        # http://httpd.apache.org/docs/current/howto/htaccess.html

        # Turning on the rewrite engine is necessary for the following rules and
        # features. "+FollowSymLinks" must be enabled for this to work symbolically.
        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine On
        </IfModule>

        # For all files not found in the file system, reroute the request to the
        # "index.php" front controller, keeping the query string intact
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?/$1 [L]
        </IfModule>

    In order to test it, I have created a view named about and set up the proper routing. If I go to http://laravel.lar/index.php/about/ I'm routed to the about page; if instead I go to http://laravel.lar/about/ I get a 404 Not Found error. I'm using a Debian-based system.
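    Since the routes work only when index.php appears in the URL, one thing worth ruling out first (an assumption, since the question doesn't say whether the module is active) is that mod_rewrite is enabled on this Debian-based Apache:

        sudo a2enmod rewrite
        sudo service apache2 restart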

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer who wanted to understand how to deploy an application with ADF Security to WLS without using JDeveloper. The main question was: what steps are needed in order to set up Enterprise Roles, Security Policies and Application Credentials? In this entry I will explain the steps taken using JDeveloper 11.1.1.2.0.

    Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application, which contains all the security pieces that we need. Open and migrate the project, and make sure you adjust the database settings accordingly.

    Creating the EAR file: Review the security settings of the application by going into the Application -> Secure menu and see that there are two enterprise roles, as well as the ADF policies enforcing security on the main page. Make sure the Application Module uses a Data Source instead of a JDBC URL for its connection type, and take note of the data source name; in my case I have: java:comp/env/jdbc/HrDS. To make this application easier to access once we deploy it, go to your ViewController project properties, select the Java EE Application category, and give a meaningful name to the context root as well as to the Application Name. Then go to the ADFSecurityWL application properties -> Deployment and create a new EAR deployment profile. Uncheck "Auto Generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment". Deploy the application as an EAR file.

    Deploying the application to WLS using the WLS Console: On the WLS console, create a JNDI data source. This is the part of the whole exercise that I found trickiest, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS. Now deploy the application manually by selecting Deployments -> Install, look for the EAR, and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes contain the insertion of all the security policies and application roles into the WLS instance.

    Creating roles and user groups for the application: To finish the post-deployment setup, we need to create the groups that are the equivalent of the Enterprise Roles of ADF Security. For our sample we have two Enterprise Roles: employeesApplication and managersApplication. After that, we create the application users and assign them into their respective groups. Now we can run the application and test the security constraints.
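    The console walkthrough above can also be scripted: the same EAR can be pushed from the command line with the weblogic.Deployer tool. A minimal sketch, where the admin URL, credentials and paths are placeholders rather than values from the article:

        # deploy the EAR without touching the WLS console
        java -cp $WL_HOME/server/lib/weblogic.jar weblogic.Deployer \
             -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
             -deploy -name ADFSecurityWL /path/to/ADFSecurityWL.ear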

    Read the article

  • How to Customize the Internet Explorer 8 Title Bar

    - by Mysticgeek
    If you're looking for a way to personalize IE 8, one method is to customize the Title Bar. Here we look at a simple Registry hack that will get the job done. The Internet Explorer Title Bar is displayed at the top of the browser with the site name followed by "Windows Internet Explorer" by default. If you have a small office you might want to change it to the company name, or just change it to something more personal at home.

    Customize the IE 8 Title Bar

    Note: Before making any changes to the Registry, make sure to back it up. The first thing we need to do is open the Registry Editor by typing regedit into the Search box in the Start Menu and hitting Enter. With the Registry open, navigate to HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main. Then create a new String Value and name it Window Title. Right-click on the Window Title string, enter whatever you want to display on the Title Bar in the Value data field, and click OK. When you're done, you should see the new string called Window Title with whatever you entered as the value. Close out of the Registry. Restart or launch Internet Explorer and you'll now see your new text in the Title Bar. If you want to change it to something else, just go in and modify the Value data. If you want to switch it back to the default, just go back in and delete the string we created. A lot of times you'll see corporate branding already in the title bar from your ISP or some computer company; to get rid of it, check out The Geek's article on how to remove it. This should work with other versions of Internet Explorer as well.
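    The same change can be made without opening the Registry Editor at all. A one-line equivalent using the built-in Windows reg utility from a command prompt; the title text here is just an example:

        reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v "Window Title" /t REG_SZ /d "My Company Browser" /f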

    Read the article

  • OBIEE 11.1.1 - How to configure HTTP compression / caching on Oracle BI Mobile app

    - by Ahmed Awan
    Applies to: OBIEE 11.1.1.5

    Supported physical devices and OS: The Oracle BI Mobile application with the HTTP compression / caching configuration has been tested on the following devices: iPhone 4S, 4 and 3GS; iPad 2 and 1. Note that these devices must be running a current version of iOS; the tests used iOS 4.2.1, and iOS 5 is also supported.

    Configuration pre-requisites: Prior to configuration, the Oracle Web Tier software must be installed on the server, as described in the product documentation, i.e. the Enterprise Deployment Guide for Oracle Business Intelligence, Section 3.2, "Installing Oracle HTTP Server." The steps for configuring compression and caching on Oracle HTTP Server are described in this PA blog at http://blogs.oracle.com/pa/entry/obiee_11g_user_interface_ui and in support Doc ID 1312299.1.

    Configuration steps in the Oracle BI Mobile application:
    1. Download the BI Mobile app from the Apple iTunes App Store. The link is http://itunes.apple.com/us/app/oracle-business-intelligence/id434559909?mt=8.
    2. Add the server, for example http://pew801.us.oracle.com:7777/analytics/. [Screenshot of the "Server Setting" screen in the OBI Mobile app omitted.]

    Performance gain test (using Oracle HTTP Server with OBIEE): The test with and without HTTP compression / caching was conducted on an iPhone 4S / iPad 2 to measure the throughput (i.e. total bytes received) for Oracle Business Intelligence Enterprise Edition. The comparison covered SampleApp, using the "QuickStart" dashboard and accessing the Overview, Details, Published Reporting and Scorecard reports. Testing shows that total bytes received were reduced from 2.3 MB to 723 KB.
    a. Test results without HTTP compression / caching, total throughput in bytes: [statistics screenshot omitted]
    b. Test results with HTTP compression / caching, total throughput in bytes: [statistics screenshot omitted]
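    The post does not show a way to confirm that the Web Tier settings took effect. One quick check from any machine is to request the analytics URL with and without an Accept-Encoding header and compare; a sketch using curl, assuming the example host and port from step 2 above:

        # the response headers should include "Content-Encoding: gzip" when compression is on
        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://pew801.us.oracle.com:7777/analytics/ | grep -i content-encoding

        # compare downloaded byte counts with and without compression
        curl -s -o /dev/null -w "%{size_download}\n" -H "Accept-Encoding: gzip" http://pew801.us.oracle.com:7777/analytics/
        curl -s -o /dev/null -w "%{size_download}\n" http://pew801.us.oracle.com:7777/analytics/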

    Read the article

  • Chroot into a 32 bit version of ubuntu from a 64 bit host

    - by Leif Andersen
    I have a piece of software that only runs on 32-bit Linux (Xilinx WebPack 10.1; apparently it 'has' to be the old version because that's the latest one compatible with their boards), and this version is only compatible with 32-bit Linux. So, I head off to this page to see what I can do: https://help.ubuntu.com/community/32bit_and_64bit

    Of the 4 options (listed at the bottom):
    1. I already installed ia32-libs, and it's still not working.
    2. I could do this one if needed (which I ended up doing).
    3. No, I don't want to be working from a VM all of next semester; that would be painful, and I'd rather just reinstall my whole computer with a 32-bit OS (which I don't want to do).
    4. It didn't sound like the best option based on what I've seen.

    So I went off to do #2 and set up a chroot for 32-bit Ubuntu. It linked to this tutorial: https://help.ubuntu.com/community/DebootstrapChroot

    As I'm running Ubuntu 10.10, I made the "lucid and newer" changes. Which is to say I wrote the following to /etc/schroot/chroot.d/hardy-i386 (note that I did save it once before the file was properly formatted, but I saved the correct version moments later):

        [hardy-i386]
        description=Ubuntu 8.04 Hardy for i386
        directory=/srv/chroot/hardy-i386
        personality=linux32
        root-users=leif
        type=directory
        users=leif

    I then ran:

        $ sudo mkdir -p /srv/chroot/hardy_i386
        $ sudo debootstrap --variant=buildd --arch i386 hardy /srv/chroot/hardy_i386 http://archive.ubuntu.com/ubuntu/

    Then I ran:

        $ schroot -l

    and it showed the proper chroot, but when I then ran:

        $ schroot -c hardy-i386 -u root

    I got the following error:

        E: 10mount: error: Directory '/srv/chroot/hardy-i386' does not exist
        E: 10mount: warning: Mount location /var/lib/schroot/mount/hardy-i386-80359697-2164-4b10-a05a-89b0f497c4f1 no longer exists; skipping unmount
        E: hardy-i386-80359697-2164-4b10-a05a-89b0f497c4f1: Chroot setup failed: stage=setup-start

    Can anyone help me figure out what the problem is? Oh, by the way: /srv/chroot/hardy-i386 most certainly exists. I've also tried replacing all references to hardy with lucid, to no avail. Oh, one more thing: I did set up the Chromium OS environment a month or so back (http://www.chromium.org/chromium-os/developer-guide), and it had me use something with chmod. So, can anyone figure out what the problem is? Thank you.
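    An editor's note on the commands above: the mkdir and debootstrap lines create /srv/chroot/hardy_i386 (underscore), while the schroot configuration and the error message both reference /srv/chroot/hardy-i386 (hyphen). If the directory on disk was in fact created with the underscore, renaming it should reconcile the two; a minimal sketch:

        # the bootstrap commands used an underscore; the schroot config expects a hyphen
        sudo mv /srv/chroot/hardy_i386 /srv/chroot/hardy-i386
        schroot -c hardy-i386 -u root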

    Read the article

  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday's blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – NoSQL.

    What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is No SQL, which is not true; they both sound the same but the meaning is totally different. NoSQL does use SQL, but it uses more than SQL to achieve its goal. As per Wikipedia's NoSQL database definition: "A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases."

    Why use NoSQL? A traditional relational database usually deals with predictable, structured data. As the world has moved toward unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS, wave files, photos and video. It is a bit difficult to manage them using a traditional relational database. I often see people using a BLOB field to store such data. A BLOB can store the data, but when we have to retrieve or process it, the same BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with support for unstructured data, the other advantages of a NoSQL database are high performance and high availability.

    Eventual consistency: Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though NoSQL databases may not support ACID, they provide eventual consistency. That means that over a long period of time all updates can be expected to propagate eventually through the system, and the data will be consistent.

    Taxonomy: Taxonomy is the practice of classifying things or concepts, and the principles behind such classification. The NoSQL taxonomy supports column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples of each NoSQL category (a small key-value sketch follows at the end of this post):
    Column: Hbase, Cassandra, Accumulo
    Document: MongoDB, Couchbase, Raven
    Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m
    Graph: Neo4J, Allegro, Virtuoso, Bigdata
    As of now there are over 150 NoSQL databases, and you can read everything about them in this single link.

    Tomorrow: In tomorrow's blog post we will discuss the buzz word – Hadoop.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
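    To make the key-value idea above concrete, here is a minimal sketch using Redis, one of the key-value stores named in the taxonomy. It assumes a Redis server running locally, and the key names are invented for illustration:

        # store and fetch values by key -- no schema, no table, no joins
        redis-cli SET "user:1001:name" "Pinal Dave"
        redis-cli SET "user:1001:photo" "<binary or base64 payload>"
        redis-cli GET "user:1001:name"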

    Read the article

  • SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

    - by pinaldave
    At first, I was not planning to write about this wait type. The reason was simple: I have faced it only once in my lifetime so far, perhaps because my samples usually capture only the top 5 wait types of each system. I am not sure if it is a common wait type or not, but in the samples I had it really looked rare to me.

    From Book On-Line: LOGBUFFER occurs when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server.

    LOGBUFFER explanation: The Book On-Line definition of LOGBUFFER seems to be very accurate. On the system where I faced this wait type, the log file (LDF) was put on the local disk, and the data files (MDF, NDF) were put on SAN drives. My client at the time was not familiar with how the files were supposed to be distributed. Once we moved the LDF to a faster drive, this wait type disappeared.

    Reducing LOGBUFFER wait: There are several suggestions to reduce this wait stat:
    - Move the transaction log to a separate disk from the MDF and other files (make sure the drive holding your LDF has no IO bottleneck issues).
    - Avoid cursor-like coding methodologies and frequent commit statements.
    - Find the most active file based on IO stall time, as shown in the script written over here.
    - You can also use fn_virtualfilestats to find IO-related issues, using the script mentioned over here.
    - Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here.

    If you have noticed, my suggestions for reducing LOGBUFFER are very similar to those for WRITELOG. Although the procedures for reducing them are alike, I am not suggesting that LOGBUFFER and WRITELOG are the same wait type. From the definitions of the two you can see their difference; however, they are both related to the log, and both of them can severely degrade performance.

    Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
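    If you want to see how much LOGBUFFER (and the closely related WRITELOG) wait time your own system has accumulated, the sys.dm_os_wait_stats DMV holds the running totals. A quick sketch via sqlcmd, where the server name and the Windows authentication flag are placeholders for your environment:

        sqlcmd -S YourServer -E -Q "SELECT wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats WHERE wait_type IN ('LOGBUFFER','WRITELOG') ORDER BY wait_time_ms DESC;"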

    Read the article

  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
    Every now and then you come across a problem when there is nothing in the "troubleshooting manual" which can help you. Instead you need to think outside the box. This happened to me two or three years back. Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue, and worse, the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc. I was still no nearer the solution. And then the light bulb moment! OHS is based on Apache; what happens if I attempt to start HTTP Server using the native Apache command? Trouble was, the OHS 11g solution has its binaries and configuration files in separate "home" directories:

    - ORACLE_HOME contains the binaries
    - ORACLE_INSTANCE contains the configuration files

    How to set the environment so that native Apache commands run without error? Eventually, with help from a colleague, the knowledge article "How to Start Oracle HTTP Server 11g Without Using opmnctl" [ID 946532.1] was born! To be honest, I cannot remember the exact cause of and solution to that OHS problem two or three years ago. But I do remember that an attempt to start HTTP Server using the native Apache command threw back an error to the console which led me to discover the culprit was some unusual filesystem fault.

    The other day, I was asked to review and publish a new knowledge article which described how to use the Apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The result: "How To Run Native Apache Commands in an Oracle HTTP Server 11g Environment" [ID 946532.1]. Highlights:

    - Title change
    - Improved environment setting scripts
    - Interactive; there should be no need to manually edit the scripts (although readers are welcome to do so)
    - Automatically dumps out some diagnostic information
    - Inclusion of some links to other troubleshooting collateral

    To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below.

    MS Windows:
    - Wrapper cmd script, which calls the main cmd script [after download, remove the ".txt" file extension]
    - Main cmd script, which sets the OHS 11g environment to run Apache commands [after download, remove the ".txt" file extension]

    Unix:
    - Shell script, which sets the OHS 11g environment to run Apache commands on Unix

    Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or faults will be applied to the scripts attached to the knowledge article. Lastly, to find out more about native Apache commands, refer to the Apache documentation:

    - apachectl, the Apache HTTP Server Control Interface [http://httpd.apache.org/docs/2.2/programs/apachectl.html]
    - httpd, the Apache Hypertext Transfer Protocol Server [http://httpd.apache.org/docs/2.2/programs/httpd.html]
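    As a rough illustration of what the knowledge article's scripts do, the idea is to point the native binaries under ORACLE_HOME at the configuration under ORACLE_INSTANCE. A sketch follows; the paths and the component name (ohs1) are illustrative only and vary by install, so prefer the maintained scripts for real use:

        # illustrative paths -- adjust to your own install
        export ORACLE_HOME=/u01/app/oracle/middleware/Oracle_WT1
        export ORACLE_INSTANCE=/u01/app/oracle/middleware/asinst_1
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/ohs/lib:$LD_LIBRARY_PATH

        # list the modules compiled statically into the OHS httpd binary
        $ORACLE_HOME/ohs/bin/httpd -l

        # start OHS against the instance configuration without opmnctl
        $ORACLE_HOME/ohs/bin/httpd -k start -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf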

    Read the article

  • How to make Connect Communications VPN connection in 10.10?

    - by Bilal Mohammad Qazi
    These steps were sent by my ISP admin for version 10.10, and I'm using 11.10. Step 1 I implemented successfully up to point 7; after that, the problems are marked with '//'. Step 2 I cannot complete at all.

    How to make a Connect Communications VPN connection in Ubuntu 10.10.

    1st Step:
    1. Go to System > Administration > Synaptic Package Manager.
    2. Search for "PPTP", check "network-manager-pptp" and click "Apply".
    3. Click on the Network Manager tray icon with your right mouse button and choose "Edit Connections…".
    4. Go to the "VPN" tab and click "Add".
    5. Choose "Point-to-Point Tunneling Protocol (PPTP)" as the VPN connection type.
    6. Check the VPN connection type and click "Create".
    7. Give your VPN connection a name and assign all the necessary information:
       - Gateway = blue.connect.net.pk if you have the Blue package, green.connect.net.pk for the Green package, blueplus.connect.net.pk for the BluePlus package, or red.connect.net.pk for the Red package
       - User name = Connect Communications user id
       - Password = Connect Communications password
    8. Now click on "Advanced".
       Authentication:
       - Uncheck "PAP" // cannot uncheck
       - Uncheck "MSCHAP" // cannot uncheck
       - Uncheck "CHAP"
       - Check only "MSCHAPv2" // EAP is shown in 11.10 and cannot be unchecked
       Security and compression:
       - Uncheck "Use Point-to-Point encryption (MPPE)"
       - Uncheck "Allow stateful encryption"
       - Uncheck "Allow BSD data compression"
       - Uncheck "Allow Deflate data compression"
       - Uncheck "Use TCP header compression"
       - Uncheck "Send PPP echo packets"
       Then press "OK", then "Apply".
    9. Now you are able to connect to the specified VPN connection via the Network Manager. You can connect to the VPN from the menu bar, and your Internet icon will show a lock when the connection is successful.

    2nd Step:
    Open a terminal window (Applications > Accessories > Terminal).
    Run the command "sudo" and give the root password.
    Then run the command "netstat -r -n". It will show some lines; as an example, from the last line pick the IP in the 2nd column, like 10.111.0.1:

        0.0.0.0 10.111.0.1 0.0.0.0 UG 0 0 0 eth0

    Now run the following command:

        echo "route add -net 10.101.8.0 netmask 255.255.252.0 gw 10.152.24.1" > /etc/rc.local

    Note: 10.111.0.1 is an example IP.
    Now run "sh /etc/rc.local".
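    An observation on the 2nd step above: the echo command uses '>', which overwrites /etc/rc.local entirely rather than appending a line to it, and the route is only applied when the script is run by hand. A safer sketch, using the guide's example addresses (substitute the gateway you found with netstat):

        # apply the route immediately
        sudo route add -net 10.101.8.0 netmask 255.255.252.0 gw 10.111.0.1

        # persist it by appending to /etc/rc.local instead of replacing the file
        echo "route add -net 10.101.8.0 netmask 255.255.252.0 gw 10.111.0.1" | sudo tee -a /etc/rc.local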

    Read the article

  • Understanding Oracle: Demystifying OpenWorld

    - by mseika
    Seminar: Wednesday 24th October 2012: Avnet, Bracknell

    Oracle OpenWorld is the world's largest event dedicated to helping enterprises harness the power of technology, held over a full week in October. Oracle Corporation always uses Oracle OpenWorld to make its most important product announcements, and this year is no exception. We realise that not all our partners can attend this prestigious event in San Francisco, primarily due to time and cost pressures. Oracle OpenWorld is the only conference that goes this deep and wide with Oracle technology, providing thousands of sessions and hundreds of demonstrations geared toward helping partners and customers get better results with the technology they have, and plan strategically for the technology they will need to keep ahead of the competition in the years to come. With the sheer number of announcements planned, it is sometimes difficult to find your way through the fog and identify the opportunities most relevant to your business for the coming year. So why not engage with Oracle's UK team via Avnet and have the announcements shared with you face-to-face, in the UK?

    As a key Value Added Distributor of Oracle Applications, Technology and Hardware solutions, Avnet has been attending Oracle OpenWorld for a number of years and invites our partners to attend a half-day summary event which will share the keynote announcements. We will also help you prioritise the announcements of greatest interest and business opportunity for the UK channel.

    Agenda (please note the agenda may be subject to change):
    - 12:00-13:15 Registration and lunch
    - 13:15-14:00 Introductions and key Hardware announcements. Discover how Oracle's complete and integrated application-aware virtualization solutions, including virtualization for SPARC and x86 architectures, can help you gain better efficiencies across your business. Get updates on how Oracle storage products and solutions can accelerate database performance, improve application responsiveness, and meet your data protection needs.
    - 14:00-14:15 Q&A and break
    - 14:15-15:00 Key Technology announcements. Technology products, encompassing Oracle's Database 12c and Middleware, are revolutionizing the industry with record-breaking performance, helping customers consolidate onto private clouds and achieve high returns on investment.
    - 15:00-15:15 Q&A and break
    - 15:15-16:00 Key Applications announcements. Presentations focused on Oracle's strategy and vision for its applications business, including Oracle E-Business Suite; Oracle's PeopleSoft, JD Edwards, Siebel, Hyperion, and Agile products; and the newly available Oracle Fusion Applications.
    - 16:00-16:30 Oracle-on-Oracle announcements and business opportunities with Avnet. Learn about Oracle's cloud computing and Oracle-on-Oracle strategies and find out more about Oracle's engineered systems for the broad market.
    - 16:30 Close

    What you need to do now: Register now, or for more information email our Oracle events team at [email protected]. N.B. Places are limited, so please register early to avoid disappointment.

    Read the article

  • 2D Barcode Addendum

    - by Tim Dexter
    Having finally got my external drive back today from Oklahoma (long story; thank you so much, Sammy), I'm back with a full complement of Oracle and blogging tools at my disposal. I have missed JDeveloper this past week; I have found I immensely prefer it over Eclipse (let the flaming commence :0). I use Zoundry Raven for writing articles, and it's not installed locally but on my external drive, so I have been soldiering on with the blog server's pain-in-the-backside UI for writing. Now that I have my favorite editor back and things are calming down workwise, I will start to get the Excel template posts out.

    Today though, a note about 2D barcode support, or more specifically any barcode that needs some data manipulation before the barcode font is applied. I wrote about these fonts a long time back and laid out the Java class you would need to write if you had an algorithm from the font manufacturer to use. I missed out a valuable point, and James at Luminex fell into the trap. He wanted to use the DataMatrix font from IDAutomation and had built the Java class to be called from the RTF template, but it was not encoding, or at least did not appear to be.

    The new debugging feature came to the rescue. Kan over at the bipconsulting blog documented it a while back: just adding <?xdo-debug-level:'STATEMENT'?> to my test template generated all the debug files in my c:\temp directory. No messing with files, just a simple command ... at last!

    With the log in hand I spotted a Java error stack referencing a missing code128a method. Huh? Looking at James' class, he had the following snippet:

        ENCODERS.put("code128a", mUtility.getClass().getMethod("code128a", clazz));
        ENCODERS.put("code128b", mUtility.getClass().getMethod("code128b", clazz));
        ENCODERS.put("code128c", mUtility.getClass().getMethod("code128c", clazz));
        ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));
        ENCODERS.put("datamatrix", mUtility.getClass().getMethod("datamatrix", clazz));

    His class did not include the other code128 and pdf417 methods, yet BIP was expecting them because they were registered in the ENCODERS map. An easy fix: just comment out the entries for the methods the class does not implement, rebuild and deploy, and the encoding started working. In James' case that leaves only the datamatrix line:

        // the class only implements a datamatrix method, so register just that encoder
        // ENCODERS.put("code128a", mUtility.getClass().getMethod("code128a", clazz));
        // ENCODERS.put("code128b", mUtility.getClass().getMethod("code128b", clazz));
        // ENCODERS.put("code128c", mUtility.getClass().getMethod("code128c", clazz));
        // ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));
        ENCODERS.put("datamatrix", mUtility.getClass().getMethod("datamatrix", clazz));

    If you are hitting similar problems, check that class and ensure all of the referenced methods are available; if not, delete or get commenting. James now has purdy labels popping out that his hardware can read. Sweet!

    Read the article

< Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >