Search Results

Search found 7466 results on 299 pages for 'selected'.


  • Help trying to get two-finger scrolling to work on Asus UL80VT

    - by Dan2k3k4
    Multi-touch works fine on Windows 7: two-finger scrolling works vertically and horizontally, two-finger tap gives a middle click, and three-finger tap gives a right click. With Ubuntu, however, I've never been able to get multi-touch to "save" and work — I got it working a few times, but after restarting it would just reset. I have the settings for two-finger scrolling enabled under Mouse and Touchpad > Touchpad: "Two-finger scrolling" (selected) and "Enable horizontal scrolling" (ticked). The cursor stops moving when I try to scroll with two fingers, but it doesn't actually scroll the page. When I run xinput list, I get:

        Virtual core pointer                id=2   [master pointer (3)]
          ↳ Virtual core XTEST pointer      id=4   [slave pointer (2)]
          ↳ ETPS/2 Elantech ETF0401         id=13  [slave pointer (2)]

    I've tried installing a 'synaptics-dkms' bug-fix package (from a few years back), but that didn't work, so I removed it. I've also tried installing 'uTouch', but that didn't seem to do anything, so I removed it too. Here's what I have installed now:

        dpkg --get-selections > installed-software
        grep 'touch\|mouse\|track\|synapt' installed-software

        libsoundtouch0                      install
        libutouch-evemu1                    install
        libutouch-frame1                    install
        libutouch-geis1                     install
        libutouch-grail1                    install
        printer-driver-ptouch               install
        ptouch-driver                       install
        xserver-xorg-input-multitouch       install
        xserver-xorg-input-mouse            install
        xserver-xorg-input-vmmouse          install
        libnetfilter-conntrack3             install
        libxatracker1                       install
        xserver-xorg-input-synaptics        install

    So, starting again: what should I do now to get two-finger scrolling to work, and to ensure it still works after restarting? Also, running:

        synclient TapButton1=1 TapButton2=2 TapButton3=3

    works but doesn't persist after restarting, while:

        synclient VertTwoFingerScroll=1 HorizTwoFingerScroll=1

    does NOT fix the two-finger scrolling at all. Output of cat /var/log/Xorg.0.log | grep -i synaptics:

        [ 4.576] (II) LoadModule: "synaptics"
        [ 4.577] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so
        [ 4.577] (II) Module synaptics: vendor="X.Org Foundation"
        [ 4.577] (II) Using input driver 'synaptics' for 'ETPS/2 Elantech ETF0401'
        [ 4.577] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so
        [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: x-axis range 0 - 1088
        [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: y-axis range 0 - 704
        [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: pressure range 0 - 255
        [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: finger width range 0 - 16
        [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: buttons: left right middle double triple scroll-buttons
        [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: Vendor 0x2 Product 0xe
        [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: touchpad found
        [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: (accel) MinSpeed is now constant deceleration 2.5
        [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: MaxSpeed is now 1.75
        [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: AccelFactor is now 0.154
        [ 4.589] (--) synaptics: ETPS/2 Elantech ETF0401: touchpad found

    I also tried installing synaptiks, but that didn't seem to work either, so I removed it. Temporary fix (works until I restart) — running:

        modprobe -r psmouse
        modprobe psmouse proto=imps

    works, but xinput list then shows:

        Virtual core pointer                id=2   [master pointer (3)]
          ↳ Virtual core XTEST pointer      id=4   [slave pointer (2)]
          ↳ ImPS/2 Generic Wheel Mouse      id=13  [slave pointer (2)]

    instead of Elantech, and it resets when I reboot.
    Solution (not ideal for most people): I ended up reinstalling a fresh 12.04 after indirectly playing around with burg and plymouth, then removing plymouth, which removed 50+ packages (I saw the warnings but was way too tired and assumed I could just 'reinstall' them all afterwards — except that didn't work). Right now xinput list shows:

        Virtual core pointer                id=2   [master pointer (3)]
          ↳ Virtual core XTEST pointer      id=4   [slave pointer (2)]
          ↳ ETPS/2 Elantech Touchpad        id=13  [slave pointer (2)]

    grep 'touch\|mouse\|track\|synapt' installed-software:

        libnetfilter-conntrack3             install
        libsoundtouch0                      install
        libutouch-evemu1                    install
        libutouch-frame1                    install
        libutouch-geis1                     install
        libutouch-grail1                    install
        libxatracker1                       install
        mousetweaks                         install
        printer-driver-ptouch               install
        xserver-xorg-input-mouse            install
        xserver-xorg-input-synaptics        install
        xserver-xorg-input-vmmouse          install

    cat /var/log/Xorg.0.log | grep -i synaptics:

        [ 4.890] (II) LoadModule: "synaptics"
        [ 4.891] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so
        [ 4.892] (II) Module synaptics: vendor="X.Org Foundation"
        [ 4.892] (II) Using input driver 'synaptics' for 'ETPS/2 Elantech Touchpad'
        [ 4.892] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so
        [ 4.956] (II) synaptics: ETPS/2 Elantech Touchpad: ignoring touch events for semi-multitouch device
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: x-axis range 0 - 1088
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: y-axis range 0 - 704
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: pressure range 0 - 255
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: finger width range 0 - 15
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: buttons: left right double triple
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: Vendor 0x2 Product 0xe
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: touchpad found
        [ 4.980] (**) synaptics: ETPS/2 Elantech Touchpad: (accel) MinSpeed is now constant deceleration 2.5
        [ 4.980] (**) synaptics: ETPS/2 Elantech Touchpad: MaxSpeed is now 1.75
        [ 4.980] (**) synaptics: ETPS/2 Elantech Touchpad: AccelFactor is now 0.154
        [ 4.980] (--) synaptics: ETPS/2 Elantech Touchpad: touchpad found

    So, if all else fails, reinstall Linux :/
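
    For anyone hitting the same problem, a common way to make touchpad settings survive a restart (rather than reinstalling) is to put the options into an X InputClass snippet so they are applied every time X starts. This is only a minimal sketch assuming the stock synaptics driver on 12.04 — the file path and the option values are illustrative, not taken from the post above:

        # /etc/X11/xorg.conf.d/50-synaptics.conf  (create the directory if it does not exist)
        Section "InputClass"
            Identifier "touchpad two-finger scrolling"
            MatchIsTouchpad "on"
            Driver "synaptics"
            # two-finger scrolling, vertical and horizontal
            Option "VertTwoFingerScroll" "1"
            Option "HorizTwoFingerScroll" "1"
            # 1/2/3-finger taps -> left/middle/right click
            Option "TapButton1" "1"
            Option "TapButton2" "2"
            Option "TapButton3" "3"
        EndSection

    Alternatively, the same synclient commands can be placed in a small script that runs at session start (for example via Startup Applications), which avoids touching the X configuration at all.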

    Read the article

  • SQL SERVER – Identity Fields – Contest Win Joes 2 Pros Combo (USD 198) – Day 2 of 5

    - by pinaldave
    August 2011 we ran a contest where every day we give away one book for an entire month. The contest had extreme success. Lots of people participated and lots of give away. I have received lots of questions if we are doing something similar this month. Absolutely, instead of running a contest a month long we are doing something more interesting. We are giving away USD 198 worth gift every day for this week. We are giving away Joes 2 Pros 5 Volumes (BOOK) SQL 2008 Development Certification Training Kit every day. One copy in India and One in USA. Total 2 of the giveaway (worth USD 198). All the gifts are sponsored from the Koenig Training Solution and Joes 2 Pros. The books are available here Amazon | Flipkart | Indiaplaza How to Win: Read the Question Read the Hints Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India residents only) 2 Winners will be randomly selected announced on August 20th. Question of the Day: Which of the following statement is incorrect? a) Identity value can be negative. b) Identity value can have negative interval. c) Identity value can be of datatype VARCHAR d) Identity value can have increment interval larger than 1 Query Hints: BIG HINT POST A simple way to determine if a table contains an identity field is to use the SSMS Object Explorer Design Interface. Navigate to the table, then right-click it and choose Design from the pop-up window. When your design tab opens, select the first field in the table to view its list of properties in the lower pane of the tab (In this case the field is ProductID). Look to see if the Identity Specification property in the lower pane is set to either yes or no. SQL Server will allow you to utilize IDENTITY_INSERT with just one table at a time. After you’ve completed the needed work, it’s very important to reset the IDENTITY_INSERT back to OFF. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 2. SQL Joes 2 Pros Development Series – Output Clause in Simple Examples SQL Joes 2 Pros Development Series – Ranking Functions – Advanced NTILE in Detail SQL Joes 2 Pros Development Series – Ranking Functions – RANK( ), DENSE_RANK( ), and ROW_NUMBER( ) SQL Joes 2 Pros Development Series – Advanced Aggregates with the Over Clause SQL Joes 2 Pros Development Series – Aggregates with the Over Clause SQL Joes 2 Pros Development Series – Overriding Identity Fields – Tricks and Tips of Identity Fields SQL Joes 2 Pros Development Series – Many to Many Relationships Next Step: Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India) Bonus Winner Leave a comment with your favorite article from the “additional hints” section and you may be eligible for surprise gift. There is no country restriction for this Bonus Contest. Do mention why you liked it any particular blog post and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
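
    To make the hints more concrete, here is a small T-SQL sketch (the table and column names are made up for illustration) showing an identity column with a negative seed and a negative increment, and IDENTITY_INSERT being switched on for a single table and back off when done:

        -- identity can have a negative seed and a negative increment
        CREATE TABLE dbo.DemoIdentity
        (
            DemoID   INT IDENTITY(-1, -1) PRIMARY KEY,  -- generates -1, -2, -3, ...
            DemoName VARCHAR(50) NOT NULL
        );

        INSERT INTO dbo.DemoIdentity (DemoName) VALUES ('first'), ('second');

        -- to insert an explicit value into the identity column, IDENTITY_INSERT
        -- must be ON (only one table at a time), and it should be set back to OFF afterwards
        SET IDENTITY_INSERT dbo.DemoIdentity ON;
        INSERT INTO dbo.DemoIdentity (DemoID, DemoName) VALUES (100, 'manual');
        SET IDENTITY_INSERT dbo.DemoIdentity OFF;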

    Read the article

  • Extend Your Applications Your Way: Oracle OpenWorld Live Poll Results

    - by Applications User Experience
    Lydia Naylor, Oracle Applications User Experience Manager At OpenWorld 2012, I attended one of our team’s very exciting sessions: “Extend Your Applications, Your Way”. It was clear that customers were engaged by the topics presented. Not only did we see many heads enthusiastically nodding in agreement during the presentation, and witness a large crowd surround our speakers Killian Evers, Kristin Desmond and Greg Nerpouni afterwards, but we can prove it…with data! Figure 1. Killian Evers, Kristin Desmond, and Greg Nerpouni of Oracle at the OOW 2012 session. At the beginning of our OOW 2012 journey, Greg Nerpouni, Fusion HCM Principal Product Manager, told me he really wanted to get feedback from the audience on our extensibility direction. Initially, we were thinking of doing a group activity at the OOW UX labs events that we hold every year, but Greg was adamant- he wanted “real-time” feedback. So, after a little tinkering, we came up with a way to use an online survey tool, a simple QR code (Quick Response code: a matrix barcode that can include information like URLs and can be read by mobile device cameras), and the audience’s mobile devices to do just that. Figure 2. Actual QR Code for survey Prior to the session, we developed a short survey in Vovici (an online survey tool), with questions to gather feedback on certain points in the presentation, as well as demographic data from our participants. We used Vovici’s feature to generate a mobile HTML version of the survey. At the session, attendees accessed the survey by simply scanning a QR code or typing in a TinyURL (a shorthand web address that is easily accessible through mobile devices). Killian, Kristin and Greg paused at certain points during the session and asked participants to answer a few survey questions about what they just presented. Figure 3. Session survey deployed on a mobile phone The nice thing about Vovici’s survey tool is that you can see the data real-time as participants are responding to questions - so we knew during the session that not only was our direction on track but we were hitting the mark and fulfilling Greg’s request. We planned on showing the live polling results to the audience at the end of the presentation but it ran just a little over time, and we were gently nudged out of the room by the session attendants. We’ve included a quick summary below and this link to the full results for your enjoyment. Figure 4. Most important extensions to Fusion Applications So what did participants think of our direction for extensibility? A total of 94% agreed that it was an improvement upon their current process. The vast majority, 80%, concurred that the extensibility model accounts for the major roles involved: end user, business systems analyst and programmer. Attendees suggested a few supporting roles such as systems administrator, data architect and integrator. Customers and partners in the audience verified that Oracle‘s Fusion Composers allow them to make changes in the most common areas they need to: user interface, business processes, reporting and analytics. Integrations were also suggested. All top 10 things customers can do on a page rated highly in importance, with all but two getting an average rating above 4.4 on a 5 point scale. The kinds of layout changes our composers allow customers to make align well with customers’ needs. The most common were adding columns to a table (94%) and resizing regions and drag and drop content (both selected by 88% of participants). 
We want to thank the attendees of the session for allowing us another great opportunity to gather valuable feedback from our customers! If you didn’t have a chance to attend the session, we will provide a link to the OOW presentation when it becomes available.

    Read the article

  • Silverlight Cream for January 04, 2011 -- #1022

    - by Dave Campbell
    In this Issue: Dennis Doomen, Doug Holland, Kunal Chowdhury, Sacha Barber, Paul Sheriff, Mike Snow(-2-), Peter Kuhn(-2-), and Mike Ormond. Above the Fold: Silverlight: "Silverlight: Fixing the BookShelf Sample" Peter Kuhn WP7: "Searching the Windows Phone 7 Marketplace Programmatically" Doug Holland Prism/Cinch: "PRISM 4 Custom Transitioning Region" Sacha Barber Shoutouts: Sacha Barber the author of Cinch asks for some advice from users: Cinch V2 : Question For The Reader Michael Crump introduces us to SnippetManager as a way to organize your Silverlight snippets... I'm thinking any snippet: A better way to organize your Silverlight Code Snippets. Andy Beaulieu announced an update of Physics Helper 4.2 using Farseer 3.2 ... check out the breaking changes though! Dennis Doomen blogged about a new release of his Fluent Assertions: A new year with a new release of Fluent Assertions, with a blog post about it below From SilverlightCream.com: Verifying PropertyChanged events in Silverlight using Fluent Assertions Dennis Doomen release his latest Fluent Assertions for .NET and Silverlight and wrote up a big post about the new event monitoring syntax. Searching the Windows Phone 7 Marketplace Programmatically Doug Holland has a post up on MSDN blogs talking about searching the WP7 Marketplace programmatically... ya know you should be able to do it... here's how. Beginners Guide to Visual Studio LightSwitch (Part - 5) Kunal Chowdhury has Part 5 of a tutorial series on Lightswitch up at SilverlightShow... working with custom validation this time, and for the first time in this series so far actually writes some code! PRISM 4 Custom Transitioning Region Sacha Barber took time to look at Prism4/MEF and Cinch2 and found things to be fine then wrote a custom PRISM region adaptor that uses a TransitionalElement from the Microsoft Transitionals project... code available, blog post to come. Get Application Title from Windows Phone Paul Sheriff has a cool chunk of code up... getting the Application's title programmatically... and other attributes as well, if you were wondering why you might wanna do that. Detecting Users Win7 Mobile Theme Color Mike Snow has a couple as well... first up is how to detect your user's theme... obviously useful if you wanna match it. Selecting an Item in a ComboBox after Adding Items Second for Mike Snow is a general Silverlight issue... setting the selected item on a ComboBox after filling it... if you haven't stumbled across this yet, you will... A Simplified Grid Markup Reloaded Peter Kuhn has a pair of posts up since last time... this first is an extension of Colin Eberhardt's simplified Grid markup system, but it's only useful if you don't plan on using Blend... can we get a show of hands? :) Silverlight: Fixing the BookShelf Sample Next Peter Kuhn has some changes to the Bookshelf code, but more importantly has some excelling tips about shader effects, Effects on Visual Elements and how to make best use of all the above. Displaying HTML Content in Windows Phone 7 Mike Ormond has a WP7 post up describing problems a customer had early on displaying rich text and an attempt to use the WebBrowser control to pull it off and the problems that caused... check out the resultant code, and read the comments as well. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction: Database Administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework or DMF) feature of SQL Server to implement and verify database settings on all production databases. It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are in place. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy that verifies these settings on all databases and, in cases of non-compliance, how to bring them back into compliance.
    Database settings to enforce on each user database:
    - Auto Close and Auto Shrink properties of the database set to False
    - Auto Create Statistics and Auto Update Statistics set to True
    - Compatibility Level of all user databases set to 100
    - Page Verify set to CHECKSUM
    - Recovery Model of all user databases set to Full
    - Restrict Access set to MULTI_USER
    Configure a Policy to Verify Database Settings:
    1. Connect to a SQL Server 2008 instance using SQL Server Management Studio.
    2. In the Object Explorer, click Management > Policy Management and you will see Policies, Conditions & Facets as child nodes.
    3. Right-click Policies and then select New Policy… from the drop-down list as shown in the snippet below to open the Create New Policy popup window.
    4. In the Create New Policy popup window, provide the name of the policy as "Implementing and Verify Database Settings for Production Databases" and then click the drop-down list under Check Condition. As highlighted in the snippet below, click the New Condition… option to open the Create New Condition window.
    5. In the Create New Condition popup window, provide the name of the condition as "Verify and Change Database Settings". In the Facet drop-down list, choose the Database Options facet as shown in the snippet below. Under Expression, select the Field value @AutoClose, choose the Operator value '=' and finally choose the Value False. Now that you have successfully added the first field, go ahead and add the rest of the fields as shown in the snippet below. Once you have added all of the Database Options fields shown above, click OK to save the changes and return to the parent Create New Policy – Implementing and Verify Database Settings for Production Databases window, where you will see that the newly created condition "Verify and Change Database Settings" is selected by default. Continues…
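
    For reference, the settings this policy checks correspond to plain ALTER DATABASE options, so a non-compliant database could also be brought back in line directly with T-SQL along these lines. The database name is a placeholder; the policy-based approach above is what the article itself automates:

        -- hypothetical database name; run per user database that is out of compliance
        ALTER DATABASE [ProductionDB] SET AUTO_CLOSE OFF;
        ALTER DATABASE [ProductionDB] SET AUTO_SHRINK OFF;
        ALTER DATABASE [ProductionDB] SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE [ProductionDB] SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE [ProductionDB] SET COMPATIBILITY_LEVEL = 100;
        ALTER DATABASE [ProductionDB] SET PAGE_VERIFY CHECKSUM;
        ALTER DATABASE [ProductionDB] SET RECOVERY FULL;
        ALTER DATABASE [ProductionDB] SET MULTI_USER;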

    Read the article

  • SQL SERVER – Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5

    - by pinaldave
    August 2011 we ran a contest where every day we give away one book for an entire month. The contest had extreme success. Lots of people participated and lots of give away. I have received lots of questions if we are doing something similar this month. Absolutely, instead of running a contest a month long we are doing something more interesting. We are giving away USD 198 worth gift every day for this week. We are giving away Joes 2 Pros 5 Volumes (BOOK) SQL 2008 Development Certification Training Kit every day. One copy in India and One in USA. Total 2 of the giveaway (worth USD 198). All the gifts are sponsored from the Koenig Training Solution and Joes 2 Pros. The books are available here Amazon | Flipkart | Indiaplaza How to Win: Read the Question Read the Hints Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India residents only) 2 Winners will be randomly selected announced on August 20th. Question of the Day: Which of the following key word will force the query to use indexes created on views? a) ENCRYPTION b) SCHEMABINDING c) NOEXPAND d) CHECK OPTION Query Hints: BIG HINT POST Usually, the assumption is that Index on the table will use Index on the table and Index on view will be used by view. However, that is the misconception. It does not happen this way. In fact, if you notice the image, you will find the both of them (table and view) use both the index created on the table. The index created on the view is not used. The reason for the same as listed in BOL. The cost of using the indexed view may exceed the cost of getting the data from the base tables, or the query is so simple that a query against the base tables is fast and easy to find. This often happens when the indexed view is defined on small tables. You can use the NOEXPAND hint if you want to force the query processor to use the indexed view. This may require you to rewrite your query if you don’t initially reference the view explicitly. You can get the actual cost of the query with NOEXPAND and compare it to the actual cost of the query plan that doesn’t reference the view. If they are close, this may give you the confidence that the decision of whether or not to use the indexed view doesn’t matter. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 4. SQL Joes 2 Pros Development Series – Structured Error Handling SQL Joes 2 Pros Development Series – SQL Server Error Messages SQL Joes 2 Pros Development Series – Table-Valued Functions SQL Joes 2 Pros Development Series – Table-Valued Store Procedure Parameters SQL Joes 2 Pros Development Series – Easy Introduction to CHECK Options SQL Joes 2 Pros Development Series – Introduction to Views SQL Joes 2 Pros Development Series – All about SQL Constraints Next Step: Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India) Bonus Winner Leave a comment with your favorite article from the “additional hints” section and you may be eligible for surprise gift. There is no country restriction for this Bonus Contest. Do mention why you liked it any particular blog post and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
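
    As a concrete illustration of the hint (with made-up table and view names, so only a sketch): an indexed view must be created WITH SCHEMABINDING, and the NOEXPAND table hint is what forces the query processor to use the view's index instead of expanding the view down to its base tables:

        -- the view must be schema-bound before it can be indexed
        CREATE VIEW dbo.vSalesByProduct
        WITH SCHEMABINDING
        AS
        SELECT ProductID,
               SUM(Quantity) AS TotalQty,
               COUNT_BIG(*)  AS RowCnt      -- required for an indexed view with GROUP BY
        FROM dbo.SalesDetail
        GROUP BY ProductID;
        GO

        CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
            ON dbo.vSalesByProduct (ProductID);
        GO

        -- force the query processor to use the indexed view
        SELECT ProductID, TotalQty
        FROM dbo.vSalesByProduct WITH (NOEXPAND);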

    Read the article

  • Error:couldn't read file-Kernel panic

    - by Thanos
    I have just installed Ubuntu 12.04.1. To be honest, I had to run the installation several times before it finished properly. When I finally managed to install it, I powered on the laptop and the GRUB menu showed up. I selected the Ubuntu generic entry. It takes some time to load, and when it does I get an error message stating:

        error: couldn't read file
        Press any key to continue

    If I press any key, nothing happens. If I leave it there, after a short while a black screen loads with some worrying messages:

        [0.946710] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
        [0.946755] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
        [0.946792] Call Trace:
        [0.946831] [<ffffffff81640ec8>] panic+0x91/0x1a4
        [0.946869] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
        [0.946909] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
        [0.946947] [<ffffffff81cfc257>] mount_root+0x54/0x59
        [0.946982] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
        [0.947019] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
        [0.947094] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
        [0.947129] [<ffffffff81664030>] ? gs_change+0x13/0x13

    The thing is that the laptop isn't mine. A friend tried to dual-boot Ubuntu alongside Windows 7 but didn't succeed. The Ubuntu option was in GRUB, but when you tried to boot it, the machine rebooted from the start. So from a live CD I erased Ubuntu and started Windows to check whether something had gone wrong; fortunately everything was OK and Windows started normally. So I tried to install Ubuntu again. Before the installation completed, the installer crashed! I was afraid he would lose Windows, which turned out to be true... At that point I tried to install Windows, but whichever version I tried (XP; 7 Home, Professional, Ultimate; 8) it could never reach the end. So I tried to reinstall Ubuntu, and now I am facing those weird messages. What can I do to move on?

    EDIT 1: I tried to check and fix (if possible) with GParted. It took many hours, although GParted displayed only 01:14. I restarted the system and now I get almost the same messages — only the numbers in brackets [ ] are different:

        [0.818189] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
        [0.818235] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
        [0.818272] Call Trace:
        [0.818312] [<ffffffff81640ec8>] panic+0x91/0x1a4
        [0.818351] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
        [0.818391] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
        [0.818428] [<ffffffff81cfc257>] mount_root+0x54/0x59
        [0.818464] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
        [0.818501] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
        [0.818574] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
        [0.818610] [<ffffffff81664030>] ? gs_change+0x13/0x13

    What on earth is going on?

    EDIT 2: I forgot to mention that my friend gave the laptop a punch during a game. After that the cooler began to make a weird noise, so I checked it; it is a bit bent but it is working. What I believe must be wrong is that the HDD makes a weird noise while trying to load Ubuntu, which could mean he needs a new HDD. Could that be true?
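
    Given the symptoms (repeatedly failing installs, read errors, noises from the drive), checking the disk itself from the live CD is a reasonable next step. This is only a rough sketch — the device name /dev/sda, the partition number, and the smartmontools package are assumptions, not details taken from the question:

        # from the 12.04 live CD; the target disk is assumed to be /dev/sda
        sudo apt-get install smartmontools
        sudo smartctl -H /dev/sda      # overall SMART health verdict
        sudo smartctl -a /dev/sda      # full attributes: look at reallocated/pending sector counts

        # check the root filesystem while it is NOT mounted (partition number is a placeholder)
        sudo fsck -f /dev/sda1

        # kernel messages often show ATA/I-O errors from a failing disk
        dmesg | grep -i -E 'ata|error'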

    Read the article

  • STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners

    - by Cinzia Mascanzoni
    04 JUNE 2014 Oracle EMEA Partners Stop Press Stay Connected Oracle Media Network   OPN on PartnerCast   STOP PRESS: FY15 Q1 Oracle ZS3 Contest for PartnersShare an unforgettable experience at the Teatro Alla Scala in Milan Dear valued Partner, We are pleased to launch a partner contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest, giving the winners the opportunity to attend a roundtable chaired by Senior Oracle Executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs & 2 VADs) who will be recognized for their ZFS storage booking achievement in the broad market between June 1st and July 18th 2014. Criteria of Eligibility A minimum deal value of $30k is required for qualification Partners who are wholly or partially owned by a public sector organization are not eligible for participation Winners The winning VARs will be: The highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1) The highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region The winning VADs (3) will be: The highest ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA The highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA (1) Two VAR winners for each EMEA region – Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel - as per the criteria outlined above(2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or Sparc servers sold to the same customer by the same partner within the contest timelines.(3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA. Oracle shall be the final arbiter in selecting the winners. All winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Access to the community workspace requires membership. If you are not a member please register here. The Prize Winners will be invited to participate to a roundtable chaired by Oracle on Monday September 8th 2014 in Milan and to be guests of Oracle in the evening of September 8th, 2014 at the Teatro Alla Scala. The evening will comprise of a private tour of the Scala museum, cocktail reception at the elegant museum rooms and attending the performance by the renowned Soprano, Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle. Good Luck!! For more information, please contact Sasan Moaveni. Regards, Olivier TordoSenior Director - Systems Business DevelopmentOracle EMEA Alliances & Channels Resources EMEA Hardware Partner Community EMEA Oracle Partner Days Find Partner Events EMEA Partner News Blog EMEA Partner Enablement Blog Oracle PartnerNetwork Copyright © 2014, Oracle and/or its affiliates.All rights reserved. Contact Us | Legal Notices and Terms of Use | Privacy Statement

    Read the article

  • How To Quickly Reboot Directly from Windows 7 to XP, Vista, or Ubuntu

    - by The Geek
    One of the biggest annoyances with a dual-boot system is having to wait for your PC to reboot to select the operating system you want to switch to, but there’s a simple piece of software that can make this process easier. This guest article was written by Ryan Dozier from the Doztech tech blog. With a small piece of software called iReboot we can skip the above step all together and instantly reboot into the operating system we want right from Windows. Their description says: “Instead of pressing restart, waiting for Windows to shut down, waiting for your BIOS to post, then selecting the operating system you want to boot into (within the bootloader time-limit!); you just select that entry from iReboot and let it do the rest!” Don’t worry about iReboot reconfiguring  your bootloader or any dual boot configuration you have. iReboot will only boot the selected operating system once and go back to your default settings. Using iReboot iReboot is quick and easy to install. Just download it, link below, run through the setup and select the default configuration. iReboot will automatically figure out what operating systems you have installed and appear in the taskbar. Go over to the taskbar and right click on the iReboot icon and select which operating system you want to reboot into. This method will add a check mark on the operating system you want to boot into. On your next reboot the system will automatically load your choice and skip the Windows Boot Manager. If you want to reboot automatically just select “Reboot on Selection” in the iReboot menu.   To be even more productive, you can install iReboot into each Windows operating system to quickly access the others with a few simple clicks.   iReboot does not work in Linux so you will have to reboot manually. Then wait for the Windows Boot Manager to load and select your operating system.   Conclusion iReboot works on  Windows XP, Windows Vista,  and Windows 7 as well as 64 bit versions of these operating systems. Unfortunately iReboot is only available for Windows but you can still use its functionality in Windows to quickly boot up your Linux machine. A simple reboot in Linux will take you back to Windows Boot Manager. Download iReboot from neosmart.net Editor’s note: We’ve not personally tested this software over at How-To Geek, but Neosmart, the author of the software, generally makes quality stuff. Still, you might want to test it out on a test machine first. If you’ve got any experience with this software, please be sure to let your fellow readers know in the comments. Similar Articles Productive Geek Tips Restart the Ubuntu Gnome User Interface QuicklyKeyboard Ninja: 21 Keyboard Shortcut ArticlesTest Your Computer’s Memory Using Windows Vista Memory Diagnostic ToolEnable or Disable UAC From the Windows 7 / Vista Command LineSet Windows as Default OS when Dual Booting Ubuntu TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Home Networks – How do they look like & the problems they cause Check Your IMAP Mail Offline In Thunderbird Follow Finder Finds You Twitter Users To Follow Combine MP3 Files Easily QuicklyCode Provides Cheatsheets & Other Programming Stuff Download Free MP3s from Amazon

    Read the article

  • SQL SERVER – Query Hint – Contest Win Joes 2 Pros Combo (USD 198) – Day 1 of 5

    - by pinaldave
    August 2011 we ran a contest where every day we give away one book for an entire month. The contest had extreme success. Lots of people participated and lots of give away. I have received lots of questions if we are doing something similar this month. Absolutely, instead of running a contest a month long we are doing something more interesting. We are giving away USD 198 worth gift every day for this week. We are giving away Joes 2 Pros 5 Volumes (BOOK) SQL 2008 Development Certification Training Kit every day. One copy in India and One in USA. Total 2 of the giveaway (worth USD 198). All the gifts are sponsored from the Koenig Training Solution and Joes 2 Pros. The books are available here Amazon | Flipkart | Indiaplaza How to Win: Read the Question Read the Hints Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India residents only) 2 Winners will be randomly selected announced on August 20th. Question of the Day: Which of the following queries will return dirty data? a) SELECT * FROM Table1 (READUNCOMMITED) b) SELECT * FROM Table1 (NOLOCK) c) SELECT * FROM Table1 (DIRTYREAD) d) SELECT * FROM Table1 (MYLOCK) Query Hints: BIG HINT POST Most SQL people know what a “Dirty Record” is. You might also call that an “Intermediate record”. In case this is new to you here is a very quick explanation. The simplest way to describe the steps of a transaction is to use an example of updating an existing record into a table. When the insert runs, SQL Server gets the data from storage, such as a hard drive, and loads it into memory and your CPU. The data in memory is changed and then saved to the storage device. Finally, a message is sent confirming the rows that were affected. For a very short period of time the update takes the data and puts it into memory (an intermediate state), not a permanent state. For every data change to a table there is a brief moment where the change is made in the intermediate state, but is not committed. During this time, any other DML statement needing that data waits until the lock is released. This is a safety feature so that SQL Server evaluates only official data. For every data change to a table there is a brief moment where the change is made in this intermediate state, but is not committed. During this time, any other DML statement (SELECT, INSERT, DELETE, UPDATE) needing that data must wait until the lock is released. This is a safety feature put in place so that SQL Server evaluates only official data. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 1. SQL Joes 2 Pros Development Series – Dirty Records and Table Hints SQL Joes 2 Pros Development Series – Row Constructors SQL Joes 2 Pros Development Series – Finding un-matching Records SQL Joes 2 Pros Development Series – Efficient Query Writing Strategy SQL Joes 2 Pros Development Series – Finding Apostrophes in String and Text SQL Joes 2 Pros Development Series – Wildcard – Querying Special Characters SQL Joes 2 Pros Development Series – Wildcard Basics Recap Next Step: Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India) Bonus Winner Leave a comment with your favorite article from the “additional hints” section and you may be eligible for surprise gift. There is no country restriction for this Bonus Contest. 
Do mention why you liked any particular blog post, and I will announce the winner of that along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
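
    To make the "dirty record" hint concrete, here is a small T-SQL sketch (the table name is made up) of how uncommitted data can actually be read — either with a table hint or by changing the session isolation level:

        -- session 1: change data but do not commit yet
        BEGIN TRANSACTION;
        UPDATE dbo.Table1 SET Amount = Amount + 100 WHERE ID = 1;
        -- (transaction intentionally left open)

        -- session 2: both of these can see the uncommitted ("dirty") value
        SELECT * FROM dbo.Table1 WITH (NOLOCK);

        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT * FROM dbo.Table1;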

    Read the article

  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is highly influenced by the concept ‘Pareto Principle’ or 80-20 rule. where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. User Stories The user end functionalities of this starter kit are pretty simple and straight forward that are focused in to perform CRUD operation on employee records as described below. Creating a new employee record Read existing employee record Update an existing employee record Delete existing employee records Key Technology Areas ASP.NET 4.0 Entity Framework 4.0 T-4 Template Visual Studio 2010 Architectural Objective There is no universal architecture which can be considered as the best for all sorts of applications around the world. Based on requirements, constraints, environment, application architecture can differ from one to another. Trade-off factors are one of the important considerations while deciding a particular architectural solution. Employee Info Starter Kit is highly influenced by the concept ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. “Productivity” as the architectural objective typically also includes other trade-off factors as well as, such as testability, flexibility, performance etc. Fortunately Microsoft .NET Framework 4.0 and Visual Studio 2010 includes lots of great features that have been implemented cleverly in this project to reduce these trade-off factors in the minimum level. Why Employee Info Starter Kit is Not a Framework? Application frameworks are really great for productivity, some of which are really unavoidable in this modern age. However relying too many frameworks may overkill a project, as frameworks are typically designed to serve wide range of different usage and are less customizable or editable. On the other hand having implementation patterns can be useful for developers, as it enables them to adjust application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns to demonstrate problem solutions in actual production environment. It also includes Visual Studio T-4 templates that generate thousands lines of data access and business logic layer repetitive codes in literally few seconds on the fly, which are fully mock testable due to language support for partial methods and latest support for mock testing in Entity Framework. Why Employee Info Starter Kit is Different than Other Open-source Web Applications? Software development is one of the rapid growing industries around the globe, where the technology is being updated very frequently to adapt greater challenges over time. There are literally thousands of community web sites, blogs and forums that are dedicated to provide support to adapt new technologies. 
While some are really great to enable learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real world scenarios or too “complex and detailed” which are typically focused to achieve a product goal (such as CMS, e-Commerce etc) from "end user" perspective and have a long duration learning curve with respect to the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented which actually considers a hybrid approach as “simple and detailed”, where a simple domain has been considered to intentionally illustrate most of the architectural and implementation challenges faced by web application developers so that anyone can dive into deep into the corresponding new technology or concept quickly. Roadmap Since its first release by 2008 in MSDN Code Gallery, Employee Info Starter Kit gained a huge popularity in ASP.NET community and had 1, 50,000+ downloads afterwards. Being encouraged with this great response, we have a strong commitment for the community to provide support for it with respect to latest technologies continuously. Currently hosted in Codeplex, this community driven project is planned to have a wide range of individual editions, each of which will be focused on a selected application architecture, framework or platform, such as ASP.NET Webform, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud), Visual Studio Automated Test etc. See here for full list of current and future editions.

    Read the article

  • How do I deal with a third party application that has embedded hints that result in a sub-optimal execution plan in my environment?

    - by Maria Colgan
    I have gotten many variations on this question recently as folks begin to upgrade to Oracle Database 11g, and there have been several posts on this blog and on others describing how to use SQL Plan Management (SPM) so that a non-hinted SQL statement can use a plan generated with hints. But what if the hint is supplied in the third-party application and is causing performance regressions on your system? You can use a very similar technique to the ones shown before, but this time capture the un-hinted plan and have the hinted SQL statement use that plan instead. Below is an example that demonstrates the necessary steps.
    1. We will begin by running the hinted statement.
    2. After examining the execution plan we can see it is suboptimal because of a bad join order.
    3. In order to use SPM to correct the problem we must create a SQL plan baseline for the statement. To create a baseline we will need the SQL_ID for the hinted statement; an easy place to get it is V$SQL.
    4. A SQL plan baseline can be created using a SQL_ID and DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE. This will capture the existing plan for this SQL_ID from the shared pool and store it in the SQL plan baseline.
    5. We can check that the SQL plan baseline was created successfully by querying DBA_SQL_PLAN_BASELINES.
    6. When you manually create a SQL plan baseline, the first plan added is automatically accepted and enabled. We know the hinted plan is a poorly performing plan, so we will disable it using DBMS_SPM.ALTER_SQL_PLAN_BASELINE. Disabling the plan tells the optimizer that this plan is not a good plan; however, since there is no alternative plan at this point, the optimizer will continue to use it until we provide a better one.
    7. Now let's run the statement without the hint.
    8. Looking at the execution plan we can see that the join order is different. The plan without the hint also has a lower cost (3X lower), which indicates it should perform better.
    9. In order to map the un-hinted plan to the hinted SQL statement we need to add that plan to the SQL plan baseline for the hinted statement. We can do this using DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE, but we will need the SQL_ID and PLAN_HASH_VALUE of the non-hinted statement, which we can find in V$SQL.
    10. Now we can add the non-hinted plan to the SQL plan baseline of the hinted SQL statement using DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE. This time we need to pass a few more arguments: we use the SQL_ID and PLAN_HASH_VALUE of the non-hinted statement but the SQL_HANDLE of the hinted statement.
    11. The SQL plan baseline for our statement now has two plans, but only the newly added plan (SQL_PLAN_gbpcg3f67pc788a6d8911) is enabled and accepted. This tells the optimizer that this is the plan it should use for this statement. We can confirm that the correct (non-hinted) plan will be selected for the statement from now on by re-executing the hinted statement and checking its execution plan.
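
    As a rough sketch of what those steps might look like in SQL*Plus — all of the SQL_IDs, plan hash values, handles and plan names below are placeholders, not values from the post:

        -- steps 3/9: find the SQL_ID (and PLAN_HASH_VALUE) of each statement
        SELECT sql_id, plan_hash_value, sql_text
        FROM   v$sql
        WHERE  sql_text LIKE '%your statement text%';

        -- step 4: create the baseline from the hinted statement's cursor
        DECLARE
          l_plans PLS_INTEGER;
        BEGIN
          l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'aaaaaaaaaaaaa');
        END;
        /

        -- step 6: disable the hinted (poorly performing) plan
        DECLARE
          l_plans PLS_INTEGER;
        BEGIN
          l_plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
                       sql_handle      => 'SQL_xxxxxxxxxxxxxxxx',
                       plan_name       => 'SQL_PLAN_placeholder_name',
                       attribute_name  => 'enabled',
                       attribute_value => 'NO');
        END;
        /

        -- step 10: attach the un-hinted plan to the hinted statement's baseline
        DECLARE
          l_plans PLS_INTEGER;
        BEGIN
          l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                       sql_id          => 'bbbbbbbbbbbbb',            -- un-hinted statement
                       plan_hash_value => 1234567890,                 -- un-hinted plan
                       sql_handle      => 'SQL_xxxxxxxxxxxxxxxx');    -- hinted statement's handle
        END;
        /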

    Read the article

  • Software Architecture versus Software Design

    Recently, I was asked what the differences between software architecture and software design are. At a very superficial level both architecture and design seem to mean relatively the same thing. However, if we examine both of these terms further we will find that they are in fact very different due to the level of detail they encompass. Software architecture can be defined as the essence of an application because it deals with high-level concepts that do not include any details as to how they will be implemented. To me this gives stakeholders a view of a system or application as if someone were viewing the earth from outer space. At this distance only very basic elements of the earth can be detected, like land, weather and water. As the viewer comes closer to earth, the details in this view start to become more defined: details about the earth's surface start to take form, and man-made structures become visible. The process of transitioning a view from outer space to inside the earth's atmosphere is similar to how an architectural concept is transformed into an architectural design. From this vantage point stakeholders can start to see buildings and other structures as if they were looking out of a small plane window. This distance is still high enough to see a large area of the earth's surface while still being able to see some details about it. This viewing point is very similar to the actual design process of an application, in that it takes the very high-level architectural concept or concepts and applies concrete design details to form a software design that encompasses the actual implementation details in the form of responsibilities and functions. Examples of these details include: interfaces, components, data, and connections. In review, software architecture deals with high-level concepts without regard to any implementation details, while software design takes high-level concepts and applies concrete details so that software can be implemented. As part of the transition from software architecture to software design, an evaluation of the architecture is recommended. There are several benefits to including this step as part of the transition process: it allows projects to ensure that they are on the correct path to meeting the stakeholders' requirement goals, identifies possible cost savings, and can be used to find missing or nonspecific requirements that cause ambiguity in a design. In the book "Evaluating Software Architectures: Methods and Case Studies", the authors define key benefits of adding an architectural review process to ensure that an architecture is ready to move on to the design phase. Benefits of evaluating software architecture:
    - Gathers all stakeholders to communicate about the project
    - Goals are clearly defined in regards to the creation or validation of specific requirements
    - Goals are prioritized so that when conflicts occur decisions will be made based on goal priority
    - Defines a clear expectation of the architecture so that all stakeholders have a keen understanding of the project
    - Ensures high-quality documentation of the architecture
    - Enables discoveries of architectural reuse
    - Increases the quality of architecture practices
    I can remember a few projects that I worked on that could have really used an architectural review prior to being passed on to developers.
    One such project was to create some new advertising space on the company's website in order to sell space based on location and some other criteria. I was one of the developers selected to lead this project, and I was given a high-level design concept and a long list of ever-changing requirements, because the sales department had no clear direction as to what exactly the project was going to do or how they were going to bill the clients once they actually agreed to purchase the ad space. In my personal opinion, IT should have pushed back to have the requirements further articulated instead of forcing programmers to code blindly, attempting to build such an ambiguous project. Unfortunately, we had to suffer with this project for about 4 months when it should have only taken 1.5 months to complete, due to the constantly changing and unclear requirements. References: Clements, P., Kazman, R., & Klein, M. (2002). Evaluating Software Architectures. Westford, Massachusetts: Courier Westford.

    Read the article

  • How Can I Improve This Card-Game AI?

    - by James Burgess
    Let me get this out there before anything else: this is a learning exercise for me. I am not a game developer by trade or hobby (at least, not seriously) and am purely delving into some AI- and 3D-related topics to broaden my horizons a bit. As part of the learning experience, I thought I'd have a go at developing a basic card game AI. I selected Pit as the card game I was going to attempt to emulate (specifically, the 'bull and bear' variation of the game as mentioned in the link above). Unfortunately, the rule-set that I'm used to playing with (an older version of the game) isn't described. The basics of it are: The number of commodities played with is equal to the number of players. The bull and bear cards are included. All but two players receive 8 cards, two receive 9 cards. A player can win the round with 7 + bull, 8, or 8 + bull (receiving double points). The bear is a penalty card. You can trade up to a maximum of 4 cards at a time. They must all be of the same type, but can optionally include the bull or bear (so, you could trade A, A, A, Bull - but not A, B, A, Bull). For those who have played the card game, it will probably have been as obvious to you as it was to me that given the nature of the game, gameplay would seem to resemble a greedy algorithm. With this in mind, I thought it might simplify my AI experience somewhat. So, here's what I've come up with for a basic AI player to play Pit... and I'd really just like any form of suggestion (from improvements to reading materials) relating to it. Here it is in something vaguely pseudo-code-ish ;) While AI does not hold 7 similar + bull, 8 similar, or 8 similar + bull, do: 1. Establish 'target' hand, by seeing which card AI holds the most of. 2. Prepare to trade next-most-numerous card type in a trade (max. held, or 4, whichever is fewer) 3. If holding the bear, add to (if trading <=3 cards) or replace in (if trading 4 cards) hand. 4. Offer cards for trade. 5. If cards are accepted for trade within X turns, continue (clearing 'failed card types'). Otherwise: a. If only one card remains in the trade, go to #6. Otherwise: i. Remove one non-penalty card from the trade. ii. Return to #5. 6. Add card type to temporary list of failed card types. 7. Repeat from #2 (excluding 'failed card types'). I'm aware this is likely to be a sub-optimal way of solving the problem, but that's why I'm posting this question. Are there any AI- or algorithm-related concepts that I've missed and should be incorporating to make a better AI? Additionally, what are the flaws with my AI at present (I'm well aware it's probably far from complete)? Thanks in advance!
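
    For readers who want to experiment with the pseudo-code above, here is a small Python sketch of the greedy "establish target hand, offer the next-most-numerous cards" selection. It is only an illustration under assumed card representations; bull/bear handling follows the rules listed in the question, and trade acceptance, timing, and scoring are left out entirely:

        from collections import Counter

        BULL, BEAR = "BULL", "BEAR"

        def choose_trade(hand):
            """Pick the commodity to keep and the cards to offer (greedy)."""
            commodities = [c for c in hand if c not in (BULL, BEAR)]
            counts = Counter(commodities)
            # step 1: target the commodity we already hold the most of
            target, _ = counts.most_common(1)[0]
            # step 2: offer the next-most-numerous commodity (maximum of 4 cards)
            offers = [c for c, _ in counts.most_common() if c != target]
            if not offers:
                return target, []
            offer_type = offers[0]
            trade = [offer_type] * min(counts[offer_type], 4)
            # step 3: dump the bear with the trade (append if <= 3 cards, replace if 4)
            if BEAR in hand:
                if len(trade) < 4:
                    trade.append(BEAR)
                else:
                    trade[-1] = BEAR
            return target, trade

        # example: 4 Wheat, 3 Corn and the Bear -> keep Wheat, offer Corn + Bear
        print(choose_trade(["Wheat"] * 4 + ["Corn"] * 3 + [BEAR]))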

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities in their weekend – like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is I learn something new and if I enjoy my learning experience I share with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today’s applications use multiple databases from SQL for transactions and analytics, NoSQL for documents, In-Memory for caching to Indexing for search.  Provisioning and deploying these databases often require extensive expertise and time.  Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases.  Not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with traditional database setup when hosted in the cloud: Database provisioning & orchestration Slow speed due to hardware issues Poor Monitoring Tools High network latency Now if you have a great software and expert network engineer, you can continuously work on above problems and overcome them. However, not every organization have the luxury to have top notch experts in the field. Now above issues are related to infrastructure, but there are a few more problems which are related to software/application as well. Here are the top three things which can be problems if you do not have application expert: Replication and Clustering Simple provisioning of the hard drive space Automatic Sharding Well, Morpheus looks like a product build by experts who have faced similar situation in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases from MySQL, MongoDB, ElasticSearch to Reddis as a service.  Thus users can pick and chose any combination of these databases.  All of them can be provisioned in a matter of minutes with a simple and intuitive point and click user interface.  The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions.  In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are the few steps on how one can get started with Morpheus. Follow along with me.  First go to http://www.gomorpheus.com and register for a new and free account. Step 1: Signup It is very simple to signup for Morpheus. Step 2: Select your database   I use MySQL for my daily routine, so I have selected MySQL. Upon clicking on the big red button to add Instance, it prompted a dialogue of creating a new instance.   Step 3: Create User Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to User Screen. Over here you will notice once again a big red button to create a new user. I created a user with my first name.   Step 4: Configure your MySQL client I used MySQL workbench and connected to MySQL instance, which I had created with an IP address and user.   That’s it! You are connecting to MySQL instance. Now you can create your objects just like you would create on your local box. 
You will have all the features of Morpheus available when you are working with your database. Dashboard: While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. Also, with Morpheus you use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch and Redis. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • MvcExtensions - ActionFilter

    - by kazimanzurrashid
    One of the things that people often complain about is dependency injection in action filters. Since the standard way of applying action filters is to decorate either the Controller or the Action methods, there is no way you can inject dependencies into the action filter constructors. There are quite a few posts on this subject which show property injection with a custom action invoker, but all of them suffer from the same small bug (you will find that BuildUp is called more than once if the filter implements multiple interfaces, e.g. both IActionFilter and IResultFilter). MvcExtensions supports both property injection and a fluent filter configuration API. There are a number of benefits of this fluent filter configuration API over the regular attribute-based filter decoration. You can pass your dependencies in the constructor rather than through properties. Let’s say you want to create an action filter which will update the User Last Activity Date; you can create a filter like the following: public class UpdateUserLastActivityAttribute : FilterAttribute, IResultFilter { public UpdateUserLastActivityAttribute(IUserService userService) { Check.Argument.IsNotNull(userService, "userService"); UserService = userService; } public IUserService UserService { get; private set; } public void OnResultExecuting(ResultExecutingContext filterContext) { // Do nothing, just sleep. } public void OnResultExecuted(ResultExecutedContext filterContext) { Check.Argument.IsNotNull(filterContext, "filterContext"); string userName = filterContext.HttpContext.User.Identity.IsAuthenticated ? filterContext.HttpContext.User.Identity.Name : null; if (!string.IsNullOrEmpty(userName)) { UserService.UpdateLastActivity(userName); } } } As you can see, it is nothing different from a regular filter except that we are passing the dependency in the constructor. Next, we have to configure which Controller/Action methods this filter will execute for: public class ConfigureFilters : ConfigureFiltersBase { protected override void Configure(IFilterRegistry registry) { registry.Register<HomeController, UpdateUserLastActivityAttribute>(); } } You can register more than one filter for the same Controller/Action methods: registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(); You can register the filters for a specific Action method instead of the whole controller: registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(c => c.Index()); You can even set various properties of the filter: registry.Register<ControlPanelController, CustomAuthorizeAttribute>( attribute => { attribute.AllowedRole = Role.Administrator; }); The fluent filter registration also reduces the number of base controllers in your application. It is very common that we create a base controller, decorate it with action filters and then create concrete controller(s) so that the base controller's action filters are also executed in the concrete controllers.
You can do the same with a single-line statement with the fluent filter registration: Registering the Filters for All Controllers: registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type))); Registering Filters for selected Controllers: registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type) && (type.Name.StartsWith("Home") || type.Name.StartsWith("Post")))); You can also use the built-in filters in the fluent registration, for example: registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; }); With the fluent filter configuration you can even apply filters to controllers whose source code is not available to you (maybe the controller is part of a third-party component). That’s it for today; in the next post we will discuss the model binding support in MvcExtensions. So stay tuned.

    Read the article

  • Visual Studio Exceptions dialogs

    - by Daniel Moth
    Previously I covered step 1 of live debugging with start and attach. Once the debugger is attached, you want to go to step 2 of live debugging, which is to break. One way to break under the debugger is to do nothing, and just wait for an exception to occur in your code. This is true for all types of code that you debug in Visual Studio, so let's consider the following piece of C# code: 3: static void Main() 4: { 5: try 6: { 7: int i = 0; 8: int r = 5 / i; 9: } 10: catch (System.DivideByZeroException) {/*gulp. sue me.*/} 11: System.Console.ReadLine(); 12: } If you run this under the debugger, do you expect an exception on line 8? It is a trick question: you have to know whether I have configured the debugger to break when exceptions are thrown (first-chance exceptions) or only when they are unhandled. The place you do that is the Exceptions dialog, which is accessible from the Debug->Exceptions menu and on my installation looks like this: Note that I have checked all CLR exceptions. I could have expanded (as shown for the C++ case in my screenshot) and selected specific exceptions. To read more about this dialog, please read the corresponding Exception Handling debugging MSDN topic and all its subtopics. So, for the code above, the debugger will break execution due to the thrown exception (exactly as if the try..catch was not there), and I see the following Exception Thrown dialog: Note the following: I can hit continue (or hit break and then later continue) and the program will continue fine since I have a catch handler. If this were an unhandled exception, then that is what the dialog would say (instead of first-chance exception) and continuing would crash the app. That hyperlinked text ("Open Exception Settings") opens the Exceptions dialog I described further up. The coolest thing to note is the checkbox - this is new in this latest release of Visual Studio: it is a shortcut to the checkbox in the Exceptions dialog, so you don't have to open it to change this setting for this specific exception - you can toggle that option right from this dialog. Finally, if you try the code above on your system, you may observe a couple of differences from my screenshots. The first is that you may have an additional column of checkboxes in the Exceptions dialog. The second is that the last dialog I shared may look different to you. It all depends on the Debug->Options settings, and the two relevant settings are in this screenshot: The Exception Assistant is what configures the look of the UI when the debugger wants to indicate an exception to you, and the Just My Code setting controls the extra column in the Exceptions dialog. You can read more about those options on MSDN: How to break on User-Unhandled exceptions (plus Gregg’s post) and Exception Assistant. Before I leave you to go play with this stuff a bit more, please note that this level of debugging is now available for JavaScript too, and if you are looking at the Exceptions dialog and wondering what the "GPU Memory Access Exceptions" node is about, stay tuned on the C++ AMP blog ;-) Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Overview of Microsoft SQL Server 2008 Upgrade Advisor

    - by Akshay Deep Lamba
    Problem Like most organizations, we are planning to upgrade our database server from SQL Server 2005 to SQL Server 2008. I would like to know: is there an easy way to find out in advance what kind of issues one may encounter when upgrading to a newer version of SQL Server? One way of doing this is to use the Microsoft SQL Server 2008 Upgrade Advisor to plan for upgrades from SQL Server 2000 or SQL Server 2005. In this tip we will take a look at how one can use the SQL Server 2008 Upgrade Advisor to identify potential issues before the upgrade. Solution SQL Server 2008 Upgrade Advisor is a free tool designed by Microsoft to identify potential issues before upgrading your environment to a newer version of SQL Server. Below are the prerequisites which need to be installed before installing the Microsoft SQL Server 2008 Upgrade Advisor. Prerequisites for Microsoft SQL Server 2008 Upgrade Advisor: .NET Framework 2.0 or a higher version; Windows Installer 4.5 or a higher version; and Windows Server 2003 SP1 or a higher version, Windows Server 2008, Windows XP SP2 or a higher version, or Windows Vista. Download SQL Server 2008 Upgrade Advisor You can download SQL Server 2008 Upgrade Advisor from the following link. Once you have successfully installed Upgrade Advisor, follow the steps below to see how you can use this tool to identify potential issues before upgrading your environment. 1. Click Start -> Programs -> Microsoft SQL Server 2008 -> SQL Server 2008 Upgrade Advisor. 2. Click Launch Upgrade Advisor Analysis Wizard as highlighted below to open the wizard. 3. On the wizard welcome screen click Next to continue. 4. In the SQL Server Components screen, enter the Server Name and click the Detect button to identify components which need to be analyzed, and then click Next to continue with the wizard. 5. In the Connection Parameters screen choose the Instance Name and Authentication, and then click Next to continue with the wizard. 6. In the SQL Server Parameters wizard screen select the databases which you want to analyze, trace files if any, and SQL batch files if any. Then click Next to continue with the wizard. 7. In the Reporting Services Parameters screen you can specify the Reporting Server instance name and then click Next to continue with the wizard. 8. In the Analysis Services Parameters screen you can specify an Analysis Server instance name and then click Next to continue with the wizard. 9. In the Confirm Upgrade Advisor Settings screen you will be able to see a quick summary of the options which you have selected so far. Click Run to start the analysis. 10. In the Upgrade Advisor Progress screen you will be able to see the progress of the analysis. Basically, the Upgrade Advisor runs predefined rules which will help to identify potential issues that can affect your environment once you upgrade your server from a lower version of SQL Server to SQL Server 2008. 11. In the below snippet you can see that Upgrade Advisor has completed the analysis of SQL Server, Analysis Services and Reporting Services. To see the output, click the Launch Report button at the bottom of the wizard screen. 12. In the View Report screen you can see a summary of issues which can affect you once you upgrade. To learn more about each issue you can expand the issue and read the detailed description as shown in the snippet below.
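As a small supplement to the wizard walkthrough, it can be handy to take a quick inventory of the databases you are about to analyze before launching Upgrade Advisor. The following is a minimal sketch, not part of the original tip, and assumes you are on SQL Server 2005 or later (where sys.databases exposes the compatibility level):

    -- Minimal sketch: list databases with their compatibility levels and state
    -- before pointing Upgrade Advisor at the instance.
    SELECT name,
           compatibility_level,
           state_desc
    FROM sys.databases
    ORDER BY name;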

    Read the article

  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
    Just like any other developer or DBA, SQL Server Management Studio is my favorite application. At any moment of the day I have multiple instances of the application open and I am working in them. Recently, I came across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog post about it. First, create a read-only SQL file. You can make any file read-only via Right Click >> Properties >> select the Read Only attribute. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small ‘lock’ icon. This small icon indicates that the file is read-only. Now let us attempt to edit the read-only file. It will let us edit the file any way we want; however, when we attempt to save it, it shows the following pop-up. The options in the pop-up are self-explanatory and I like them. The goal of a read-only file is to prevent users from making unintended changes. However, a user should still have complete control over their own file. The user should be aware that the file is read-only, but if they want to edit the file or save it as a new file, those choices should be presented to them, and the pop-up precisely captures this. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is “Allow editing of read-only files; warn when attempt to save”. In the above scenario it was already checked. Let us uncheck it and repeat the exercise we did earlier. I closed all the earlier windows to avoid confusion. With the option unchecked, when I attempt to even modify the read-only file, it gives me a totally different pop-up screen. It gives me options like “Edit In-Memory”, “Make Writeable”, etc. When you select “Edit In-Memory”, it allows you to edit the file, and later you can save it as a new file – just like the earlier scenario we discussed. If you click Make Writeable, it removes the read-only restriction and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We’re writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications, and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better – interaction with the database is critical to application performance, so we’re looking for database top tips too. There’s a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you’ll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] – there are examples of what we’re looking for and full competition details at Application Performance: The Best of the Web. Enter the judges… As mentioned in my last blog post, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We’re now ready to reveal their secret identities! Judging your ASP.NET tips will be: Jean-Philippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He’s a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk. Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects. Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities. Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he’s worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development. And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll): Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He’s authored SQL Server Tacklebox, three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando. Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He’s been working with SQL Server since version 6.0.
Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk. Jonathan Allen, leader and founder of the PASS SQL South West user group. He’s been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He’s spoken at SQLBits and SQL in the City, as well as local user groups across the UK. He’s also a moderator at ask.sqlservercentral.com.

    Read the article

  • SQL SERVER – Learn SQL Server 2014 Online in a Day – My Latest Pluralsight Course

    - by Pinal Dave
    Click here to watch SQL Server 2014 Administration New Features. SQL Server 2014 was released earlier this year and it has been extremely popular in the Microsoft world. Here is the announcement for everyone who has been asking me to build a tutorial around SQL Server 2014: I have authored my latest Pluralsight course on the subject of SQL Server 2014. This course is 4 hours and 17 minutes long, but the best part is that it covers all the latest features of SQL Server 2014. I have built this course with the assumption that the DBA is familiar with earlier versions of SQL Server and wants to explore and learn the new features of SQL Server 2014. The Challenge I Faced: The biggest challenge I faced was how to come up with the outline for the course. The reason is that there are so many different features introduced in SQL Server 2014 that it would be difficult to cover each of them in a single course. I wanted to cover the topics which are the most relevant and useful to developers, but in addition I also wanted to cover the topics which may be useful to developers simply to know that they exist in the product. I finally decided to depend on blog readers and a few SQL experts. I reached out to 20 selected people via email and gave them a list of the topics which I should be covering in this course. They all work in different organizations and have a good understanding of the needs of DBAs and developers. Based on their feedback, I was able to come up with a very good outline which is currently very popular in the Pluralsight library. Lots of people have asked me how I was able to come up with a course content outline so accurately. The credit for that goes to the developers and DBAs who voted on the topics and helped me build a very solid outline for the course. Outline of the Course: Here is a quick outline for the course: Introduction, Backup Enhancements, Security Enhancements, Columnstore Enhancements, Online Data Operations Enhancements, Enhancements with Microsoft Azure, SSD Buffer Pool Extensions, Resource Governor IO, Miscellaneous Features, Online Index Rebuilding, Live Plans for Long Running Queries, Transaction Durability, Cardinality Estimation, In-Memory OLTP Optimization. Well, I had great fun working on the topics which I have mentioned in the outline. I am very confident that once you start the course, you will understand how each of the topics is built up and presented. I have made sure that each topic has a vivid and clear story to begin with. I first explain the story and right after that I explain the concept. Who Should Attend This Course: Everyone who has basic knowledge of SQL Server and wants to update themselves with SQL Server 2014 should attend this course. One thing I have made sure is that this course is easy to understand, and I have divided complex subjects into multiple parts. This way the learning is progressive and anyone with little knowledge of the subject has enough time to understand the presented concept. Screenshot of the Course: Here are a few screenshots of the course. How to Watch the Video Course: This course is available at Pluralsight, and you will need a valid Pluralsight login. If you do not have a Pluralsight login, you can quickly sign up for the FREE Trial. Click here to watch SQL Server 2014 Administration New Features. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video
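To give a flavor of two of the features named in the course outline above (transaction durability and In-Memory OLTP), here is a minimal illustrative sketch. It is not taken from the course; the database, table and column names are hypothetical, and the memory-optimized table assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

    -- Minimal sketch of two SQL Server 2014 additions; names are hypothetical.
    -- Allow delayed transaction log flushes at the database level.
    ALTER DATABASE DemoDB SET DELAYED_DURABILITY = ALLOWED;

    -- Create a simple In-Memory OLTP (memory-optimized) table.
    CREATE TABLE dbo.SessionCache
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        Payload   NVARCHAR(1000) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);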

    Read the article

  • Customizing UPK outputs (Part 1)

    - by [email protected]
    If you are familiar with Oracle's User Productivity Kit, you are aware that UPK is a great product for rapidly developing application training. Did you know that you can also customize the UPK outputs to incorporate your company's logo, colors, and preferred styles? There are several areas that support customization: Logo - Within the developer, you can change the logo for all outputs at one time. Player - The player output uses a style sheet that can be updated to change colors, graphics and other visual branding. Documentation - The print documentation uses a Word-based template that can be modified to match your corporate standards. I'll discuss the first one today, and we'll cover the others in subsequent blogs. Before you begin: If you are working in a multi-user environment, ensure that you have "Modify" permissions for the Styles directory under the Publishing folder. Make a copy of the current styles. This recommendation is for backup purposes. If something goes wrong, you will have a way to recover. Consider creating your own category by creating a new folder under the Styles directory, and then copying the styles into your new folder. When you upgrade to future versions, the system will overwrite the standard styles with any new feature additions and updates that have been made. With your own category, all of your customizations will remain intact. To update the logos in all outputs: From the Tools Menu, choose Customize Logo. Select the category if necessary. Browse to select your logo. You can use any size logo, in any graphic format (*.bmp, *.gif, *.jpeg, *.jpg, *.png, or *.tif). The system will make a copy of your logo and add it to each of the publishing styles. Choose OK, and the update process begins. It may take a few minutes. Helpful hints: The logo you select is used "as is" - no resizing or cropping occurs during this process. The Customize Logo process automates replacing all the logo graphics for online deployment (small_logo.gif and large_logo.gif) and the headers in the documentation outputs. You can manually replace these graphics on an individual style basis if you prefer. The recommended logo size is 230 pixels wide x 44 pixels high. Prior to updating the logos, the system will display the size of the selected logo. If you use a logo that is much larger than the recommended size, the heading area will resize to fit the new logo, but that will impact the space available for your training material. If you are using a multi-user environment, the system will check out the publishing styles to you for the logo updates. After you review the styles, remember to check them in so the rest of your team can access the new changes. I'd be interested in hearing (or seeing) how you brand your UPK. Feel free to share in the comments! --Maria Cozzolino, Manager of Requirements & UI for UPK Product Development PS. For those of you who want to customize the player and documentation NOW, check out the detailed instructions in the Publishing Content chapter of the Content Development Guide.

    Read the article

  • Oracle Fusion CRM Implementation Bootcamp for EMEA Systems Integrators - Paris July 24-26th

    - by Richard Lefebvre
    To support partner success and increase win potential with Fusion CRM, we are organizing a unique bootcamp on Fusion CRM intended for Oracle EMEA partners on July 24th to 26th. Join us for this outstanding Bootcamp and learn from Oracle Corporation in-depth know-how on Fusion CRM. The official announcement will be forthcoming, yet we wanted you to determine the appropriate candidate to attend this workshop. Further to this we will send the actual invitation to the selected candidate. Due to the limited number of seats, we will be limiting the number of registrations per SI company and will be selecting the participants. If you are interested to have one or more representatives of your company to attend this bootcamp, please send an email to [email protected] by June 18th indicating the name and email address of the participants you would like to nominate, ranked by priority. What will we cover: This Bootcamp presents the fundamental concepts of the Oracle Fusion CRM applications. It introduces you to each functional area of the product, how it is used, and what you need to consider when implementing it for an organization. While we do examine implementation considerations, we do not address the detailed steps of implementation. Instead, we direct you to the relevant resources to learn more. Topics covered: Fusion CRM Introduction Fusion CRM Security Introduction Fusion Functional Setup Manager Introduction Customer Model Introduction Customer Center Introduction Customer Data Management Introduction Marketing & Campaigns Introduction Lead Management Introduction Territory Management Introduction Territory Modeling Introduction with Exercise Opportunity Management Introduction Forecasting Introduction Analytics Introduction CRM For Microsoft Outlook Introduction Customizing with Composers Introduction Roundtable Discussions, and time for hands-on labs (day 2, 3, 4) Next Steps, available resources, ongoing learning path, partner environments, keeping in touch and feedback Bootcamp Goals: Enable a new Fusion CRM implementation team member to: Describe the scope of Oracle Fusion CRM applications Describe the basic security model Describe the customer model Perform common sales and marketing user transactions Access and navigate the Functional Setup Manager Model territories in Fusion CRM using sample business requirements Do necessary planning before implementing the offerings and options Describe the analytics available with the Fusion CRM product Describe the basic page customizations that can be done to meet business requirements Find documentation and other courses to assist in performing setup tasks Expectations: This Bootcamp program should prime the SI organization implementation consultants to attain the basic skills necessary to support a consulting practice in the delivery, scoping, pricing, and planning of your Fusion CRM Implementations. Oracle University will begin to offer additional deep skill training, starting this summer, designed to follow the Introduction Bootcamp. Participants will be expected to participate in labs, exercises, workshops and roundtable discussions with the Oracle Product Managers. Who should attend: This class is designed for your lead CRM Implementation consultants, those who will support your Fusion CRM consulting practice as it grows. These individuals may be members of a centre of excellence, or skills leadership office. The individual who is attending the bootcamp must have prior experience implementing a CRM solution. 
Intended Audience: Oracle Diamond, Platinum and Gold Level SIs (Top SIs) with specialization in Oracle Applications CRM implementations, with a commitment to achieving Fusion CRM Implementation Specialization. Commitment is expressed through an investment in a Center of Excellence/Innovation Center for Fusion CRM Applications. Individuals who will support the implementation practice as it is forming and will deliver Fusion CRM On Premise and Cloud Services implementations. Functional practice leaders, the future Fusion Application Wizards within the SI's organization. This Bootcamp is designed for people who: will deliver Fusion CRM implementations, have had little or no exposure to Fusion CRM applications, are familiar with at least one other CRM application, and have a business analyst level of technical background. Prerequisites: Please note that participants will be asked to take self-service trainings (video format) and pass the related assessments prior to joining the Bootcamp. Fees: This event is FREE of charge for Oracle partners. When: 24 July – 26 July, 2012 (8:30 - 18:00 each day, including the last day; with recommended but optional evening events on all three days from 18:00 - 20:00 hrs). Where: Paris, France (location to be defined). Travel: To make your travel hassle-free, we kindly suggest you plan your arrival in Paris on July 23rd and your departure on the 27th. Agenda: The final agenda and registration details will be issued closer to the event date.

    Read the article

  • Ubuntu 11.04 and 10.04 hang with black screen while installing from USB disk

    - by Bill
    I've been trying to install Ubuntu 11.04 from a USB flash stick, and each time I try to boot from the USB key one of two things happens: A) The screen that asks you what you would like to do (e.g. run Ubuntu from the USB key or install it) shows up and the countdown to the default option starts, but as soon as I either touch the keyboard (sometimes I press Enter or the arrow keys to select an option) or the countdown gets to zero, the screen just locks up and nothing happens no matter how long I wait. B) When I boot from the USB key the screen will flicker for a second and then go black with a flashing white underscore at the top left corner of the screen. Again, it doesn't matter how long I wait; nothing happens and pressing keys doesn't do a thing. The very first time I tried to install it I got a terminal-like screen that said something about a directory called 'casper' having an error of some sort. I have tried installing from USB using both 11.04 and 10.10. I'm about to try 10.04. I have read tons of forum posts about this but so far I haven't seen anything in the solutions that applies to me. My intention is to dual boot Windows 7 and Ubuntu. I must keep Windows as I am required to use Visual Studio for one of my college courses. Right now I'm using Wubi but I really want a full install. I can't use LVPM because it doesn't work with the version of Wubi I used. So now I'm thinking my best bet is to try to get a clean install working. I'd convert Wubi to a full install too, but there's no solution as far as I've read. So could someone tell me a reason why this is happening, or if there's something I can do to get around the problem? I'm using a Gateway LT2802u netbook with an Intel Atom N455 processor, 1GB RAM, Intel Graphics Media Accelerator 3150 graphics card, and a 250GB HDD. I don't have anything on my current Wubi install that I can't replace, so keep in mind when answering that I don't care if I lose my current settings and files from Wubi. Thanks everyone! UPDATE: I just answered my own question, so in case anyone else is having this same problem with similar hardware, do the following: When I first tried installing 11.04 I used the recommended universal installer tool to create the USB live/installation disk. That caused the original problem. Note that I had already downloaded the 11.04 ISO and did not use the included downloader from the USB creator. After that failed I used the same USB creator but had it download 10.10 for me. It also failed with the same issue. I repeated this process with unetbootin as well for both versions. Finally, I downloaded the Ubuntu 10.04 ISO and used the recommended USB creator once again. There was an error while creating the USB live install, so I reformatted the USB key as FAT32 and tried again. It created the USB key. I then booted from the USB flash drive and selected "Install Ubuntu" (exact wording was different). It worked! It took me through the process that you see shown in pictures on the Ubuntu website. I let it create the appropriate partitions for me and it simply worked. I did get a few errors while the system tried to restart after it installed. It hung on a terminal-like screen, but I pressed ENTER and it restarted. I booted into Windows 7, it checked the disks as it sensed that I had messed with a partition, then it booted into Windows normally. Now I'm going to uninstall Wubi and update my new full install of Ubuntu! I'm excited to get the benefits of a full install now.
So in the end, hopefully someone can learn from what I did.

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR. So I thought of carrying out research on SQL Server to see in which scenarios UNION is optimal and in which OR would be best. I will analyze this with a few scenarios using samples taken from the AdventureWorks database Sales.SalesOrderDetail table. Scenario 1: Selecting all columns. So we are going to select all columns, and you have a non-clustered index on the ProductID column. --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874 So Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries. Query 1 Query 2 As expected, Query 1 uses a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might have CXPACKET waits as well. Let's look at the profiler results for these two queries:
             CPU    Reads    Duration    Row Count
    OR        78     1252         389         3854
    UNION    250     7495         660         3854
You can see from the above table that the UNION query is not performing as well as the OR query, though both are returning the same number of rows (3854). These results indicate that, for the above scenario, OR should be used. Scenario 2: Non-clustered and clustered index columns only. --Query 1 : OR SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 GO --Query 2 : UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874 GO So this time, we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case, we will analyze the execution plans: Query 1 Query 2 Again, Query 2 is more complex than Query 1. Let us look at the profiler analysis:
             CPU    Reads    Duration    Row Count
    OR         0       24         208         3854
    UNION      0       38         193         3854
In this analysis, there is only a slight difference between OR and UNION. Scenario 3: Selecting all columns for different fields. Up to now, we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?
--Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber  LIKE 'D0B8%' Query 1 Query 2: As we can see, the query plan for the second query has improved. Let us see the profiler results.
             CPU    Reads    Duration    Row Count
    OR        47     1278         443         1228
    UNION     31     1334         400         1228
So in this case too, there is little difference between OR and UNION. Scenario 4: Selecting clustered index columns for different fields. Now let us go only with clustered indexes: --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber  LIKE 'D0B8%' Query 1 Query 2 Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again.
             CPU    Reads    Duration    Row Count
    OR         0      319         366         1228
    UNION      0       50         193         1228
Now see the differences: in this scenario UNION has somewhat of an advantage over OR. Conclusion Using UNION or OR depends on the scenario you are faced with, so you need to do your own analysis before selecting the appropriate method. Also, the above four scenarios are not an exhaustive list of scenarios; I selected them for broad descriptive purposes only.
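If you want to reproduce this kind of comparison on your own workload, a minimal sketch (not from the original article, but using the same AdventureWorks table) is to capture I/O and timing statistics around the two query forms and compare the output in the Messages tab:

    -- Minimal sketch: compare the OR and UNION forms yourself.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- OR form
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR ProductID = 709;

    -- UNION form
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;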

    Read the article

< Previous Page | 254 255 256 257 258 259 260 261 262 263 264 265  | Next Page >