Search Results

Search found 23568 results on 943 pages for 'select'.


  • JavaScript in PHP

    - by user506539
    I have a form where the user enters a category. The category gets inserted into the database, and then the page has to go to another page to select an image; after selection, the image id has to go into the category table. I am doing something like this:

        <?php
        error_reporting(E_ALL ^ E_NOTICE);
        $con = mysql_connect("localhost", "root", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("mydocair", $con);

        $ThirdPartyCategoryName = $_POST['ThirdPartyCategoryName'];
        $checkformembers = mysql_query("SELECT * FROM thirdpartycategorymaster
                                        WHERE ThirdPartyCategoryName = '$ThirdPartyCategoryName'");
        if (mysql_num_rows($checkformembers) != 0) {
            header('Location: newcat.php?msg=category exists');
            exit; // without this, execution falls through to mysql_query() with $sql undefined
        }

        $sql = "INSERT INTO thirdpartycategorymaster (ThirdPartyCategoryID, ThirdPartyCategoryName)
                VALUES ('{$_POST['ThirdPartyCategoryID']}', '{$_POST['ThirdPartyCategoryName']}')";
        if (!mysql_query($sql, $con)) {
            die('Error: ' . mysql_error());
        }

        $res = mysql_query("SELECT ThirdPartyCategoryID FROM thirdpartycategorymaster");
        while ($row = mysql_fetch_array($res)) {
        ?>
        <script type="text/javascript">
        // emitting the markup outside the PHP string avoids the original's
        // mismatched quotes; $row replaces the undefined row2 variable
        if (window.confirm("Do you want to insert image for category?")) {
            document.location = "imgcat.php?id=<?php echo $row['ThirdPartyCategoryID']; ?>";
        }
        </script>
        <?php
        }
        mysql_close($con); // close only after the last query; the original closed before the SELECT
        ?>
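
    A side note beyond the post: the SELECT-then-INSERT check can be collapsed into a single statement, which also removes the race between the check and the insert. A minimal sketch, assuming the table and columns above (the literal values stand in for bound parameters):

        INSERT INTO thirdpartycategorymaster (ThirdPartyCategoryID, ThirdPartyCategoryName)
        SELECT 42, 'SomeCategory'            -- placeholders; bind the real POST values here
        FROM DUAL
        WHERE NOT EXISTS (
            SELECT 1 FROM thirdpartycategorymaster
            WHERE ThirdPartyCategoryName = 'SomeCategory'
        );

    Either way, interpolating $_POST values straight into SQL is open to injection; the mysqli or PDO extensions with bound parameters are the safer route.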


  • MySQL: check and compare, and if necessary create data

    - by user2979677
    Hello, I have three tables:

    talk: ID, type_id, YEAR, NUM, NUM_LETTER, DATE, series_id, TALK, speaker_id, SCRIPREF, DONE, DOUBLE_SID, DOUBLE_TYP, LOCAL_CODE, MISSING, RESTRICTED, WHY_RESTRI, SR_S_1, SR_E_1, BCV_1, SR_S_2, SR_E_2, BCV_2, SR_S_3, SR_E_3, BCV_3, SR_S_4, SR_E_4, BCV_4, QTY_IV, organisation_id, recommended, topic_id, thumbnail, mp3_file_size, duration, version

    product_component: id, product_id, talk_id, position, version

    product: id, created, product_type_id, last_modified, last_modified_by, num_sold, current_stock, min_stock, max_stock, comment, organisation_id, series_id, name, subscription_type_id, recipient_id, discount_start, discount_amount, discount_desc, discount_finish, discount_percent, voucher_amount, audio_points, talk_id, price_override, product_desc, instant_download_status_id, downloads, instant_downloads, promote_start, promote_finish, promote_desc, restricted, from_tape, discontinued, discontinued_reason, discontinued_date, external_url, version

    I want to create a procedure that selects each id from talk, checks the product table to see whether a product exists for it, and creates one if it doesn't. My problem is that talk and product are not related directly: id in talk is related to talk_id in product_component, and id in product is related to product_id in product_component. Is there a way to do this? I tried this:

        CREATE DEFINER=`sthelensmedia`@`localhost` PROCEDURE `CreateSingleCDProducts`()
        BEGIN
            DECLARE t_id INT;
            DECLARE t_restricted BOOLEAN;
            DECLARE t_talk CHAR(255);
            DECLARE t_series_id INT;
            DECLARE p_id INT;
            DECLARE done INT DEFAULT 0;
            DECLARE cur1 CURSOR FOR
                SELECT id, restricted, talk, series_id FROM talk WHERE organisation_id = 2;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            UPDATE product SET restricted = TRUE WHERE product_type_id = 3;

            OPEN cur1;
            create_loop: LOOP
                FETCH cur1 INTO t_id, t_restricted, t_talk, t_series_id;
                IF done = 1 THEN
                    LEAVE create_loop;
                END IF;
                INSERT INTO product (created, product_type_id, last_modified, organisation_id,
                                     series_id, name, restricted)
                VALUES (NOW(), 3, NOW(), 2, t_series_id, t_talk, t_restricted);
                SELECT LAST_INSERT_ID() INTO p_id;
                INSERT INTO product_component (product_id, talk_id, position)
                VALUES (p_id, t_id, 0);
            END LOOP create_loop;
            CLOSE cur1;
        END

    Just wondering if anyone could help me.
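
    Not from the original post, but one way to make the cursor skip talks that already have a product is to drive it from a query that joins through product_component. A sketch, assuming product_component is the only link between the two tables:

        DECLARE cur1 CURSOR FOR
            SELECT t.id, t.restricted, t.talk, t.series_id
            FROM talk t
            LEFT JOIN product_component pc ON pc.talk_id = t.id
            WHERE t.organisation_id = 2
              AND pc.product_id IS NULL;   -- only talks with no product yet

    The per-row INSERT ... LAST_INSERT_ID() pairing in the loop body can stay as written, since MySQL has no set-based way to capture one generated product id per inserted row.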


  • How to use JOIN using Hibernate's session.createSQLQuery()

    - by javauser71
    Hi All, I have two entities (tables): Employee and Project. An Employee can have multiple Projects. The Project table's CREATOR_ID field refers to the Employee table's ID field, and the Employee entity maintains a list of Projects. Using EntityManager, the following query works fine:

        entityManager.createQuery(
            "select e from EmployeeDTO e, ProjectDTO p where p.id = ?1 and p.creator.id = e.id");

    But since the association is LAZY, I get the error "Could not initialize proxy - no Session" if I try to access Project info from the Employee entity. This is expected, so I am using Hibernate's Session to create the query as shown below:

        Session session = HibernateUtil.getSessionFactory().openSession();
        org.hibernate.Query q = session.createSQLQuery(
                "SELECT E FROM EMPLOYEE_TAB E, PROJECT_TAB P WHERE P.ID = " + projectId
                + " AND P.CREATOR_ID = E.ID")
            .addEntity("EmployeeDTO", EmployeeDTO.class)
            .addEntity("ProjectDTO", ProjectDTO.class);

    But I get an error like: "Column 'E' is either not in any table in the FROM list or appears within a join specification and is outside the scope of the join specification...". Can anyone suggest the right JOIN syntax for such a case? If I use "SELECT * FROM EMPLOYEE_TAB E, ........" instead, I get another error: "java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to com.im.server.dto.EmployeeDTO". Thanks in advance.
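
    For what it's worth, a sketch of the usual fix (mine, not quoted from the thread): a native SQL query has to select real columns rather than the HQL-style alias "E", and if only Employee entities are wanted there should be a single addEntity() call. Registering two entities is also why "SELECT *" came back as Object[] pairs and caused the ClassCastException. Using Hibernate's alias-injection syntax:

        Session session = HibernateUtil.getSessionFactory().openSession();
        @SuppressWarnings("unchecked")
        List<EmployeeDTO> employees = session
            .createSQLQuery(
                "SELECT {e.*} FROM EMPLOYEE_TAB e "
                + "JOIN PROJECT_TAB p ON p.CREATOR_ID = e.ID "
                + "WHERE p.ID = :projectId")
            .addEntity("e", EmployeeDTO.class)        // one alias, one entity
            .setParameter("projectId", projectId)     // bind instead of concatenating
            .list();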


  • Problem displaying images stored in a MySQL database

    - by user1400
    I have images in a blob field in my database, and I use the code below to display them, but it does not show any picture. Could someone tell me where I went wrong? This is my model:

        public function getListUser()
        {
            $select = $this->select()
                           ->order('lastname DESC');
            $adapter = new Zend_Paginator_Adapter_DbTableSelect($select);
            return $adapter;
        }

    This is my controller:

        $userModel = new Admin_Model_User();
        $adapter = $userModel->getListUser();
        $paginator = new Zend_Paginator($adapter);
        $paginator->setItemCountPerPage(1);
        $page = $this->_request->getParam('page', 1);
        $paginator->setCurrentPageNumber($page);
        $this->view->paginator = $paginator;

    This is my view code:

        <td style="width: 20%;">
            <?php echo $this->lastname ?>
        </td>
        <td style="width: 20%;"><?php
            // header() cannot work here: HTML output has already started,
            // and raw blob bytes cannot be printed into an HTML page anyway
            header("Content-type: image/gif");
            print $this->image;
        ?></td>
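
    A sketch of the usual fix, not taken from the post: serve the blob from its own controller action and point an <img> tag at that URL. The action name and lookup here are illustrative and would need to match the real application.

        public function imageAction()
        {
            // no layout or view script -- the response body is the raw image
            $this->_helper->layout->disableLayout();
            $this->_helper->viewRenderer->setNoRender(true);

            $id = (int) $this->_getParam('id');
            $userModel = new Admin_Model_User();
            $row = $userModel->find($id)->current();   // assumes a DbTable-backed model

            $this->getResponse()
                 ->setHeader('Content-Type', 'image/gif')
                 ->setBody($row->image);
        }

    The view then emits something like <img src="/admin/user/image/id/123" /> instead of printing the blob inline.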


  • Replace beginning words

    - by Newbie
    I have the tables below.

        tblInput
        Id  WordPosition  Words
        --  ------------  -----
        1   1             Hi
        1   2             How
        1   3             are
        1   4             you
        2   1             Ok
        2   2             This
        2   3             is
        2   4             me

        tblReplacement
        Id  ReplacementWords
        --  ----------------
        1   Hi
        2   are
        3   Ok
        4   This

    tblInput holds the list of words, while tblReplacement holds the words that we need to search for in tblInput; if a match is found, we need to replace (remove) those words. The catch is that words are only replaced while they remain at the beginning. For Id 1, only 'Hi' is replaced and not 'are', because 'How' comes before 'are' and 'How' is not in the tblReplacement list. For Id 2, both 'Ok' and 'This' are replaced: both are present in tblReplacement, and once the first word 'Ok' is removed, 'This' becomes the first word of Id 2, so it is removed as well. The desired output is therefore:

        Id  NewWordsAfterReplacement
        --  ------------------------
        1   How
        1   are
        1   you
        2   is
        2   me

    My approach so far:

        ;WITH Cte1 AS (
            SELECT t1.Id,
                   t1.Words,
                   t2.ReplacementWords
            FROM tblInput t1
            CROSS JOIN tblReplacement t2
        ),
        Cte2 AS (
            SELECT Id,
                   NewWordsAfterReplacement = REPLACE(Words, ReplacementWords, '')
            FROM Cte1
        )
        SELECT * FROM Cte2 WHERE NewWordsAfterReplacement <> ''

    But I am not getting the desired output; it is replacing all the matching words, not just the leading ones. Urgent help needed (SET BASED). I am using SQL Server 2005. Thanks
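
    A set-based sketch of one way through (my own, not from the thread): for each Id, find the first WordPosition whose word has no match in tblReplacement, then keep everything from that position onward. An Id whose words all match is dropped entirely, which is consistent with the rule above.

        ;WITH FirstKeep AS (
            -- first position per Id whose word is NOT in the replacement list
            SELECT i.Id, MIN(i.WordPosition) AS KeepFrom
            FROM tblInput i
            WHERE NOT EXISTS (SELECT 1
                              FROM tblReplacement r
                              WHERE r.ReplacementWords = i.Words)
            GROUP BY i.Id
        )
        SELECT i.Id, i.Words AS NewWordsAfterReplacement
        FROM tblInput i
        JOIN FirstKeep f
          ON f.Id = i.Id
        WHERE i.WordPosition >= f.KeepFrom
        ORDER BY i.Id, i.WordPosition;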


  • How to split column values using a stored procedure

    - by user1444281
    I have two tables. Table 1 is:

        SELECT * FROM dbo.TBL_WD_WEB_DECK

        WD_ID  WD_TITLE
        -----  --------
        1
        2

    and the data in the 2nd table is:

        WS_ID  WS_WEBPAGE_ID  WS_SPONSORS_ID  WS_STATUS
        -----  -------------  --------------  ---------
        1      1              1,2,3,4         Y

    I wrote the following stored procedure to insert the data into both tables, catching the identity of the main table dbo.TBL_WD_WEB_DECK (WD_ID is related to WS_WEBPAGE_ID). In the update action I wrote a cursor to split the WS_SPONSORS_ID column by calling a split function, but it is not working. The stored procedure is:

        ALTER PROCEDURE [dbo].[SP_Example_SPLIT]
        (
            @action char(1),
            @wd_id int OUT,
            @wd_title varchar(50),
            @ws_webpage_id int,
            @ws_sponsors_id varchar(250),
            @ws_status char(1)
        )
        AS
        BEGIN
            SET NOCOUNT ON;

            IF (@action = 'A')
            BEGIN
                INSERT INTO dbo.TBL_WD_WEB_DECK (WD_TITLE) VALUES (@wd_title);

                DECLARE @X INT;
                SET @X = @@IDENTITY;

                INSERT INTO dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID, WS_SPONSORS_ID, WS_STATUS)
                VALUES (@ws_webpage_id, @ws_sponsors_id, @ws_status);
            END
            ELSE IF (@action = 'U')
            BEGIN
                UPDATE dbo.TBL_WD_WEB_DECK
                SET WD_TITLE = @wd_title
                WHERE WD_ID = @wd_id;

                UPDATE dbo.TBL_WD_SPONSORS
                SET WS_STATUS = 'N'
                WHERE WS_SPONSORS_ID NOT IN (@ws_sponsors_id)
                  AND WS_WEBPAGE_ID = @wd_id;

                /* Cursor to split the values in the WS_SPONSORS_ID column,
                   calling the split function */
                DECLARE @var int;
                DECLARE splt CURSOR FOR
                    SELECT * FROM iter_simple_intlist_to_tbl(@ws_sponsors_id);

                OPEN splt;
                FETCH NEXT FROM splt INTO @var;
                WHILE (@@FETCH_STATUS = 0)
                BEGIN
                    IF NOT EXISTS (SELECT *
                                   FROM dbo.TBL_WD_SPONSORS
                                   WHERE WS_WEBPAGE_ID = @wd_id
                                     AND WS_SPONSORS_ID = @var)
                    BEGIN
                        INSERT INTO dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID, WS_SPONSORS_ID)
                        VALUES (@wd_id, @var);
                    END
                    FETCH NEXT FROM splt INTO @var;  -- missing in the original, so the loop never advanced
                END
                CLOSE splt;       -- the original closed/deallocated a cursor named SPONSOR
                DEALLOCATE splt;
            END
        END

    The result I want is: if I insert data into WD_ID and into the WS_SPONSORS_ID column, the WS_SPONSORS_ID value should be split and compared with WD_ID. The result I need is:

        WD_ID  WD_TITLE
        -----  --------
        1      TEST
        2      TEST1
        3

        WS_ID  WS_WEBPAGE_ID  WS_SPONSORS_ID  WS_STATUS
        -----  -------------  --------------  ---------
        1      1              1               Y
        2      1              2               N
        3      1              3               Y

    If I pass the string '1,2,3' in WS_SPONSORS_ID, it has to split like the above. Can you help?
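
    An aside that is mine, not the poster's: once a table-valued split function exists, the whole cursor can usually be replaced with one set-based INSERT. A sketch; the column name "number" is a guess for whatever single int column iter_simple_intlist_to_tbl returns, so match it to the real function:

        INSERT INTO dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID, WS_SPONSORS_ID)
        SELECT @wd_id, s.number
        FROM iter_simple_intlist_to_tbl(@ws_sponsors_id) AS s
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.TBL_WD_SPONSORS t
                          WHERE t.WS_WEBPAGE_ID = @wd_id
                            AND t.WS_SPONSORS_ID = s.number);

    The same function also fixes the earlier NOT IN (@ws_sponsors_id) line, which currently compares each stored id against the whole comma-separated string rather than against the individual values.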


  • Parallel MySQL queries for HTML table - WHILE(x or y)?

    - by Beti Chode
    I'm trying to create a table using PHP. What I need is a table with two columns. I have an SQL table with 4 fields: primary key id, language, word and definition. The language for each row is either Arabic or Russian. I want output like the following:

        |     definition     |
        |____________________|
        |          |         |
        |   rus1   |  arab1  |
        |   rus2   |  arab2  |
        |   rus3   |  arab3  |
        |   rus4   |         |

    So it splits the list by English word, creates a header row for each English word, then lists the Russian equivalents in the left column and the Arabic ones in the right. However, there are often not the same number of each. What I am doing right now is running a WHILE loop inside a WHILE loop. The outer loop runs fine, but I think I am doing the inner loop wrong. Here is the bulk of the code:

        $definitions = mysql_query("SELECT DISTINCT definition FROM words");
        WHILE ($row = mysql_fetch_array($definitions)) {
            ECHO '<tr><th colspan="2">' . $row['definition'] . '</th></tr>';
            $russian = "SELECT * FROM words WHERE language='Russian' AND definition='" . $row['definition'] . "'";
            $arabic  = "SELECT * FROM words WHERE language='Arabic'  AND definition='" . $row['definition'] . "'";
            WHILE ($rus = mysql_fetch_array($russian) or $arb = mysql_fetch_array($arabic)) {
                ECHO '<tr><td>' . $rus['word'] . '</td><td>' . $arb['word'] . '</td></tr>';
            }
        }

    Sadly I am getting something like this:

        |     definition     |
        |____________________|
        |          |         |
        |   rus1   |         |
        |   rus2   |         |
        |   rus3   |         |
        |   rus4   |         |
        |          |  arab1  |
        |          |  arab2  |
        |          |  arab3  |

    Not sure what other way I can do this? I tried changing the or to ||, thinking the different precedence would cause another outcome, but then I get ONLY the Russian column. I'm out of ideas, you guys are my only hope!
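
    A sketch of a fix (mine, in the same legacy mysql_* style as the post): the query strings need to be run through mysql_query(), and both result sets must be fetched on every pass. With `or` or `||` the right-hand fetch is skipped whenever the left one succeeds, which is why the columns come out staggered.

        $rus_res = mysql_query($russian);
        $arb_res = mysql_query($arabic);
        while (true) {
            $rus = mysql_fetch_array($rus_res);   // fetch both unconditionally
            $arb = mysql_fetch_array($arb_res);
            if (!$rus && !$arb) {
                break;                            // both lists exhausted
            }
            echo '<tr><td>' . ($rus ? $rus['word'] : '')
               . '</td><td>' . ($arb ? $arb['word'] : '') . '</td></tr>';
        }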


  • LINQ Many to Many With In or Contains Clause (and a twist)

    - by Chris
    I have a many-to-many table called PropertyPets. It has a composite primary key consisting of a PropertyID (from a Property table) and a PetID (from a Pet table). Next I have a search screen where people can select multiple pets from a jQuery multiple-select dropdown. Let's say somebody selects Dogs and Cats. Now, I want to return all properties that have BOTH dogs and cats in the PropertyPets table. I'm trying to do this with LINQ to SQL. I've looked at the Contains clause, but it doesn't seem to fit my requirement:

        var result = properties.Where(p => search.PetType.Contains(p.PropertyPets));

    Here, search.PetType is an int[] array of the ids for Dog and Cat (which were selected in the multiple-select dropdown). The problem is, first, that Contains expects a single value, not an IEnumerable of type PropertyPet. And second, I need to find the properties that have BOTH dogs and cats, not just one or the other. Thank you for any pointers.
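
    For what it's worth, here is a sketch of the shape that usually works (mine, not from the thread; the navigation and column names follow the question and may need adjusting). LINQ to SQL only translates local collections through Contains(), so instead of nesting quantifiers over the array, count the matching link rows; because (PropertyID, PetID) is the composite key, a property can match each selected id at most once:

        int[] petIds = search.PetType;   // e.g. the ids for Dog and Cat
        var result = properties.Where(p =>
            p.PropertyPets.Count(pp => petIds.Contains(pp.PetID)) == petIds.Length);

    A property comes back only when every selected pet id has a PropertyPets row, which gives the "contains all" behavior.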


  • JavaScript onchange event doesn't change div content

    - by Eli_jun
    I want to replace the content of my <div id="score_here">10</div> when I select a value from my dropdown <select id="ansRate" onchange="javascript:add_score(this);">. I have set up my JavaScript function as shown below, but it doesn't work. What is the problem with this? Thanks for the help. Here is my code:

        <div>
            <select id="ansRate" onchange="javascript:add_score(this);">
                <option value="0">Answer Rate</option>
                <option value="1">1</option>
                <option value="2">2</option>
                <option value="3">3</option>
                <option value="4">4</option>
                <option value="5">5</option>
                <option value="6">6</option>
                <option value="7">7</option>
                <option value="8">8</option>
                <option value="9">9</option>
                <option value="10">10</option>
            </select>
        </div>

        <div id="score_here" style="width:100px;height:100px;background-color:yellow;">
            81
        </div>

        <script type="text/javascript">
        function add_score(a) {
            var string = $("#questionScore").text();   // note: no #questionScore element exists in the markup above
            var thenum = string.replace(/^\D+/g, '');
            var add = $(a).val();
            var total = parseInt(thenum) + parseInt(add);
            document.getElementById('score_here').innerHTML = total;   // was getElementByID, which is not a function
        }
        </script>


  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code.

    When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enables you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements – without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#.

    There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH).

    The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically.

    Unfortunately, Visual Studio does not provide "out-of-the-box" support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own.

    The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests, in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry.

    Rejected Approach: Browser Launchers

    One popular approach to executing JavaScript unit tests is to use a browser as a test driver. When you use a browser as a test driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers():
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
            <title>Using QUnit</title>
            <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" />
        </head>
        <body>
            <h1 id="qunit-header">QUnit example</h1>
            <h2 id="qunit-banner"></h2>
            <div id="qunit-testrunner-toolbar"></div>
            <h2 id="qunit-userAgent"></h2>
            <ol id="qunit-tests"></ol>
            <div id="qunit-fixture">test markup, will be hidden</div>
            <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
            <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
            <script type="text/javascript">
                // The function to test
                function addNumbers(a, b) {
                    return a + b;
                }

                // The unit test
                test("Test of addNumbers", function () {
                    equals(4, addNumbers(1, 3), "1+3 should be 4");
                });
            </script>
        </body>
        </html>

    This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass.

    The idea is that you can quickly refresh this QUnit test-driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you know very quickly whenever you have broken your JavaScript code.

    While easy to set up, there are several big disadvantages to this approach to executing JavaScript unit tests:

    - You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don't appear in a single location, you are more likely to introduce bugs into your code without noticing it.
    - Because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build.

    A more sophisticated approach to using a browser as a test driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach:

    - Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/.
    - LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501
    - jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/
    - TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server.
      Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki
    - Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/

    All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a "living and breathing" browser.

    If you create an ASP.NET Unit Test, then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test – versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure.

    Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code.

    Using Server-Side JavaScript for JavaScript Unit Tests

    A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript, then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser:

    - Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/
    - V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/
    - JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages.

    Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests.

    Using the Microsoft Script Control

    There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET.
    There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher-level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0.

    Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

    Creating the JavaScript Code to Test

    To keep things simple, let's imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

        MvcApplication1\Scripts\Math.js

        function addNumbers(a, b) {
            return 5;
        }

    Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project's Scripts folder (save the file in your actual MVC application and not your MVC test application).

    Creating the JavaScript Test Helper Class

    To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

    - LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.
    - ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

    Here's the code for the JavaScriptTestHelper class:

        JavaScriptTestHelper.cs

        using System;
        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using MSScriptControl;

        namespace MvcApplication1.Tests
        {
            public class JavaScriptTestHelper : IDisposable
            {
                private ScriptControl _sc;
                private TestContext _context;

                /// <summary>
                /// You need to use this helper with Unit Tests and not
                /// Basic Unit Tests because you need a Test Context
                /// </summary>
                /// <param name="testContext">Unit Test Test Context</param>
                public JavaScriptTestHelper(TestContext testContext)
                {
                    if (testContext == null)
                    {
                        throw new ArgumentNullException("TestContext");
                    }
                    _context = testContext;

                    _sc = new ScriptControl();
                    _sc.Language = "JScript";
                    _sc.AllowUI = false;
                }

                /// <summary>
                /// Load the contents of a JavaScript file into the
                /// Script Engine.
                /// </summary>
                /// <param name="path">Path to JavaScript file</param>
                public void LoadFile(string path)
                {
                    var fileContents = File.ReadAllText(path);
                    _sc.AddCode(fileContents);
                }

                /// <summary>
                /// Pass the path of the test that you want to execute.
                /// </summary>
                /// <param name="testMethodName">JavaScript function name</param>
                public void ExecuteTest(string testMethodName)
                {
                    dynamic result = null;
                    try
                    {
                        result = _sc.Run(testMethodName, new object[] { });
                    }
                    catch
                    {
                        var error = ((IScriptControl)_sc).Error;
                        if (error != null)
                        {
                            var description = error.Description;
                            var line = error.Line;
                            var column = error.Column;
                            var text = error.Text;
                            var source = error.Source;
                            if (_context != null)
                            {
                                var details = String.Format("{0} \r\nLine: {1} Column: {2}",
                                                            source, line, column);
                                _context.WriteLine(details);
                            }
                        }
                        throw new AssertFailedException(error.Description);
                    }
                }

                public void Dispose()
                {
                    _sc = null;
                }
            }
        }

    Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

    Creating the JavaScript Unit Test

    Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

        MvcApplication1.Tests\JavaScriptTests\MathTest.js

        /// <reference path="JavaScriptUnitTestFramework.js"/>
        function testAddNumbers() {
            // Act
            var result = addNumbers(1, 3);

            // Assert
            assert.areEqual(4, result, "addNumbers did not return right value!");
        }

    The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file:

        MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

        var assert = {
            areEqual: function (expected, actual, message) {
                if (expected !== actual) {
                    throw new Error("Expected value " + expected
                        + " is not equal to " + actual + ". " + message);
                }
            }
        };

    There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.
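
    As a sketch of how that might look, here are two extra assertions (my own additions, not part of the article's download) that follow the same throw-on-failure pattern as areEqual():

        assert.isTrue = function (actual, message) {
            // fails the test unless the value is exactly the boolean true
            if (actual !== true) {
                throw new Error("Expected true but was " + actual + ". " + message);
            }
        };

        assert.areNotEqual = function (notExpected, actual, message) {
            // fails the test when the two values are strictly equal
            if (notExpected === actual) {
                throw new Error("Did not expect value " + notExpected + ". " + message);
            }
        };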
    Deploying the JavaScript Test Files

    This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder, or Visual Studio won't be able to find your JavaScript files (and will report an error) when you execute your unit tests.

    You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button.

    After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project's JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.

    Creating the Visual Studio Unit Test

    The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

        [TestMethod]
        public void TestAddNumbers()
        {
            var jsHelper = new JavaScriptTestHelper(this.TestContext);

            // Load JavaScript files
            jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
            jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
            jsHelper.LoadFile("MathTest.js");

            // Execute JavaScript Test
            jsHelper.ExecuteTest("testAddNumbers");
        }

    This code uses the JavaScriptTestHelper to load three files:

    - JavaScriptUnitTestFramework.js – Contains the assert functions.
    - Math.js – Contains the addNumbers() function from your MVC application which is being tested.
    - MathTest.js – Contains the JavaScript unit test function.

    Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

    Running the Visual Studio JavaScript Unit Test

    After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won't work correctly because Visual Studio won't detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)

    The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. If you Run All Tests in Solution, you will notice that the TestAddNumbers() JavaScript test has failed. That is good, because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

    Summary

    The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac


  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while, using those features inside of .NET applications hasn't been as straightforward as it could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom projects like SharpMap, or a hack using the SQL Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL Server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5, the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with a single location and distance calculations, which is probably the most common usage scenario.

    SQL Server Transact-SQL Syntax for Spatial Data

    Before we look at how things work with Entity Framework, let's take a look at how SQL Server allows you to use spatial data, to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward.

    Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:

        CREATE TABLE [dbo].[Geo](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [Location] [geography] NULL,
            [Long] [float] NOT NULL,
            [Lat] [float] NOT NULL
        )

    Now using plain SQL you can insert data into the table using the geography::STGeomFromText SQL CLR function:

        insert into Geo( Location, long, lat )
        values ( geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326),
                 -121.527200, 45.712113 )

        insert into Geo( Location, long, lat )
        values ( geography::STGeomFromText('POINT(-121.517265 45.714240)', 4326),
                 -121.517265, 45.714240 )

        insert into Geo( Location, long, lat )
        values ( geography::STGeomFromText('POINT(-121.511536 45.714825)', 4326),
                 -121.511536, 45.714825 )

    The STGeomFromText function accepts a string that describes a geometric item (a point here, but it can also be a line, a path, a polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier), which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326, as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format.

    Once the location data is in the database, you can query the data and do simple distance computations very easily. Distance calculations compare two points in space using a direct-line calculation. For our example, I'll compare a new point to all the points in the database.
    Using the Location field, the SQL looks like this:

        -- create a source point
        DECLARE @s geography
        SET @s = geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326);

        --- return the ids
        select ID,
               Location as Geo,
               Location.ToString() as Point,
               @s.STDistance(Location) as distance
        from Geo
        order by distance

    The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference, for which you can use the geography::STDistance function.

    The STDistance function returns the straight-line distance between the passed-in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point, so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU, typically!).

    These geolocation functions are also available to you if you don't use the Geography/Geometry types but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field and produces the same result as the previous query:

        --- using float fields
        select ID,
               geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326),
               geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326).ToString(),
               @s.STDistance(geography::STGeomFromText('POINT(' + STR(long, 15, 7) + ' ' + STR(lat, 15, 7) + ')', 4326)) as distance
        from Geo
        order by distance

    Spatial Data in the Entity Framework

    Prior to Entity Framework 5.0 on .NET 4.5, consuming the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5, however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions, which in turn can be used to do common spatial queries like the ones I described in the SQL syntax above.

    The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude.

    Creating a Model with Spatial Data

    Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography:

        public class GeoLocationContext : DbContext
        {
            public DbSet<GeoLocation> Locations { get; set; }
        }

        public class GeoLocation
        {
            public int Id { get; set; }
            public DbGeography Location { get; set; }
            public string Address { get; set; }
        }

    That's all there's to it. When you run this against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier.

    Adding Spatial Data to the Database

    Next, let's add some data to the table that includes some latitude and longitude data.
    An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right-click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data, create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:

        [TestMethod]
        public void AddLocationsToDataBase()
        {
            var context = new GeoLocationContext();

            // remove all
            context.Locations.ToList().ForEach(loc => context.Locations.Remove(loc));
            context.SaveChanges();

            var location = new GeoLocation()
            {
                // Create a point using the native DbGeography factory method
                Location = DbGeography.PointFromText(
                    string.Format("POINT({0} {1})", -121.527200, 45.712113), 4326),
                Address = "301 15th Street, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.714240, -121.517265),
                Address = "The Hatchery, Bingen"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                // Create a point using a helper function (lat/long)
                Location = CreatePoint(45.708457, -121.514432),
                Address = "Kaze Sushi, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.722780, -120.209227),
                Address = "Arlington, OR"
            };
            context.Locations.Add(location);

            context.SaveChanges();
        }

    As promised, a DbGeography object has to be created with one of the static factory methods provided on the type, as the Location.Longitude and Location.Latitude properties are read-only. Here I'm using PointFromText(), which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples).

    You'll probably want to create a helper method to make the creation of Points easier, to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:

        public static DbGeography CreatePoint(double latitude, double longitude)
        {
            var text = string.Format(CultureInfo.InvariantCulture.NumberFormat,
                                     "POINT({0} {1})", longitude, latitude);
            // 4326 is the most common coordinate system used by GPS/Maps
            return DbGeography.PointFromText(text, 4326);
        }

    Using the helper, the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around, because latitude-then-longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is saved, the data is written into the database using the SQL Geography type, which looks the same as in the earlier SQL examples shown.

    Querying

    Once you have some location data in the database, it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location - ordered by distance.
    Using LINQ to Entities, a query like this is easy to construct:

        [TestMethod]
        public void QueryLocationsTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 kilometers ordered by distance
            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) < 5000)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance);
            }
        }

    This example produces:

        301 15th Street, Hood River (0 meters)
        The Hatchery, Bingen (809 meters)
        Kaze Sushi, Hood River (1,074 meters)

    The first point in the database is the same as the source point I'm comparing against, so the distance is 0. The other two are within the 5 km radius, while the Arlington location, which is 65 miles or so out, is not returned. The result is ordered by distance from closest to furthest away.

    In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5 km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say, when you have two points in the same database for things like routes).

    What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this.

    Meters vs. Miles

    As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (they can be found in GeoUtils.cs of the sample project):

        /// <summary>
        /// Convert meters to miles
        /// </summary>
        /// <param name="meters"></param>
        /// <returns></returns>
        public static double MetersToMiles(double? meters)
        {
            if (meters == null)
                return 0F;
            return meters.Value * 0.000621371192;
        }

        /// <summary>
        /// Convert miles to meters
        /// </summary>
        /// <param name="miles"></param>
        /// <returns></returns>
        public static double MilesToMeters(double? miles)
        {
            if (miles == null)
                return 0;
            return miles.Value * 1609.344;
        }

    Using these two helpers, you can query on miles like this:
        [TestMethod]
        public void QueryLocationsMilesTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 miles ordered by distance
            var fiveMiles = GeoUtils.MilesToMeters(5);
            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n1} miles)", location.Address,
                                  GeoUtils.MetersToMiles(location.Distance));
            }
        }

    which produces:

        301 15th Street, Hood River (0.0 miles)
        The Hatchery, Bingen (0.5 miles)
        Kaze Sushi, Hood River (0.7 miles)

    Nice 'n simple.

    .NET 4.5 Only

    Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why the decision was made to add these types to System.Data.Entity rather than to the frequently updated EntityFramework assembly, which would have possibly made this work in .NET 4.0, is beyond me, especially given that there are no native .NET framework spatial types to begin with.

    I also find it odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general-purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework specific types are easy to use and work well.

    Working with pre-.NET 4.5 Entity Framework and Spatial Data

    If you can't go to .NET 4.5 just yet, you can still use spatial features in Entity Framework, but it's a lot more work, as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same T-SQL syntax I showed earlier, using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

        [TestMethod]
        public void RawSqlEfAddTest()
        {
            string sqlFormat = @"insert into GeoLocations( Location, Address)
                                 values ( geography::STGeomFromText('POINT({0} {1})', 4326), @p0 )";
            var sql = string.Format(sqlFormat, -121.527200, 45.712113);
            Console.WriteLine(sql);

            var context = new GeoLocationContext();
            Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
        }

    Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice but is required in this case. I was unable to use ExecuteSqlCommand() and its named-parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required, as the following does not work:

        string sqlFormat = @"insert into GeoLocations( Location, Address)
                             values ( geography::STGeomFromText('POINT(@p0 @p1)', 4326), @p2 )";
        context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street");

    Explicitly assigning the point value with string.Format works, however.

    There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude) and you can return calculated values like distance.
    Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery method:

        [TestMethod]
        public void RawSqlEfQueryTest()
        {
            var sqlFormat = @"
                DECLARE @s geography
                SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);
                SELECT Address,
                       Location.ToString() as GeoString,
                       @s.STDistance(Location) as Distance
                FROM GeoLocations
                ORDER BY Distance";
            var sql = string.Format(sqlFormat, -121.527200, 45.712113);

            var context = new GeoLocationContext();
            var locations = context.Database.SqlQuery<ResultData>(sql);

            Assert.IsTrue(locations.Count() > 0);
            foreach (var location in locations)
            {
                Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance);
            }
        }

        public class ResultData
        {
            public string GeoString { get; set; }
            public double Distance { get; set; }
            public string Address { get; set; }
        }

    Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work.

    Summary

    For the current project I'm working on, we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries, and it was worth taking a chance with pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial-specific results. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)

    Resources: Download Sample Project

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, SQL Server, .NET


  • Good SQL error handling in Stored Procedures

    - by developerit
    When writing SQL procedures, it is really important to handle errors carefully; doing so will probably save you effort, time and money. I have been working with MS-SQL 2000 and MS-SQL 2005 for many years now (I have not had the opportunity to work with MS-SQL 2008 yet) and I want to share with you how I handle errors in a T-SQL stored procedure. This code has been working for many years without a hitch.

    N.B.: As another best practice, I suggest using only ONE level of TRY ... CATCH and only ONE level of TRANSACTION encapsulation, as doing otherwise may not be 100% safe.

        BEGIN TRANSACTION;
        BEGIN TRY
            -- Code in transaction goes here

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- Rollback on error
            ROLLBACK TRANSACTION;

            -- Raise the error with the appropriate message and error severity
            DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int;
            SELECT @ErrMsg = ERROR_MESSAGE(),
                   @ErrSeverity = ERROR_SEVERITY();
            RAISERROR(@ErrMsg, @ErrSeverity, 1);
        END CATCH;

    In conclusion, I will just mention that I have been using this code with .NET 2.0 and .NET 3.5 and it works like a charm. The .NET TDS parser throws back a SqlException, which is ideal to work with.
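
    To illustrate that last point, here is a minimal sketch of the calling side (my example, not the author's; the procedure name and connection string are placeholders). The RAISERROR in the CATCH block surfaces as a SqlException whose Message and Class carry the original error text and severity:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class StoredProcCaller
        {
            static void Main()
            {
                try
                {
                    using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                    using (var cmd = new SqlCommand("dbo.MyProcedure", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        conn.Open();
                        cmd.ExecuteNonQuery();
                    }
                }
                catch (SqlException ex)
                {
                    // Message/Class hold the text and severity passed to RAISERROR
                    Console.WriteLine("{0} (severity {1})", ex.Message, ex.Class);
                }
            }
        }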


  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? With Remote Potato, Windows 7 Media Center users can now schedule recordings remotely from their phones or mobile devices.

    How it Works

    Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we'll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is completed, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

    Installing Remote Potato

    Download and install Remote Potato on the Media Center PC (see the download link below). If you plan to stream any recorded TV, you'll also want to install the streaming pack located on the same page. It isn't required to stream all shows, only shows that require the AC3 audio codec.

    Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You'll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes. The Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you'll have the option to run Remote Potato on startup and minimized in the system tray. If you're running Media Center on a dedicated HTPC, you'll probably want to enable both startup options.

    Forwarding Ports on Your Router

    You'll need to forward a couple of ports on your router. By default, these will be ports 9080 and 9081. In this example we're using a Linksys WRT54GL router; however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button.

    Note: It's highly recommended that you configure the home computer running Media Center & Remote Potato with a static IP address.

    Find your IP Address

    You'll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

    Dynamic DNS

    This is an optional step, but it's highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is associate your home router's external IP address with a domain name. Every time your home router is assigned a new IP address by your ISP, the domain name is updated to point to your new IP address. Remote Potato's user interface is accessed over the Internet by connecting to your router's IP address followed by a colon and the port number
(Ex: XXX.XXX.XXX.XXX:9080) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a 3rd party provider like DynDNS.com to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see link at the end of the article) and sign up for a free domain name. You’ll need to register and confirm by email.   Once you’ve signed in and selected your domain name, click Activate Services. You’ll get a confirmation message that your domain name has been activated.    On the Linksys WRT54GL, click on the Setup tab and then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop down list.   With DynDNS, you’ll need to fill in the username and password you signed up with at the DynDNS website and the hostname you chose. Note: You can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080 Logging in to Remote Potato and Recording a Show Once you connect, you’ll see the start page. To view the TV listings, click on TV Guide. You’ll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to the listings at that time of the day.  Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or, jump to a specific day with the day and date buttons at the top right.   To set up a recording, click on a program.   You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.   Remote Potato on Mobile Devices Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: For any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings. Select your program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.   Streaming Recorded TV Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream. Click on Play. If you receive this error message, you’ll need to install the streaming pack for Remote Potato. This is found on the same download page as the installation files. (See link below) The Begin from slider allows you to start playback from the start (by default) or from a different point in the program by moving the slider. The Quality (bitrate) setting allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience as it provided smooth, quality video playback. We experienced significant stuttering during playback using the Ultra High setting.   Click Start when you are ready to begin. When playback begins you’ll see a slider at the top right.   Move the slider left or right to increase or decrease the size of the video. There’s also a button to switch to full screen.   Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious. 
The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center. Downloads and Links Download Remote Potato and Streaming Pack Find your IP address Sign Up for a Domain Name at DynDNS.com

    Read the article

  • Adding a hyperlink in a client report definition file (RDLC)

    - by rajbk
    This post shows you how to add a hyperlink to your RDLC report. In a previous post, I showed you how to create an RDLC report. We have been given a requirement to add a column to the report we created earlier, the Northwind Product report; the column will contain hyperlinks which are unique per row.  The URLs will be RESTful with the ProductID at the end. Clicking on a URL will take the user to an address like http://localhost/products/3, where 3 is the primary key of the product row clicked on. To start off, open the RDLC and add a new column to the product table.   Add text to the header (Details) and row (Product Website). Right click on the row (not the header) and select “TextBox properties” Select Action – Go to URL. You could hard code a URL here, but what we need is a URL that changes based on the ProductID.   Click on the expression button (fx). The expression builder gives you access to several functions and constants including the fields in your dataset. See this reference for more details: Common Expressions for ReportViewer Reports. Add the following expression: = "http://localhost/products/" & Fields!ProductID.Value Click OK to exit the Expression Builder. The report will not render because hyperlinks are disabled by default in the ReportViewer control. To enable them, add the following in your page load event (where rvProducts is the ID of your ReportViewer control): protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { rvProducts.LocalReport.EnableHyperlinks = true; } } We want our links to open in a new window, so set the HyperLinkTarget property of the ReportViewer control to “_blank”   We are done adding hyperlinks to our report. Clicking on the links for each product pops open a new window. The URL has the ProductID added at the end. Enjoy!
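    As a quick recap, here is a minimal sketch putting both settings in the code-behind (rvProducts is the ReportViewer ID used above; setting the target from code via the HyperlinkTarget property is assumed to be equivalent to the designer setting):

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Hyperlinks are disabled by default in the ReportViewer control.
            rvProducts.LocalReport.EnableHyperlinks = true;
            // Assumed code equivalent of the HyperLinkTarget designer property described above.
            rvProducts.HyperlinkTarget = "_blank";
        }
    }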

    Read the article

  • Cannot enable network discovery on Windows Server 2008 R2

    - by dariom
    I'm trying to enable the Network Discovery feature on a newly installed Windows Server 2008 R2 instance. The network connection is in the Home or Work profile (it is not domain joined). These are the steps I've followed: Within the Network and Sharing Center I select Change advanced sharing settings. Then I select the Turn on network discovery option for the current network profile (Home or Work). I then click Save changes. If I then go back to the Advanced sharing settings screen, the Turn off network discovery option is selected and the machine is not visible to others within the Network node in Windows Explorer. Things I've checked: I can ping the server and connect to it using the machine name/IP address. The Windows Firewall has exceptions for Network Discovery for both Private and Public networks. File and Printer sharing is enabled and I can transfer files to/from the server by connecting to the server using a UNC path. What am I missing here?

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
    This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter area region. The filter region is enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting and an icon shows the sort direction. Strongly Typed View Models The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view. The following listing shows the ProductViewModel. This class will be used to hold information about a Product. We use attributes to specify if the property should be hidden and what its heading in the table should be. This metadata will be used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)]) but are needed because we will be using those for filtering when writing our LINQ query.
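    (The original listing did not survive in this copy of the post; below is a minimal sketch reconstructed from the description above. The exact attribute choices and nullable types are assumptions, not rajbk's original code.)

    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    public class ProductViewModel
    {
        // Hidden from the grid, but needed for filtering in the LINQ query.
        [ScaffoldColumn(false)]
        public int ProductID { get; set; }

        [DisplayName("Product Name")]
        public string ProductName { get; set; }

        [ScaffoldColumn(false)]
        public int? SupplierID { get; set; }

        [ScaffoldColumn(false)]
        public int? CategoryID { get; set; }
    }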
    The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. MvcContrib expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library; it also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on. It will also hold the state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in ViewState when working with WebForms). With MVC there is no state storage, so all state has to be fetched and passed back to the view. The GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction. The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier. <%Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> <% Html.RenderPartial("SearchResults", Model); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> The View contains a partial view "SearchFilters" and passes it the ProductFilterViewModel. The SearchFilters view uses this model to render all the search lists and the textbox. The partial view "Pager" uses the ProductPagedList, which implements the interface IPagination. The "Pager" view contains the MvcContrib Pager helper used to render the paging information. This view is repeated twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across Views. The partial view "SearchResults" uses the ProductListContainerViewModel. This partial view contains the MvcContrib Grid, which needs both the ProductPagedList and GridSortOptions to render itself. The Controller Action An example of such a request is /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method. /// <summary> /// This method takes in a filter list, paging/sort options and applies /// them to an IQueryable of type ProductViewModel /// </summary> /// <returns> /// The return object is a container that holds the sorted/paged list, /// state for the filters and state about the current sorted column /// </returns> public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) {   var productList = productRepository.GetProductsProjected();   // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; }   // Filter on SupplierID if (supplierID.HasValue) { productList = productList.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); }   // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); }   // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill();   // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ?? 1, 10);     var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions };   return View(productListContainer); } The supplier, category and productName filters are applied to this IQueryable if any are present in the request. The ProductPagedList is created by applying a sort order and calling the AsPagination method. Finally the ProductListContainerViewModel is created and returned to the view. You have seen how to use the MvcContrib Grid and Pager to render a clean, lightweight UI with strongly typed views. You also saw how to use partial views to get data from the strongly typed model passed to them from the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don’t forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip My name is Kobayashi. I work for Keyser Soze.

    Read the article

  • LinqPad with Azure Table Storage

    - by Sarang
    LinqPad, as we all know, has been a wonderful tool for running ad-hoc queries. With Windows Azure Table storage in the picture, LinqPad was no longer an option, and we shifted focus to Cloud Storage Studio, only to realize the limited and strange querying capabilities of CSS. With some tweaking to LinqPad we can get the comfortable old shoe of ad-hoc queries back for Windows Azure Table storage. Steps: 1. Start LinqPad. 2. Right-click in the query window and select “Query Properties”. 3. In the Additional References, add references to Microsoft.WindowsAzure.StorageClient, System.Data.Services.Client.dll and the assembly containing the implementation of the DataServiceContext class tied to the Windows Azure table storage. 4. In the additional namespace imports, import the same three namespaces mentioned above. 5. Then we need to provide the following details: a. Table storage account name and shared key. b. The DataServiceContext-implementing class in your code. c. A LINQ query. e.g.         var storageAccountName = "myStorageAccount";  // Enter valid storage account name         var storageSharedKey = "mysharedKey"; // Enter valid storage account shared key         var uri = new System.Uri("http://table.core.windows.net/");         var storageAccountInfo = new CloudStorageAccount(new StorageCredentialsAccountKey(storageAccountName, storageSharedKey), false);         var serviceContext = new TweetPollDataServiceContext(storageAccountInfo); // Specify the DataServiceContext implementation         // The query         var query = from row in serviceContext.Table                     select row;         query.Dump(); Thanks LinqPad! Technorati Tags: LinqPad, Azure Table Storage, Linq
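    As a postscript: once the context resolves, any composable LINQ query can be dumped the same way. For example (a sketch: serviceContext and its Table entity set are the names from the snippet above, and the PartitionKey value is purely illustrative):

    // Filter server-side and pull back only the first 20 entities.
    var recent = (from row in serviceContext.Table
                  where row.PartitionKey == "2010-06"
                  select row).Take(20);
    recent.Dump();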

    Read the article

  • SQL SERVER – Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video

    - by pinaldave
    I often see developers executing unplanned code on the production server when they actually want to execute it on the development server. Developers and DBAs get confused because when they use SQL Server Management Studio (SSMS) they forget to pay attention to the server they are connecting to. It is very easy to fix this problem: you can select a different color for each server. Once you have a different color for each server in the status bar, it will be easier for developers to notice the server against which they are about to execute the script. Personally, when I work on SQL Server development, here is the color code I follow: green for my development server, blue for my staging server and red for my production server. Honestly, the specific colors do not signify much; a different color for each server is the key here. More Tips on SSMS in SQL in Sixty Seconds: Generate Script for Schema and Data in SQL Server – SQL in Sixty Seconds #021  Remove Debug Button in SQL Server Management Studio – SQL in Sixty Seconds #020  Three Tricks to Comment T-SQL in SQL Server Management Studio – SQL in Sixty Seconds #019  Importing CSV into SQL Server – SQL in Sixty Seconds #018   Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times that the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let’s revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we’re looping through an image, and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we’re restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single core system from supplying zero; without this check, we’d potentially cause an exception.  I also did not hard code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.  
The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let’s revisit our declarative data parallelism sample from Part 6: double min = collection.AsParallel().Min(item => item.PerformComputation()); Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method: int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1); double min = collection .AsParallel() .WithDegreeOfParallelism(maxThreads) .Min(item => item.PerformComputation()); This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query sequentially.  This occurs because many queries, if parallelized, would actually cause an overall slowdown compared to a serial processing equivalent.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN): Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices. Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order. Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)). Queries that contain Concat, unless it is applied to indexable data sources. Queries that contain Reverse, unless applied to an indexable data source. If a query matches one of these patterns, PLINQ will run the query on a single thread.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Select(i => i.PerformComputation()) .Reverse(); Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead.  IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.  
There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest.  However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering.  This can be done by changing our above routine to: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.NotBuffered) .Select(i => i.PerformComputation()) .Reverse(); On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.FullyBuffered) .Select(i => i.PerformComputation()) .Reverse(); Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.

    Read the article

  • SQL SERVER – A Puzzle – Illusion – Confusion – April Fools’ Day

    - by pinaldave
    Today is April 1st, and just like every other year, I like to bring something interesting and light for the day. At least there should be days in everyone’s life when they can take it easy. Here is a quick puzzle for you, and I believe you will feel extremely smart if you can figure out the reasoning behind the result. Run the following in SQL Server Management Studio and observe the output: SELECT 30.0/(-2.0)/5.0; SELECT 30.0/-2.0/5.0; Here are a few questions for you: 1) What will be the result of the above two queries? 2) Why? If you think you can figure out the result without executing them – I still encourage you to execute BOTH of them in SSMS and see if they give you the same result or different results. Well, now I am waiting for your answer here – why? I often post similar things on my facebook page http://facebook.com/SQLAuth – you are welcome to play with me there. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Integrate Bing Search API into ASP.Net application

    - by sreejukg
    A couple of months back, I wrote an article about how to integrate the Bing search engine (API 2.0) with an ASP.Net website. You can refer to the article here: http://weblogs.asp.net/sreejukg/archive/2012/04/07/integrate-bing-api-for-search-inside-asp-net-web-application.aspx Things are changing rapidly in the tech world and Bing has also changed! The Bing Search API 2.0 will work until August 1, 2012; after that it will not return results. Shocked? Don’t worry: the API has moved to the Windows Azure Marketplace, where you can sign up and continue using it, and there is a free version available based on your usage. In this article, I am going to explain how you can integrate the new Bing API that is available in the Windows Azure Marketplace with your website. You can access the Windows Azure Marketplace from the below link: https://datamarket.azure.com/ There are lots of applications available for you to subscribe to and use. Bing is one of them. You can find the new Bing Search API at the below link: https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44 To get access to the Bing Search API, first you need to register an account with the Windows Azure Marketplace. Sign in to the Windows Azure Marketplace site using your Windows Live account. Once you sign in with your Windows Live account, you need to register for a Windows Azure Marketplace account. On the Windows Azure Marketplace, you will see the sign in button at the top right of the page. Clicking on the sign in button will take you to the Windows Live ID authentication page. You can enter a Windows Live ID here to log in. Once logged in you will see the registration page for the Windows Azure Marketplace as follows. You can agree or disagree to the email address usage by Microsoft. I believe selecting the check box means you will get email about what is happening in the Windows Azure Marketplace. Click on the Continue button once you are done. On the next page, you should accept the terms of use; it is not optional, you must agree to the terms and conditions. Scroll down the page, select the I agree checkbox and click on the Register button. Now you are a registered member of the Windows Azure Marketplace and can subscribe to data applications. In order to use the Bing API in your application, you must obtain your Account Key; in the previous version of Bing an API key was required, but the current version uses an Account Key instead. Once you are logged in to the Windows Azure Marketplace, you can see “My Account” in the top menu; go to the “My Account” section. From the My Account section, you can manage your subscriptions and Account Keys. Account Keys will be used by your applications to access the subscriptions from the marketplace. Click on the My Account link; you can see Account Keys in the left menu, and then Add an account key, or you can use the default Account Key available. Creating an account key is a very simple process. You can also remove the account keys you create if necessary. The next step is to subscribe to the Bing Search API. At this moment, Bing offers two APIs for search. The available options are as follows. 1. Bing Search API - https://datamarket.azure.com/dataset/5ba839f1-12ce-4cce-bf57-a49d98d29a44 2. Bing Search API – Web Results only - https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65 The difference is that the latter will give you only web results, whereas with the former you can specify the source type such as image, video, web, news etc. 
    Carefully choose the API based on your application requirements. In this article, I am going to use the Web Results Only API, but the steps are similar for both. Go to the API page https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65; you can see the subscription options on the right side, and at the bottom of the page you can see the free option. Since I am going to use the free option, just click the Sign Up link for it. Select the I agree check box and click on the Sign Up button. You will get a receipt page that details your subscription. Now you are ready to use the Bing Search API – Web Results. The next step is to integrate the API into your ASP.Net application. If you go to the Search API page (as well as the receipt page), you can see a .Net C# Class Library link; click on the link and you will get a code file named “BingSearchContainer.cs”. In the following sections I am going to demonstrate the use of the Bing Search API from an ASP.Net application. Create an empty ASP.Net web application. In the solution explorer, the application will look as follows. Now add the downloaded code file (“BingSearchContainer.cs”) to the project. Right click your project in solution explorer, Add -> Existing Item, then browse to the downloaded location, select the “BingSearchContainer.cs” file and add it to the project. To build the code file you need to add a reference to the following library: System.Data.Services.Client. You can find the library in the .Net tab when you select Add -> Reference. Try to build your project now; it should build without any errors. Add an ASP.Net page to the project. I have included a text box, a button and a grid view on the page. The idea is to search the text entered and display the results in the grid view. The page will look in the Visual Studio Designer as follows. The markup of the page is as follows. In the button click event handler for the search button, I have used the following code. Now run your project, enter some text in the text box and click the Search button; you will see the results coming from Bing, cool. I entered the text “Microsoft” in the textbox, clicked on the button and got the following results. Searching Specific Websites If you want to search a particular website, you pass the site URL with site:<site url name>, and if you have more sites, use a pipe (|). E.g. the following search query site:microsoft.com | site:adobe.com design will search for the word design and return results from microsoft.com and adobe.com. See the sample code that searches only microsoft.com for the text entered in the above sample. var webResults = bingContainer.Web("site:www.Microsoft.com " + txtSearch.Text, null, null, null, null, null, null); Paging the results returned by the API By default the Bing API will return 100 results based on your query. The default code file that you downloaded from Bing doesn’t include any option for paging. You can modify the downloaded code to perform this paging. The Bing API supports two parameters: $top (the number of results to return) and $skip (the number of records to skip). So if you want the 3rd page of results with page size = 10, you need to pass $top = 10 and $skip = 20. Open BingSearchContainer.cs in the editor. You can see the Web method in it as follows: public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options) { In the method signature, I have added two more parameters: public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options, int resultCount, int pageNo) { and in the method, you need to pass the parameters to the query variable: query = query.AddQueryOption("$top", resultCount); query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount); return query; Note that I didn’t perform any validation, but you need to check conditions such as resultCount and pageNo being greater than or equal to 1. If the parameters are not valid, the Bing Search API will throw an error. The modified method, with these changes, is sketched below.
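    (The original post showed the modified method as a screenshot, which did not survive here; the following is a hedged reconstruction. The CreateQuery call and the option-building for the other parameters are assumptions intended to match the generated code, which stays unchanged.)

    public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult,
        Double? Latitude, Double? Longitude, String WebFileType, String Options,
        int resultCount, int pageNo)
    {
        DataServiceQuery<WebResult> query = this.CreateQuery<WebResult>("Web");
        query = query.AddQueryOption("Query", "'" + Query + "'");
        // ... generated AddQueryOption calls for Market, Adult, etc. elided ...
        // New: OData paging options.
        query = query.AddQueryOption("$top", resultCount);                 // page size
        query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount); // records to skip
        return query;
    }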
    Now see the following code in the SearchPage.aspx.cs file: protected void btnSearch_Click(object sender, EventArgs e) {     var bingContainer = new Bing.BingSearchContainer(new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));     // replace this value with your account key     var accountKey = "your key";     // the next line configures the bingContainer to use your credentials.     bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);     var webResults = bingContainer.Web("site:microsoft.com " + txtSearch.Text, null, null, null, null, null, null, 3, 2);     lstResults.DataSource = webResults;     lstResults.DataBind(); } This code will return 3 results starting from the second page (i.e. it skips the first 3 results). See the result page as follows. Bing provides complete integration to its offerings. When you develop search-based applications, you can use the power of Bing to perform the search. Integrating the Bing Search API into an ASP.Net application is a simple process, and without investing much time you can develop a good search-based application. Make sure you read the terms of use before designing the application and decide which API usage is suitable for you. Further reading BING API Migration Guide http://go.microsoft.com/fwlink/?LinkID=248077 Bing API FAQ http://go.microsoft.com/fwlink/?LinkID=252146 Bing API Schema Guide http://go.microsoft.com/fwlink/?LinkID=252151

    Read the article

  • SQL SERVER – Solution – Challenge – Puzzle – Usage of FAST Hint

    - by pinaldave
    Earlier I posted a quick puzzle and received a wonderful response to it from Brad Schulz. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Challenge – Puzzle – Usage of FAST Hint. The question was under what conditions the FAST hint will be useful. In response to the puzzle, SQL Server expert Brad Schulz pointed me to his blog post, where he explains how the FAST hint can be useful. I strongly recommend reading his blog post over here. With Brad’s permission, I am reproducing the following queries here. He has come up with an example where the FAST hint improves performance. USE AdventureWorks GO DECLARE @DesiredDateAtMidnight DATETIME = '20010709' DECLARE @NextDateAtMidnight DATETIME = DATEADD(DAY,1,@DesiredDateAtMidnight) -- Query without FAST SELECT OrderID=h.SalesOrderID ,h.OrderDate ,h.TerritoryID ,TerritoryName=t.Name ,c.CardType ,c.CardNumber ,CardExpire=RIGHT(STR(100+ExpMonth),2)+'/'+STR(ExpYear,4) ,h.TotalDue FROM Sales.SalesOrderHeader h LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID=t.TerritoryID LEFT JOIN Sales.CreditCard c ON h.CreditCardID=c.CreditCardID WHERE OrderDate>=@DesiredDateAtMidnight AND OrderDate<@NextDateAtMidnight ORDER BY h.SalesOrderID; -- Query with FAST(10) SELECT OrderID=h.SalesOrderID ,h.OrderDate ,h.TerritoryID ,TerritoryName=t.Name ,c.CardType ,c.CardNumber ,CardExpire=RIGHT(STR(100+ExpMonth),2)+'/'+STR(ExpYear,4) ,h.TotalDue FROM Sales.SalesOrderHeader h LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID=t.TerritoryID LEFT JOIN Sales.CreditCard c ON h.CreditCardID=c.CreditCardID WHERE OrderDate>=@DesiredDateAtMidnight AND OrderDate<@NextDateAtMidnight ORDER BY h.SalesOrderID OPTION(FAST 10) Now when you check the execution plans for the two queries, you will find the following visible difference: the query with FAST returns results at a much lower cost. Thank you Brad for an excellent post and for teaching us something. I request all of you to read the original blog post written by Brad for much more information. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • OBIEE 11.1.1 - Disable Wrap Data Types in WebLogic Server 10.3.x

    - by Ahmed Awan
    By default, JDBC data type objects are wrapped with a WebLogic wrapper. This allows features like debug output and connection usage tracking to be handled by the server. The wrapping can be turned off by setting this value to false. This improves performance, in some cases significantly, and allows the application to use the native driver objects directly. Tip: How to Disable Wrapping in WLS Administration Console You can use the Administration Console to disable data type wrapping for the following JDBC data sources in the bifoundation_domain domain: bip_datasource, mds-owsm, EPMSystemRegistry.   To disable wrapping for each of these JDBC data sources: 1.     If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit. 2.     In the Domain Structure tree, expand Services, then select Data Sources. 3.     On the Summary of Data Sources page, click the data source name, for example “mds-owsm”. 4.     Select the Configuration: Connection Pool tab. 5.     Scroll down and click Advanced to show the advanced connection pool options. 6.     In Wrap Data Types, deselect the checkbox to disable wrapping. 7.     Click Save. 8.     To activate these changes, in the Change Center of the Administration Console, click Activate Changes. Important Note: This change does not take effect immediately—it requires the server to be restarted.

    Read the article

  • Entity Framework version 1 - Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To do eager loading, use projections (e.g. from c in context.Contacts select new { c, c.Addresses }) or use the Include query builder method (Include(“Addresses”)). If there is multi-level hierarchical data, then to eager load all the relationships use Include query builder methods like customers.Include("Order.OrderDetail") to include the Order and OrderDetail collections, or use customers.Include("Order.OrderDetail.Location") to include all the Order, OrderDetail and Location collections with a single Include statement. =========================================================================== If the query uses joins then the Include() query builder method will be ignored; use nested queries instead. If the query does projections then the Include() query builder method will be ignored. Use Address.ContactReference.Load() OR Contact.Addresses.Load() if you need to deferred-load a specific entity – this will result in extra round trips to the database. Only the Include method can be added to LINQ query methods. Any LINQ query method can be added to query builder methods. If you need to append a query builder method (other than Include) after a LINQ method, then cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the query builder method to it. =========================================================================== Query builder methods are Select, Where and Include methods which take Entity SQL as parameters, e.g. "it.StartDate, it.EndDate". When query builder methods do projection, they return ObjectQuery<DbDataRecord>; thus to iterate over this collection use contact.Item[“Name”].ToString(). When LINQ to Entities methods do projection, they return a collection of anonymous types --- thus the collection is strongly typed and supports IntelliSense. The EF Object Context can track changes only on Entities, not on anonymous types. If you use a Defining Query for an EntitySet then the EntitySet becomes read-only, since a Defining Query is the same as a View (which is treated as read-only by default). However, if you want to use this EntitySet for inserts/updates/deletes then we need to map stored procs (as created in the DB) to the insert/update/delete functions of the Entity in the Designer. You can use either the Execute method or the ToList() method to bind data to datasources/bindingsources. If you use the Execute method then remember that you can traverse through the ObjectResult<> collection (returned by Execute) only ONCE. In WPF use ObservableCollection to bind to data sources, for keeping track of changes and letting EF send updates to the DB automatically. Use extension methods to add logic to Entities, e.g. create extension methods for the EntityObject class. Or create a method in the ObjectContext partial class and pass the entity as a parameter, then call this method as desired from within each entity. ================================================================ DefiningQueries and Stored Procedures: For custom Entities, one can use a DefiningQuery or stored procedures. Thus the custom Entity collection will be populated using the DefiningQuery (of the EntitySet) or the sproc. If you use a sproc to populate the EntityCollection then the query execution is immediate; this execution happens on the server side, and any filters will be applied in the client app. 
    If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the EntitySet) will all be sent together as a single query to the DB, returning only filtered results. If the sproc returns results that cannot be mapped to an existing entity, then we first create the Entity/EntitySet in the CSDL using the Designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating an EntitySet in the SSDL for this dummy entity, use a TSQL statement that does not return any results, but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the Entity created in the SSDL uses the SQL data types and not .NET data types. If you are unable to open the EDMX file in the designer then note the errors ... they will give precise info on what is wrong. The third option is to simply create a native query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false">   <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities </CommandText></Function> Then map this Function to an existing Entity. This is a quick way to get a custom Entity which is a regular Entity with renamed columns or additional (computed) columns. The disadvantage of using this is that it will return all the rows from the defining query, and any filter (if defined) will be applied only on the client side (after getting all the rows from the DB). If you use a DefiningQuery instead then we can use that as a composable query. The fourth option (for mapping READ stored proc results to a non-existent Entity) is to create a View in the database which returns all the fields that the sproc also returns, then update the Model so that it contains this View as an Entity. Then map the read sproc to this View Entity. The other option would be to simply create the View and remove the sproc altogether. ================================================================ To execute a sproc that does not return an entity, use an EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return Entities from code; the only way is to use SSDL function calls using EntityCommand.  This changes with Entity Framework version 4, where you can return scalar types, complex types, entities or NonQuery. ================================================================ For a UDF created as a Function in the SSDL, we need to set the Name & IsComposable properties of the Function element. IsComposable is always false for sprocs; for UDFs set this to true. You cannot call a UDF "Function" from within code, since you cannot import a UDF Function into the CSDL Model (with version 1 of EF); only stored procedures can be imported and then mapped to an entity. ================================================================ Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (Insert, Update and Delete). Because Payment has an association to Reservation, we need to pass both the paymentId and reservationId to the Delete sproc, even though just the paymentId is the PK on the Payment table. ================================================================
When mapping insert, update and delete procs to an Entity, ensure that all three or none are mapped. Further, if you have a base class and derived class in the CSDL, then you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation (that base and derived entity sprocs must all be mapped) does not apply when you are mapping read stored procedures.... ================================================================ You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL; once created, you can map this Function to a CSDL Entity directly in the designer during Function Import. ================================================================ You can do Entity Splitting such that one Entity maps to multiple tables in the DB. E.g. the Customer Entity currently derives from the Contact Entity... in addition it also references the ContactPersonalInfo Entity. One can copy all properties from the ContactPersonalInfo Entity into the Customer Entity and then delete the ContactPersonalInfo entity; finally one needs to map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table, ContactPersonalInfo, to the Table Mapping)... this is called Entity Splitting. Thus now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables, even though you have a single Entity called Customer in the CSDL. =================================================================== There is Table per Type Inheritance, where one EDM Entity can derive from another EDM entity and absorb the inherited entity's properties; for example in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact Entity and absorbs all the properties of the Contact entity. Thus when you create a Customer Entity in code and then call context.SaveChanges, the Object Context will first create the TSQL to insert into the Contact table, followed by TSQL to insert into the Customer table. =================================================================== Then there is Table per Hierarchy Inheritance, where different types are created based on a condition (similar to applying a condition to filter an Entity to contain filtered records)... the difference being that the filter condition populates a new Entity type (derived from the base Entity). In the BreakAway sample the example is the Lodging Entity, which is an abstract Entity, and the Resort and NonResort Entities, which derive from the Lodging Entity; records are filtered based on the value of the Resort Boolean field. =================================================================== Then there is Table per Concrete Type Hierarchy, where we create a concrete Entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another Entity for the OldReservation table, even though both tables contain the same number of fields. 
The OldReservation Entity can then inherit from the Reservation Entity, and we configure the OldReservation Entity to remove all scalar properties from the Entity (since it inherits the properties from Reservation and filters based on the ReservationDate field). =================================================================== Complex Types (Complex Properties): Entities in EF can also contain complex properties (in addition to scalar properties), and these complex properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data-bound controls); these controls cannot use complex properties during databinding, they need scalar properties. So if an Entity contains complex properties and you need to bind those to list server controls, then use projections to return scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the Object Context, and hence changes to the projected collections bound to controls cannot be persisted). ObjectDataSource and EntityDataSource do account for complex properties: one can bind entities with complex properties to data source controls and they will be tracked for changes... with no additional plumbing needed to persist changes to these collections bound to controls. So data-bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as a datasource for entities that contain complex properties, so that changes to the datasource done using the GridView can be persisted to the DB (enabling the controls for updates).... if you cannot use the EntityDataSource you need to flatten the ComplexType properties using projections. With EF version 4, ComplexTypes are supported by the Designer and you can add/remove/compose Complex Types directly using the Designer. =================================================================== Conditional Mapping ... is like Table per Hierarchy Inheritance, where Entities inherit from a base class and conditions are then used to populate the EntitySet (called conditional mapping). Conditional mapping has limitations, since you can only use =, IS NULL and IS NOT NULL conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally, then use a QueryView (or possibly a Defining Query) to create a read-only entity. QueryViews are read-only by default... the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext; however, the ObjectContext cannot create insert/update/delete TSQL statements for these Entities when SaveChanges is called, since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the Designer. =================================================================== Difference between QueryView and Defining Query: a QueryView is defined in the mapping (MSL) file/section of the EDM XML, whereas the DefiningQuery is defined in the store schema (SSDL). A QueryView is written using Entity SQL and is thus database agnostic and can be used against any database/data layer. A DefiningQuery is written in the database's language, i.e. TSQL or PL/SQL, thus you have more control. =================================================================== Performance: Lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default. 
If you need to turn it off, use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling your queries using the CompiledQuery.Compile method, as sketched below.
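    Here is a minimal sketch of precompiling a query with CompiledQuery.Compile (the BAEntities context and the Customers/Address property names are illustrative, borrowed from the BreakAway-style samples above, not a definitive API):

    using System;
    using System.Data.Objects;
    using System.Linq;

    static class CustomerQueries
    {
        // Compiled once; later invocations skip the LINQ-to-Entity-SQL translation cost.
        public static readonly Func<BAEntities, string, IQueryable<Customer>> ByCity =
            CompiledQuery.Compile((BAEntities ctx, string city) =>
                ctx.Customers.Where(c => c.Address.City == city));
    }

    // Usage:
    // using (var ctx = new BAEntities())
    // {
    //     var customers = CustomerQueries.ByCity(ctx, "Seattle").ToList();
    // }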

    Read the article

  • The HTG Guide to Using a Bluetooth Keyboard with Your Android Device

    - by Matt Klein
    Android devices aren’t usually associated with physical keyboards. But since Google is now bundling its QuickOffice app with the newly released KitKat, it seems inevitable that at least some Android tablets (particularly 10-inch models) will take on more productivity roles. In recent years, physical keyboards have been rendered obsolete by swipe-style input methods such as Swype and Google Keyboard. Physical keyboards tend to make phones thick and plump, and that won’t fly today when thin (and even flexible and curved) is in vogue. So you’ll be hard-pressed to find smartphone manufacturers launching new models with physical keyboards, relegating sliders to a past chapter in mobile phone evolution. It makes sense to ditch the clunky keyboard phone in favor of a lighter, thinner model. You’re going to carry it around in your pocket or purse all day, so why have that extra bulk and weight?
    That said, there is sound logic behind pairing tablets with keyboards. Microsoft continues to plod forward with its Surface models, and while critics continue to lavish praise on the iPad, its functionality is obviously enhanced and extended when you add a physical keyboard. Apple even has an entire page devoted specifically to iPad-compatible keyboards. But an Android tablet and a keyboard? Does such a thing even exist? They do, actually. There are docking keyboards and keyboard/case combinations, there’s the Asus Transformer family, and Logitech markets a Windows 8 keyboard that speaks “Android”, to name just a few. So we know that keyboard products designed to work with Android exist, but what about an everyday Bluetooth keyboard you might use with Windows or OS X? How-To Geek wanted to look at how viable it is to use such a keyboard with Android. We conducted some research and examined some lists of Android keyboard shortcuts. Most of what we found was long outdated: many of the shortcuts don’t even apply anymore, while others just didn’t work. Regardless, after a little experimentation and a dash of customization, it turns out using a keyboard with Android is kind of fun, and who knows, maybe it will catch on.
    Setting things up
    Setting up a Bluetooth keyboard with Android is very easy. First, you’ll need a Bluetooth keyboard and, of course, an Android device, preferably running version 4.1 (Jelly Bean) or higher. For our test, we paired a second-generation Google Nexus 7 running Android 4.3 with a Samsung Series 7 keyboard.
    In Android, enable Bluetooth if it isn’t already on. We’d like to note that if you don’t normally use Bluetooth accessories and peripherals with your Android device (or any device, really), it’s best practice to leave Bluetooth off because, like GPS, it drains the device’s battery more quickly. To enable Bluetooth, simply go to “Settings” -> “Bluetooth” and tap the slider button to “On”. To set up the keyboard, make sure it is on, then tap “Bluetooth” in the Android settings. On the resulting screen, your Android device should automatically search for and hopefully find your keyboard. If you don’t get it right the first time, simply turn the keyboard on again and tap “Search for Devices” to try again. If it still doesn’t work, make sure you have fresh batteries and the keyboard isn’t paired to another device. If it is, you will need to unpair it before it will work with your Android device (consult your keyboard manufacturer’s documentation or Google it if you don’t know how to do this).
    When Android finds your keyboard, select it under “Available Devices”, and you should be prompted to type in a code. If successful, you will see that the device is now “Connected”, and you’re ready to go. If you want to test things out, try pressing the “Windows” key (“Apple” or “Command”) + ESC, and you will be whisked to your Home screen.
    So, what can you do?
    Traditional Mac and Windows users know there’s usually a keyboard shortcut for just about everything (and if there isn’t, there are all kinds of ways to remap keys to perform a variety of commands, tasks, and functions). So where does Android fall in terms of baked-in keyboard commands? The answer is: kind of enough, but not too much. There are definitely established combos you can use to get around, but they aren’t well documented, and there doesn’t appear to be any one authority on what they are. Still, there is enough keyboard functionality in Android to make it a viable option, if only for those times when you need to get something done (a long e-mail or an important document) and an on-screen keyboard simply won’t do.
    It’s important to remember that Android is, and likely always will be, a touch-first interface. That said, it does make some concessions to physical keyboards. In other words, you can get around Android fairly well without having to lift your hands off the keys, but you will still have to tap the screen regularly, unless you add a mouse. For example, you can wake your device by tapping a key rather than pressing its power button. However, if your device is slide- or pattern-locked, you’ll have to use the touchscreen to unlock it; a password or PIN, however, works seamlessly with a keyboard. Other things, like widgets and app controls and features, have to be tapped. You get the idea.
    Keyboard shortcuts and navigation
    As we said, baked-in keyboard shortcut combos aren’t necessarily abundant or apparent. The one thing you can always do is search: any time you want to Google something, start typing from the Home screen and the search screen will automatically open and begin displaying results. Other than that, here is what we were able to figure out:
    ESC = go back
    CTRL + ESC = menu
    CTRL + ALT + DEL = restart (no questions asked)
    ALT + SPACE = search page (say “OK Google” to voice search)
    ALT + TAB (ALT + SHIFT + TAB) = switch tasks
    Also, if you have designated volume function keys, those will probably work too. There are also some dedicated app shortcuts, such as calculator, Gmail, and a few others:
    CMD + A = calculator
    CMD + C = contacts
    CMD + E = e-mail
    CMD + G = Gmail
    CMD + L = Calendar
    CMD + P = Play Music
    CMD + Y = YouTube
    Overall, it’s not a long, comprehensive list, and there are no dedicated keyboard combos for the full array of Google’s products. Granted, it’s hard to imagine getting a lot of mileage out of a keyboard with Maps, but with something like Keep, you could type out long, detailed lists on your tablet and then view them on your smartphone when you go out shopping.
    You can also use the arrow keys to navigate over your Home screen shortcuts and open the app drawer. When something on the screen is selected, it will be highlighted in blue; press “Enter” to open your selection. Additionally, if an app has its own set of shortcuts (e.g. Gmail has quite a few unique shortcuts, as does Chrome), some of them (though not many) will work in Android (not for YouTube, though).
    Also, many “universal” shortcuts such as Copy (CTRL + C), Cut (CTRL + X), Paste (CTRL + V), and Select All (CTRL + A) work where needed, such as in instant messaging, e-mail, and social media apps.
    Creating custom application shortcuts
    What about custom shortcuts? When we were researching this article, we were under the impression that it was possible to assign keyboard combinations to specific apps, as you could on older Android versions such as Gingerbread. This no longer seems to be the case, and nowhere in “Settings” could we find a way to assign hotkey combos to any of our favorite, oft-used apps or functions.
    If you do want custom keyboard shortcuts, what can you do? Luckily, there’s an app on Google Play that allows you to, among other things, create custom app shortcuts. It is called External Keyboard Helper (EKH), and while there is a free demo version, the paid version is only a few bucks. We decided to give EKH a whirl, and through a little experimentation (and finally reading the developer’s how-to), we found we could map custom keyboard combos to just about anything.
    To do this, first open the application and you’ll see the main app screen. Don’t worry about choosing a custom layout or anything like that; you want to go straight to the “Advanced settings”. In the “Advanced settings”, select “Application shortcuts” to continue. You can have up to 16 custom application shortcuts. We are going to create a custom shortcut to the Facebook app, so we choose “A0” and, from the resulting list, Facebook. You can do this for any number of apps, services, and settings. As you can now see, the Facebook app has been linked to application-zero (A0).
    Go back to the “Advanced settings” and choose “Customize keyboard mappings”. You will be prompted to create a custom keyboard layout, so we choose “Custom 1”. When you create a custom layout, you can do a great many more things with your keyboard. For example, many keyboards have predefined function (Fn) keys, which you can map to your tablet’s brightness controls, toggling WiFi on/off, and much more.
    A word of advice: the application automatically remaps certain keys when you create a custom layout. This might mess up some existing keyboard combos. If you simply want to add some functionality to your keyboard, you can go ahead and delete EKH’s default changes and start your custom layout from scratch.
    To create a new combo, select “Add new key mapping”. For our new shortcut, we are going to assign the Facebook app to open when we key in “ALT + F”. To do this, we press the “F” key while in the “Scancode” field, and we see it returns a value of “33”. If we wanted to use a different key, we could press “Change” and scan another key’s numerical value. We now want to assign the “ALT” key to application “A0”, previously designated as the Facebook app. In the “AltGr” field, we enter “A0” and then “Save” our custom combo. And now we see our new application shortcut: as long as we’re using our custom layout, every time we press “ALT + F”, the Facebook app will launch.
    External Keyboard Helper extends far beyond simple application shortcuts, and if you are looking for deeper keyboard customization options, you should definitely check it out. Among other things, EKH also supports dozens of languages, lets you quickly switch between layouts using a key or combo, adds up to 16 custom text shortcuts, and much more! It can be had on Google Play for $2.53 for the full version, but you can try the demo version for free.
    More extensive documentation on how to use the app is also available.
    Android? Keyboard? Sure, why not?
    Unlike traditional desktop operating systems, you don’t need a physical keyboard and mouse to use a mobile operating system. You can buy an iPad, Nexus 10, or Galaxy Note and never need another accessory or peripheral; they work as intended right out of the box. It’s even possible you could write the next great American novel on one of these devices, though that might require a lot of practice and patience.
    That said, using a keyboard with Android is kind of fun. It’s not revelatory, but it does elevate the experience. You don’t even need to add customizations (though they are nice), because there are enough existing keyboard shortcuts in Android to make it usable. Plus, when it comes to inputting text, such as in an editor or terminal application, we fully advocate big, physical keyboards. Bottom line: if you’re looking for a way to enhance your Android tablet, give a keyboard a chance.
    Do you use your Android device for productivity? Is a physical keyboard an important part of your setup? Do you have any shortcuts that we missed? Sound off in the comments and let us know what you think.

    Read the article

< Previous Page | 266 267 268 269 270 271 272 273 274 275 276 277  | Next Page >