Search Results

Search found 19962 results on 799 pages for 'edo post'.


  • Why can I not echo out the value from a text input?

    - by user3684783
    I am using WordPress to build an auction website. Here is what the code looks like:

        <form method="post" action="<?php echo ProjectTheme_post_new_with_pid_stuff_thg($pid, '1'); ?>">
        <?php do_action('ProjectTheme_step1_before_title'); ?>
        <!--////////// Project Title /////////////-->
        <li>
          <h2><?php echo __('Your Project Title', 'ProjectTheme'); ?>: <img src="../../images/help-icon.png" width="16" height="16" id="showhelp1"/></h2>
          <div id="help1">Your Project Title should be informative and brief.</div>
          <p><input type="text" style="width:90%;" class="do_input" name="project_title" value="Enter an informative & brief title..." onfocus="this.value = this.value=='Enter an informative & brief title...'?'':this.value;" onblur="this.value = this.value==''?'Enter an informative & brief title...':this.value;" /></p>
          <input type="submit" class="post-button" name="project_submit1" value="<?php _e("Next Step", 'ProjectTheme'); ?> &raquo;" />
        </form>

    I am using a step-by-step form for users to fill in, and I wanted to build a preview page, so I tried:

        if (isset($_POST['project_submit1'])) {
            $Name = $_POST['project_title'];
            echo "$Name";
        }

    Nothing shows up.
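    One possible explanation (my own sketch, not an answer from the thread): the form's action points at the next step's URL, so the $_POST check has to live in the template that renders that next step, not in the page that displayed the form. A minimal, hedged version of the preview check, using WordPress's own sanitize/escape helpers:

        if (isset($_POST['project_submit1']) && !empty($_POST['project_title'])) {
            $name = sanitize_text_field($_POST['project_title']); // WordPress helper: strips tags/extra whitespace
            echo esc_html($name);                                  // escape before output
        }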

    Read the article

  • Specific path to java class file when embedded into HTML; Help urgent

    - by Jeevanism
    This is a follow-up to another question of mine about an error with a Java applet embedded into CMS pages. Here is the issue:

    Problem: I have a website using the Concrete5 CMS, in which I have a page with an embedded Java applet class. The applet should show system information, and it works fine in simple, single HTML pages. But whenever I access my plugin test page created in Concrete5 (the page in which I embedded the applet), it shows a Java error. The error says "incompatible magic number".

    Observation: After a lot of searching through various tech forums, I finally found that the issue happens because the browser cannot load the Java class file - the class file path location is wrong. Below is the server access log from testing on my local machine:

        127.0.0.1 - - [20/Dec/2012:12:59:28 +0800] "GET /linuxhouse/index.php/techlab/java/testvm.class HTTP/1.1" 200 1896 "-" "Mozilla/4.0 (Linux 2.6.37.6-0.9-desktop) Java/1.6.0_29"
        127.0.0.1 - - [20/Dec/2012:12:59:29 +0800] "GET /linuxhouse/index.php/techlab/java/testvm.class HTTP/1.1" 200 1896 "-" "Mozilla/4.0 (Linux 2.6.37.6-0.9-desktop) Java/1.6.0_29"

    Clearly, the path to the Java class file is wrong, but I have no idea how to specify the exact path in the embed code. Is this a CMS-specific issue? I have disabled the Pretty URL feature of the CMS, but I still cannot find the solution. Here is the page that shows the Java error: http://www.linux-house.net/v3/techlab/plugin - please give some insight, this is urgent.
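    A hedged note and sketch (mine, not from the thread): an "incompatible magic number" usually means the bytes the JVM received were not a class file - here the request is routed through index.php, so the CMS likely serves page markup instead of the raw .class bytes. Giving the applet tag an explicit codebase that bypasses the CMS routing is one way to test this; the /java/ folder below is a placeholder for wherever testvm.class actually lives:

        <!-- codebase points the browser straight at the class folder,
             outside the CMS's index.php URL rewriting -->
        <applet code="testvm.class"
                codebase="http://www.linux-house.net/java/"
                width="400" height="300">
        </applet>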

    Read the article

  • Submitting form with Ajax

    - by Sidetracking
    So I'm completely lost on how to submit a form with Ajax. I'm fairly new to JavaScript, and hopefully I'm not in over my head. When I click submit on my form, nothing happens on my page, and nothing arrives in my SQL database where the info should be stored (I double-checked the processing form too). Here's the code if anyone's willing to help.

    JavaScript:

        $(document).ready(function() {
            $('#form').submit(function() {
                $.ajax({
                    url: "../process.php",
                    type: "post",
                    data: $(this).serialize()
                });
            });
        }

    HTML:

        <form name="contact" method="post" action="" id="form">
          <span id="input">
            <input type="text" name="first" maxlength="50" size="30" title="First Name" class="textbox">
            <input type="text" name="last" maxlength="80" size="30" title="Last Name" class="textbox">
            <input type="text" name="email" maxlength="80" size="30" title="Email" class="textbox">
            <textarea name="request" maxlength="1000" cols="25" rows="6" title="Request"></textarea>
          </span>
          <input type="submit" value="Submit" class="submit">
        </form>
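    Two likely fixes stand out (a sketch of mine, not a confirmed answer from the thread): the ready() call above is never closed - it ends with } instead of }); - so the script never runs at all, and the submit handler never cancels the browser's default submission, so the page would navigate away before the Ajax call completes. A corrected version:

        $(document).ready(function() {
            $('#form').submit(function(e) {
                e.preventDefault(); // stop the normal full-page form submission
                $.ajax({
                    url: "../process.php",
                    type: "post",
                    data: $(this).serialize()
                });
            });
        });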

    Read the article

  • No route matches [PUT] error in active_admin

    - by Alex
    In an active_admin partial I created a form:

        <%= semantic_nested_form_for @item, :url => admin_items_path(@item) do |f| %>
          <fieldset class="inputs">
            <ol>
              <%= f.input :category %></br>
              <%= f.input :title %>
              <%= f.input :photo1 %>
              <%= f.input :photo2 %>
            </ol>
          </fieldset>
          <%= f.fields_for :ItemColors do |i| %>
            <fieldset class="inputs">
              <ol>
                <%= i.input :DetailColor %>
                <%= i.input :size, :input_html => { :size => "10" } %>
                <%= i.link_to_remove "remove" %>
              </ol>
            </fieldset>
          <% end %>
          <%= f.link_to_add "add", :ItemColors %>
          <%= f.actions %>
        <% end %>

    Creating a new Item works fine, but when I update an existing item an error occurs, even though a matching route exists:

        No route matches [PUT] "/admin/items.150"   # 150 is the item_id

    rake routes:

        batch_action_admin_items POST   /admin/items/batch_action(.:format) admin/items#batch_action
                     admin_items GET    /admin/items(.:format)              admin/items#index
                                 POST   /admin/items(.:format)              admin/items#create
                  new_admin_item GET    /admin/items/new(.:format)          admin/items#new
                 edit_admin_item GET    /admin/items/:id/edit(.:format)     admin/items#edit
                      admin_item GET    /admin/items/:id(.:format)          admin/items#show
                                 PUT    /admin/items/:id(.:format)          admin/items#update
                                 DELETE /admin/items/:id(.:format)          admin/items#destroy

    Please help me solve this problem.

    UPDATE: I found the cause, but not yet the fix: the update sends PUT "/admin/items" when it should send PUT "/admin/items/some_id". Any ideas?
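    A hedged sketch of the usual fix: admin_items_path is the collection route, so the record's id gets tacked on as a format suffix (".150"). Passing the namespaced record itself lets Rails pick POST /admin/items for new records and PUT /admin/items/:id for existing ones:

        <%= semantic_nested_form_for [:admin, @item] do |f| %>
          <%# same fields as above %>
        <% end %>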

    Read the article

  • Why does my tracking service freeze when the phone moves?

    - by user2878181
    I have developed a service that includes a timer task and runs every 5 minutes to keep a tracking record of the device; every five minutes it adds a record to the database. My service works fine when the phone is not moving, i.e. it adds records every 5 minutes as it should. But I have noticed that when the phone is on the move, it updates the points only after 10 or 20 minutes - i.e. whenever the user stops along the way. Does the service freeze on the move? If yes, how does WhatsApp Messenger manage it? Please help! Here is my onStart method:

        @Override
        public void onStart(Intent intent, int startId) {
            Toast.makeText(this, "My Service Started", Toast.LENGTH_LONG).show();
            Log.d(TAG, "onStart");
            mLocationClient.connect();

            final Handler handler_service = new Handler();
            timer_service = new Timer();
            TimerTask thread_service = new TimerTask() {
                @Override
                public void run() {
                    handler_service.post(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                // some function of tracking
                            } catch (Exception e) {
                                // TODO Auto-generated catch block
                            }
                        }
                    });
                }
            };
            timer_service.schedule(thread_service, 1000, service_timing);

            // sync thread
            final Handler handler_sync = new Handler();
            timer_sync = new Timer();
            TimerTask thread_sync = new TimerTask() {
                @Override
                public void run() {
                    handler_sync.post(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                // connecting to the central server for updation
                                Connect();
                            } catch (Exception e) {
                                // TODO Auto-generated catch block
                            }
                        }
                    });
                }
            };
            timer_sync.schedule(thread_sync, 2000, sync_timing);
        }
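    A hedged guess at the cause, with a sketch: when the phone is moving in a pocket, the screen is off and the CPU may sleep, so Timer/Handler callbacks stop firing until something wakes the device (such as the user stopping to use the phone). One common mitigation is holding a partial wake lock while tracking, at the cost of battery; the tag string below is arbitrary, and this is an illustration rather than a drop-in fix for the service above:

        PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
        PowerManager.WakeLock wakeLock =
                pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "tracker:gps");
        wakeLock.acquire();   // keep the CPU running while the timers do their work
        // ... periodic tracking ...
        wakeLock.release();   // always release when tracking stops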

    Read the article

  • Wordpress curl save Images

    - by Jeton Ramadani
    I am working on saving images from external sites into a folder in my WordPress theme, and I was wondering whether it is OK to call curl twice, or whether it can be done with one call. Example:

        $data = get_url('http://www.veoh.com/watch/v19935546Y8hZPgbZ'); // first curl instance: fetch the page
        preg_match('/fullHighResImagePath="(.*?)"/', $data, $thumbnail); // find the image in the content
        savePhoto($thumbnail, $post->ID); // second curl instance: save the image

        function get_url($url) {
            $user_agent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2)";
            $ytc = curl_init();                           // initialize curl handle
            curl_setopt($ytc, CURLOPT_URL, $url);         // set url to post to
            curl_setopt($ytc, CURLOPT_FAILONERROR, 1);    // fail on errors
            curl_setopt($ytc, CURLOPT_FOLLOWLOCATION, 1); // allow redirects
            curl_setopt($ytc, CURLOPT_RETURNTRANSFER, 1); // return into a variable
            curl_setopt($ytc, CURLOPT_PORT, 80);          // set the port number
            curl_setopt($ytc, CURLOPT_TIMEOUT, 15);       // times out after 15s
            curl_setopt($ytc, CURLOPT_HEADER, 1);         // include HTTP headers
            curl_setopt($ytc, CURLOPT_USERAGENT, $user_agent);
            $source = curl_exec($ytc);
            curl_close($ytc);
            $data = trim($source);
            return $data;
        }

        function savePhoto($remoteImage, $isbn) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $remoteImage);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
            $fileContents = curl_exec($ch);
            curl_close($ch);
            if (DIRECTORY_SEPARATOR == '/') {
                $absolute_path = dirname(__FILE__) . '/';
            } else {
                $absolute_path = str_replace('\\', '/', dirname(__FILE__)) . '/';
            }
            $newImg = imagecreatefromstring($fileContents);
            return imagejpeg($newImg, $absolute_path . "video_images/{$isbn}.jpg", 100);
        }
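    A hedged answer sketch: since the page HTML and the image live at two different URLs, two curl executions are unavoidable, but the handle setup can be shared in one helper. Note also two likely bugs in the original: preg_match fills $thumbnail as an array, so the image URL is $thumbnail[1], not $thumbnail; and CURLOPT_HEADER set to 1 would prepend HTTP headers to any binary fetch.

        function fetch_url($url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_TIMEOUT, 15);
            $data = curl_exec($ch);
            curl_close($ch);
            return $data;
        }

        $html = fetch_url('http://www.veoh.com/watch/v19935546Y8hZPgbZ');
        preg_match('/fullHighResImagePath="(.*?)"/', $html, $m);
        $image = fetch_url($m[1]); // the captured group, not the whole match array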

    Read the article

  • NSUTF8StringEncoding gives me this %0A%20%20%20%20%22http://example.com/example.jpg%22%0A

    - by user1530141
    So I'm trying to load pictures from Twitter. If I just use the URL from the JSON results without encoding it, dataWithContentsOfURL: gets a nil URL argument. If I encode it, I get %0A%20%20%20%20%22http://example.com/example.jpg%22%0A. I know I can use rangeOfString: or stringByReplacingOccurrencesOfString:, but can I be sure it will always be the same? Is there another way to handle this, and why is it happening with my Twitter response and not my Instagram response? I have also tried stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet] and it does nothing. This is the URL directly from the JSON:

        2013-11-08 22:09:31:812 JaVu[1839:1547] -[SingleEventTableViewController tableView:cellForRowAtIndexPath:] [Line 406] (
            "http://pbs.twimg.com/media/BYWHiq1IYAAwSCR.jpg"
        )

    Here is my code:

        if ([post valueForKeyPath:@"entities.media.media_url"]) {
            NSString *twitterString = [[NSString stringWithFormat:@"%@", [post valueForKeyPath:@"entities.media.media_url"]] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
            twitterString = [twitterString stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
            NSLog(@"%@", twitterString);
            if (twitterString != nil) {
                NSURL *twitterPhotoUrl = [NSURL URLWithString:twitterString];
                NSLog(@"%@", twitterPhotoUrl);
                dispatch_queue_t queue = kBgQueue;
                dispatch_async(queue, ^{
                    NSError *error;
                    NSData *data = [NSData dataWithContentsOfURL:twitterPhotoUrl options:NSDataReadingUncached error:&error];
                    UIImage *image = [UIImage imageWithData:data];
                    dispatch_sync(dispatch_get_main_queue(), ^{
                        [streamPhotoArray replaceObjectAtIndex:indexPath.row withObject:image];
                        cell.instagramPhoto.image = image;
                    });
                });
            }
        }
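    A hedged diagnosis with a sketch (mine, not from the thread): the parentheses in the log output are NSArray's description format - entities.media is an array, so valueForKeyPath: returns an array of URL strings, and stringWithFormat:@"%@" stringifies the whole array, newlines and quotes included (which then become %0A, %20 and %22 after percent-escaping). Pulling the first element out before building the NSURL avoids the problem:

        id mediaURL = [post valueForKeyPath:@"entities.media.media_url"];
        if ([mediaURL isKindOfClass:[NSArray class]] && [(NSArray *)mediaURL count] > 0) {
            mediaURL = [(NSArray *)mediaURL objectAtIndex:0]; // the actual URL string
        }
        NSURL *twitterPhotoUrl = [NSURL URLWithString:mediaURL];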

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
    After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I received another question from a blog reader. The question is as follows:

    Question: Are statistics sampled by default?

    Answer: Yes. The sampling rate can also be specified by the user, anywhere from a very low value to 100%. Let us run a small experiment to verify whether the auto update of statistics is left on, and examine whether the default statistics on a very large table are sampled or not.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Million Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 1000000 'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Show the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    Now let us observe the result of DBCC SHOW_STATISTICS. The result shows that the resultset is indeed sampled for a large dataset. The sampling percentage is based on the data distribution as well as the kind of data in the table. Before dropping the table, let us first check its size: the table is 35 MB. Now, let us run the same code with a smaller number of rows.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Hundred Thousand Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 100000 'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Show the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    You can see that Rows Sampled is now the same as the number of rows in the table; in this case the sample rate is 100%. Before dropping the table, let us again check its size: this time it is less than 4 MB. Let us compare the two resultsets for reference.

    Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB
    Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB

    The reason behind the sampling in Test 1 is that its data space is larger than 8 MB and therefore uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, no sampling happens.
    Sampling helps reduce excessive data scans; however, it can also reduce the accuracy of the statistics. Please note that this is just a sample test and in no way a benchmark; the results can differ across machines. There is a lot of other information that can be included on this subject, and I will write a detailed post covering it soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
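    A small illustration of the point that the sampling rate can be user-specified (my own addition, not part of the original experiment):

        -- Force a full scan, or an explicit sampling percentage
        UPDATE STATISTICS [dbo].[StatsTest] WITH FULLSCAN;
        UPDATE STATISTICS [dbo].[StatsTest] WITH SAMPLE 50 PERCENT;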

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn’t support building Microsoft’s own Setup and Deployment projects (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you’ll get the following warning:

        The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.

    This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format Microsoft defined quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact, the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to setup projects. VS 2010 does include a limited version of InstallShield, which promises to be more MSBuild-friendly and offers more or less the same features as VS setup projects; I hope to get a closer look at this installer project type soon.

    But how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command-line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You’ll want to build your setup projects after you have successfully compiled the other projects.

    1. Install Visual Studio 2010 on the build server(s).
    2. Open your build process template (remember to branch or copy the .xaml file before modifying it!).
    3. Locate the "Try to Compile the Project" activity.
    4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the "Run MSBuild for Project" activity.
    5. Drop an instance of the WriteBuildMessage activity inside the "Handle Standard Output" section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity), and set the Message property to stdOutput.
    6. Drop an instance of the WriteBuildError activity into the "Handle Error Output" section and set its Message property to errOutput.
    7. Select the InvokeProcess activity and set its parameters to run devenv against your setup project (the finished workflow is shown as a screenshot in the original post; see the sketch after this list).

    This will generate the MSI files, but they won’t be copied to the drop location. Because we are using devenv and not MSBuild, we have to do this explicitly:

    8. Drop a Sequence activity somewhere after the "Copy to Drop location" activity.
    9. Create a variable of type IEnumerable<String> in the Sequence activity and call it GeneratedInstallers.
    10. Drop a FindMatchingFiles activity into the Sequence activity and set its properties to match the generated installer files.
    11. Drop a ForEach<String> activity after the FindMatchingFiles activity and set its Value property to GeneratedInstallers.
    12. Drop an InvokeProcess activity inside the ForEach activity, with FileName: "xcopy.exe" and Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation).

    Save the build process template and check it in, then run the build and verify that the MSIs are built and copied to the drop location.
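    As a hedged sketch of what the InvokeProcess parameters in step 7 might boil down to on the command line - the solution, configuration, and project names below are placeholders:

        REM devenv.com is the console-mode Visual Studio executable
        devenv.com MySolution.sln /Build "Release|Any CPU" /Project Setup\MySetup.vdproj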
    Note 1: One of the drawbacks of using devenv like this in a team build is that since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to rebuild again. In TFS 2008 this was pretty simple to fix using the CustomizableOutDir property; in TFS 2010 the same feature is not available. Jim Lamb blogged about this recently - have a look at it if you run into this problem: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx

    Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.

    Read the article

  • SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28

    - by pinaldave
    I have been working a lot on wait stats and wait types recently. Last year, I asked blog readers to send me their servers' wait stats, and I appreciate their kind response - I received wait stats from many readers. I took each of the results and carefully analyzed them, and provided the necessary feedback to each person who sent me his wait stats and wait types. Based on that feedback, many of the readers tuned their servers. After a while I got further feedback on my recommendations, and again I collected wait stats. I recorded the wait stats and my recommendations and did further research; at some point there were more than 10 round trips of recommendations and suggestions. Finally, after six months of hands-on performance tuning, I have collected some real-world wisdom, which I now plan to share with all of you over here.

    Before anything else, please note that all of this is based on my personal observations and opinions. They may or may not match the theory available at other places, and some of the suggestions may not match your situation. Remember, every server is different, and consequently there is more than one solution to a particular problem. This series, however, is written with wait stats in mind; while working on various performance tuning consultations, I did many more things than just tuning wait stats.

    Today we will discuss how to capture wait stats. I use a diagnostic script created by my friend and SQL Server expert Glenn Berry to collect them:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS
        (SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
            100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
            ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
         FROM sys.dm_os_wait_stats
         WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK'
            ,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE','CHECKPOINT_QUEUE'
            ,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP','CLR_MANUAL_EVENT'
            ,'CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE','FT_IFTS_SCHEDULER_IDLE_WAIT'
            ,'XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN','SQLTRACE_INCREMENTAL_FLUSH_SLEEP'))
        SELECT W1.wait_type,
            CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
            CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
            CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 99 -- percentage threshold
        OPTION (RECOMPILE);
        GO

    This script uses the Dynamic Management View sys.dm_os_wait_stats to collect the wait stats. It omits the system-related wait types that are not useful for diagnosing performance bottlenecks. Additionally, note that OPTION (RECOMPILE) at the end of the query ensures that every time it runs, it retrieves fresh data rather than a cached result. This dynamic management view accumulates information from the time the SQL Server service was last restarted. You can also manually clear the wait stats using the following command:

        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

    Once the wait stats are collected, we can start analyzing them and try to see what is causing any particular wait type to reach a higher percentage than the others. Many wait stats are related to one another.
    When CPU pressure is high, all the CPU-related wait types show up on top; once that is fixed, they settle back to reasonable percentages. It is difficult to offer a universal solution, but there are good indications and good suggestions on how to solve this. I will keep this blog post updated as I post more details about wait stats and how I reduce them. The Books On Line reference is over here. Of course, I have selected February to run this wait stats series - I am already cheating by picking the shortest month. :) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Some VS 2010 RC Updates (including patches for Intellisense and Web Designer fixes)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    We are continuing to make progress on shipping Visual Studio 2010. I’d like to say a big thank you to everyone who has downloaded and tried out the VS 2010 Release Candidate, and especially to those who have sent us feedback or reported issues with it. This data has been invaluable in helping us find and fix remaining bugs before we ship the final release. Last month I blogged about a patch we released for the VS 2010 RC that fixed a bad intellisense crash issue. This past week we released two additional patches that you can download and apply to the VS 2010 RC to immediately fix two other common issues we’ve seen people run into.

    Patch that fixes crashes with tooltip invocation and when hovering over identifiers: The Visual Studio team recently released a second patch that fixes some crashes we’ve seen when tooltips are displayed - most commonly when hovering over an identifier to view a QuickInfo tooltip. You can learn more about this issue from this blog post, and download and apply the patch here.

    Patch that fixes issues with the Web Forms designer not correctly adding controls to the auto-generated designer files: The Visual Web Developer team recently released a patch that fixes issues where web controls are not correctly added to the .designer.cs file associated with the .aspx file - which means they can’t be programmed against in the code-behind file. This issue is most commonly described as “controls are not being recognized in the code-behind”, “editing existing .aspx files regenerates the .aspx.designer.(vb or cs) file and controls are now missing”, or “I can’t embed controls within the Ajax Control Toolkit TabContainer or the <asp:createuserwizard> control”. You can learn more about the issue here, and download the patch that fixes it here.

    Common cause of intellisense and IDE sluggishness on Windows XP, Vista, and Windows Server 2003/2008 systems: Over the last few months we’ve occasionally seen reports of tremendous slowness when typing and using intellisense within VS 2010, despite running on decent machines. It took us a while to track down the cause, but we have found that the common culprit seems to be that these machines don’t have the latest version of the UIA (Windows Automation) component installed. UIA 3 ships with Windows 7 and is a recommended Windows Update patch on XP and Vista (which is why we didn’t see the problem in our tests - our machines are patched with all recommended updates). Many systems (especially on XP) don’t automatically install recommended updates, though, and are running with older versions of UIA. This can cause significant performance slowdowns within the VS 2010 editor when large lists are displayed (for example, with intellisense). If you are running on Windows XP, Vista, or Windows Server 2003 or 2008 and are seeing any performance issues with the editor or IDE, please install the free UIA 3 update that can be downloaded from this page; if you scroll down the page you’ll find direct links to versions for each OS. Note that we are making improvements to the final release of VS 2010 so that we don’t have big perf issues when UIA 3 isn’t installed - and we are also adding a message within the IDE that will warn you if you don’t have UIA 3 installed and accessibility is activated.
Improved Text Rendering with WPF 4 and VS 2010 We recently made some nice changes to WPF 4 which improve the text clarity and text crispness over what was in the VS 2010/.NET 4 Release Candidate.  In particular these changes improve scenarios where you have a dark background with light text. You can learn more about these improvements in this WPF Team blog post.  These changes will be in the final release of VS 2010 and .NET 4. Hope this helps, Scott

    Read the article

  • Blogging: MacJournal & Windows Live Writer

    - by Jeff Julian
    One thing I have learned about using a Mac is that Apple does not produce very many free applications, and the ones it does produce are typically not full-featured; to get the full feature set you need to buy the upgraded version. For example, when it comes to photo editing and cataloging, iPhoto is not a solution for large files or RAW processing - you need Aperture, which is a couple hundred dollars. I am not complaining, because I like it when an application has a product team generating revenue with it; the chance of it being around longer seems higher.

    What is my point in all of this? Apple does not produce a product for blogging/journaling the way Microsoft does with Windows Live Writer. I love Windows Live Writer. If you are on a Windows box, it is a required tool in your toolbox if you publish to a blog. The cleanness of the interface, the integration with most blog APIs, and the ability to save locally or publish as a draft make capturing your thoughts for publishing now or later a very easy task. My hope is that Microsoft will port it to the Mac, but I don’t believe that will ever happen, as it is not a revenue-generating product and Microsoft doesn’t often port to the Mac besides Remote Desktop Connection and MSN Messenger.

    For my configuration, I used to use only Boot Camp on the two MacBook Pros I have owned in the past three years - because I’m a PC - but after four different rebuilds (typically due not to Windows but to Boot Camp or Parallels) I decided to move off the Boot Camp platform to VMware Fusion. That is a completely separate blog post that I should spec out in MacJournal, but I now always boot into Mac OS and use Fusion for my AJI Software VM or my clients’ VMs. It just seems to work better for me, and I have a very nice way to back up my Windows environments with VMware.

    Needless to say, my new laptop configuration needed a blogging tool that works natively on a Mac. I don’t like having to boot up a VM just to use Windows when I want to write a document or work on an image. Some would say: why not just use a Windows laptop and put the MBP on eBay? It is just a preference, and right now I like the Mac OS for day-to-day work.

    So in comes MacJournal, part of the current MacHeist package for $19.95 (MacJournal is normally $39.95). This product is definitely not WLW, but WLW is also missing some features I like in MacJournal. I hope the price point on MacJournal comes down, because I could see paying $19.95 for it; it is always hard for me to buy a piece of software for $39.95 when I can use something else - I am a cheapskate when it comes to software packages. If you are using a Mac, I suggest you drop what you are doing and pick up the MacHeist bundle today before it is over; if you are reading this later, download the trial and see if MacJournal is a solution for you. If you have any other suggestions that are as nice or cheaper, please comment.

    Product links:
    MacJournal by Mariners Software, $39.95 (part of the MacHeist bundle for $19.95, with only one day left)
    Windows Live Writer by Microsoft

    This post was created using MacJournal.

    [Update: The joys of formatting. Make sure, if you are a Geekswithblogs.net member, that you use this configuration to set up the Metablog formatting of paragraphs correctly]

    Read the article

  • ASP.NET AJAX, jQuery and AJAX Control Toolkit&ndash;the roadmap

    - by Harish Ranganathan
    The opinions mentioned herein are solely mine and do not reflect those of my employer.

    I have wanted to post this for a long time but couldn’t. I have been an ASP.NET developer for quite some time and have worked with versions 1.1, 2.0, 3.5 as well as the latest, 4.0. With ASP.NET 2.0 and Visual Studio 2005 came the era of AJAX and rich-UI web applications, and ASP.NET AJAX (codenamed “Atlas”) was released almost a year later as ASP.NET 2.0 AJAX Extensions. This release was supported further with Visual Studio 2005 Service Pack 1. The initial release of ASP.NET AJAX had 3 components:

    ASP.NET AJAX Library - the client library used internally by the server controls, plus scripts that can be used to write hand-coded AJAX-style pages
    ASP.NET AJAX Extensions - the server controls, i.e. ScriptManager, ScriptManagerProxy, UpdatePanel, UpdateProgress and Timer. These work pretty much like other server controls in terms of development and render their client-side behavior automatically
    AJAX Control Toolkit - a set of server controls that extend a behavior or a capability, e.g. AutoCompleteExtender

    The AJAX Control Toolkit was a separate download from CodePlex, while the first two were installed with ASP.NET AJAX Extensions. With Visual Studio 2008, ASP.NET AJAX made its way into the runtime, so one no longer needs to install the AJAX Extensions separately. However, the AJAX Control Toolkit remained a community project downloadable from CodePlex; by then, the toolkit had close to 30 controls. So the approach was clear: client-side programming using the ASP.NET AJAX Library, and a server-side model using the built-in controls (UpdatePanel) and/or the AJAX Control Toolkit.

    However, with Visual Studio 2008 Service Pack 1, we also added support for the ever more popular jQuery library. That is, you can use jQuery along with ASP.NET and also get intellisense for jQuery in Visual Studio 2008. Some of you who have played with Visual Studio 2010 Beta and .NET Framework 4 Beta would also have explored the new AJAX Library, which had a lot of templates, live bindings, etc. But overall, the roadmap ahead is much simplified:

    For client-side AJAX programming in ASP.NET using JavaScript, the recommendation is to use jQuery, which ships along with Visual Studio and provides intellisense as well.
    For server-side programming, you can use the built-in server controls like UpdatePanel as well as the AJAX Control Toolkit, which now has close to 40 controls. The toolkit remains a separate download at CodePlex; you can download the versions for different versions of ASP.NET at http://ajaxcontroltoolkit.codeplex.com/

    The Microsoft AJAX Library will still be available through CDN (Content Delivery Network) channels; you can view the CDN resources at http://www.asp.net/ajaxlibrary/CDN.ashx. Similarly, jQuery and the toolkit are available as CDN resources in case you choose not to download them and ship them as part of your application (see the sketch below).

    I think this makes AJAX development pretty simple. Earlier, having both the Microsoft AJAX Library and jQuery for client-side scripting made it confusing which one to use; with this roadmap, it is simple and clear. You can read more at http://ajax.asp.net. I hope this post provided some clarity on the AJAX roadmap as I could decipher it from the various product teams. Cheers!!!
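    As a hedged illustration of pulling jQuery from the Microsoft CDN rather than shipping it with the application - the exact version path below is an assumption, so check the CDN page above for current URLs:

        <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.2.min.js"
                type="text/javascript"></script>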

    Read the article

  • Using SQL Developer to Debug your Anonymous PL/SQL Blocks

    - by JeffS
    Everyone knows that SQL Developer has a PL/SQL debugger - check! Everyone also knows that it’s only set up for debugging standalone PL/SQL objects like functions, procedures, and packages, right? - NO! SQL Developer can also debug your Java stored procedures AND your standalone PL/SQL blocks. These bits of PL/SQL which do not live in the database are also known as ‘anonymous blocks.’ Anonymous PL/SQL blocks can be submitted to interactive tools such as SQL*Plus and Enterprise Manager, or embedded in an Oracle precompiler or OCI program. At run time, the program sends these blocks to the Oracle database, where they are compiled and executed. Here’s an example of something you might want help debugging:

        Declare
            x number := 0;
        Begin
            Dbms_Output.Put(Sysdate || ' ' || Systimestamp);
            For Stuff In 1..100 Loop
                Dbms_Output.Put_Line('Stuff is equal to ' || Stuff || '.');
                x := Stuff;
            End Loop;
        End;
        /

    With the power of remote debugging and unshared worksheets, we are going to be able to debug this anonymous block! The trick: we need to create a dummy stored procedure and call it in our anonymous block. Then we create an unshared worksheet and execute the script from there while the SQL Developer session is listening for remote debug connections. We step through the dummy procedure, and that takes us out to the calling anonymous block. Then we can use watches, breakpoints, and all that fancy debugger stuff!

    First things first, create this dummy procedure:

        create or replace procedure do_nothing is
        begin
            null;
        end;

    Then right-click your connection and select ‘Remote Debug.’ For an in-depth post on how to use the remote debugger, check out Barry’s excellent post on the subject. Open an unshared worksheet using Ctrl+Shift+N; this gives us a dedicated connection for the worksheet and any scripts or commands executed in it. Paste in the anonymous block you want to debug, and add a call to the dummy procedure on the first line of your BEGIN block, like so:

        Begin
            do_nothing();
            ...

    Then we need to set up the machine for remote debug for the listening session - basically, we connect to SQL Developer. You can do that via an environment variable, or you can just add this line to your script:

        CALL DBMS_DEBUG_JDWP.CONNECT_TCP( 'localhost', '4000' );

    where ‘localhost’ is the machine where SQL Developer is running and ‘4000’ is the port you started the debug listener on. OK, with that all set, now just RUN the script. Once the PL/SQL call is made, the debugger will be invoked and you’ll end up in the DO_NOTHING() object - debugging an anonymous block from SQL Developer is possible! If you step out of it, you’ll land in the script used to call the procedure, which is the script you want to debug; the anonymous block is opened in a new SQL Developer page. You can now step through the block, using watches and breakpoints as expected. I’m guessing your scripts are going to be a bit more complicated than mine, but this serves as a decent example to get you started. The original post includes a screenshot of a watch and breakpoint defined in the anonymous block being debugged - breakpoints, watches, and callstacks, oh my! For giggles, I created a breakpoint with a pass count of 90 for the FOR loop to see if it works. And of course it does.

    You Might Also Enjoy:
    Using Pass Counts to Turbo Charge Your PL/SQL Breakpoints
    SQL Developer Tip: Viewing REFCURSOR Output
    The PL/SQL Debugger Strikes Back: Episode V
    Debugging PL/SQL with SQL Developer: Episode IV
    How to find dependent objects in your PL/SQL Programs using SQL Developer
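    One hedged addition of my own (not covered in the post): the connecting user typically needs debug privileges before DBMS_DEBUG_JDWP will work; "scott" below is a placeholder schema:

        -- required to connect a debug session
        GRANT DEBUG CONNECT SESSION TO scott;
        -- only needed to debug code living in other schemas
        GRANT DEBUG ANY PROCEDURE TO scott;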

    Read the article

  • Looking into ASP.Net MVC 4.0 Mobile Development - part 1

    - by nikolaosk
    In this post I will look at how ASP.NET MVC 4.0 helps us create web solutions that target mobile devices. We all experience the magic that is the World Wide Web through mobile devices; millions of people around the world use tablets and smartphones to view the contents of websites, e-shops and portals. ASP.NET MVC 4.0 includes a new mobile project template and the ability to render a different set of views for different types of devices. There is also a new feature called browser overriding, which allows us to control exactly what a user is going to see from your web application regardless of what type of device he is using.

    In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine; it will work fine on Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.

    1) Launch VS 2012 and create a new application by going to File -> New Project -> ASP.NET MVC 4 Web Application, and then click OK. Have a look at the picture below.

    2) From the available templates select Mobile Application and then click OK. Have a look at the picture below.

    3) When I run the application I get the mobile view of the page. I would like to show you what a typical ASP.NET MVC 4.0 application looks like, so I will create a new, simple ASP.NET MVC 4.0 Web Application; when I run that application I get the normal page view. Have a look at the picture below: on the left is the mobile view and on the right the normal view. As you can see, we have more or less the same content in our mobile application (log in, register) compared with the normal ASP.NET MVC 4.0 application, but it is optimised for mobile devices.

    4) Let me explain how and when the mobile view is selected and finally rendered. There is a feature in MVC 4.0 called Display Modes, and with this feature the runtime selects a view. If we have two views in our application, e.g. contact.mobile.cshtml and contact.cshtml, the controller will at some point instruct the runtime to select and render a view named contact. The runtime will look at the browser making the request and determine whether it is a mobile or a desktop browser. So if a request comes from my iPhone Safari browser for a particular site and a mobile view exists, MVC 4.0 will select and render it; if there is no mobile view, the normal view will be rendered.

    5) I can run the ASP.NET MVC 4.0 (Internet Application) project I created earlier (not the first project, which was a mobile one) once more and see how it looks in the browser. If I want to view it with a mobile browser I must download an emulator like Opera Mobile; you can download Opera Mobile here. When I run the application I get the same view in both the desktop and the mobile browser. That was to be expected.
    Have a look at the picture below.

    6) Then I create another version of the layout view, _Layout.mobile.cshtml, in the Shared folder. I simply copy _Layout.cshtml into the same folder, rename the copy to _Layout.mobile.cshtml, and alter its contents. When I run the application again I get a different view in the desktop browser than in the Opera Mobile browser. Have a look at the picture below. The controller will instruct the ASP.NET runtime to select and render the view named _Layout.mobile.cshtml when the request comes from a mobile browser; the runtime knows that a browser is a mobile one through the ASP.NET browser capability provider. Hope it helps!!!
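    As a hedged aside (my own sketch, not from the post): Display Modes can also be extended with custom conditions in Application_Start, so a dedicated layout can target one device family; the "iPhone" check below is just an example condition, and DisplayModeProvider lives in the System.Web.WebPages namespace:

        // serve *.iPhone.cshtml views when the user agent mentions iPhone
        DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("iPhone")
        {
            ContextCondition = ctx => ctx.GetOverriddenUserAgent()
                .IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
        });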

    Read the article

  • Stuck with Apache2

    - by Gundars Meness
    I can't finish the Apache2 install, and I also cannot remove it. It has blocked my dpkg, so now I can't install or remove anything. I even tried a distro upgrade, but dpkg is still broken. How do I fix this and get a normal Apache2 running? Just for the heck of it:

        gundars@SR528:~$ sudo apt-get remove apache2-common
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package 'apache2-common' is not installed, so not removed
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        2 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up apache2.2-common (2.2.22-6ubuntu2) ...
        ERROR: Site default does not exist!
        dpkg: error processing apache2.2-common (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of apache2-mpm-prefork:
         apache2-mpm-prefork depends on apache2.2-common (= 2.2.22-6ubuntu2); however:
          Package apache2.2-common is not configured yet.
        dpkg: error processing apache2-mpm-prefork (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         apache2.2-common
         apache2-mpm-prefork
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        gundars@SR528:~$ sudo apt-get -f install apache2 apache2.2-common apache2-mpm-prefork
        [sudo] password for gundars:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        apache2 is already the newest version.
        apache2-mpm-prefork is already the newest version.
        apache2.2-common is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        4 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up apache2.2-common (2.2.22-6ubuntu2) ...
        ERROR: Site default does not exist!
        dpkg: error processing apache2.2-common (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of apache2-mpm-prefork:
         apache2-mpm-prefork depends on apache2.2-common (= 2.2.22-6ubuntu2); however:
          Package apache2.2-common is not configured yet.
        dpkg: error processing apache2-mpm-prefork (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of apache2:
         apache2 depends on apache2-mpm-worker (= 2.2.22-6ubuntu2) | apache2-mpm-prefork (= 2.2.22-6ubuntu2) | apache2-mpm-event (= 2.2.22-6ubuntu2) | apache2-mpm-itk (= 2.2.22-6ubuntu2); however:
          Package apache2-mpm-worker is not installed.
          Package apache2-mpm-prefork is not configured yet.
          Package apache2-mpm-event is not installed.
          Package apache2-mpm-itk is not installed.
         apache2 depends on apache2.2-common (= 2.2.22-6ubuntu2); however:
          Package apache2.2-common is not configured yet.
        dpkg: error processing apache2 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libapache2-mod-php5:
         libapache2-mod-php5 depends on apache2-mpm-prefork (>> 2.0.52) | apache2-mpm-itk; however:
          Package apache2-mpm-prefork is not configured yet.
          Package apache2-mpm-itk is not installed.
         libapache2-mod-php5 depends on apache2.2-common; however:
          Package apache2.2-common is not configured yet.
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: error processing libapache2-mod-php5 (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         apache2.2-common
         apache2-mpm-prefork
         apache2
         libapache2-mod-php5
        E: Sub-process /usr/bin/dpkg returned an error code (1)
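    A hedged workaround sketch (mine, based on the "ERROR: Site default does not exist!" line, which suggests the package's post-install script expects a default site file that is missing): recreate a stub for it, then let dpkg finish configuring the half-installed packages:

        # create the site file the postinst script appears to be looking for
        sudo touch /etc/apache2/sites-available/default
        # retry configuration of the half-installed packages
        sudo dpkg --configure -a
        sudo apt-get -f install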

    Read the article

  • Dependency Replication with TFS 2010 Build

    - by Jakob Ehn
    Some time ago, I wrote a post about how to implement dependency replication using TFS 2008 Build. We use this for library builds, where we set up a build definition for a common library and have the build check the resulting assemblies back into source control. The folder is then branched to the applications that need to reference the common library; see the above post for more details. Of course, we have reimplemented this feature in TFS 2010 Build, which results in a much nicer experience for the developer who wants to set up a new library build. Here is how it looks:

    There is a separate build process template for library builds registered in all team projects, and the following properties are used to configure a library build:

    Deploy Folder in Source Control is the server path where the assemblies should be checked in.
    DeploymentFiles is a list of files and/or extensions specifying which files to check in. The default here is *.dll;*.pdb, which means that all assemblies and debug symbols will be checked in. We can also type, for example, CommonLibrary.*;SomeOtherAssembly.dll in order to exclude other assemblies.

    You can also see that we are versioning the assemblies as part of the build. This is important, since the resulting assemblies will be deployed together with the referencing application.

    When the build executes, it will check whether the matching assemblies exist in source control; if not, it will add the files automatically. After the build has finished, we can see in the history of the TestDeploy folder that the build service account has in fact checked in a new version. Nice!

    The implementation of the library build process template is not very complicated; it is a combination of customization of the build process template and some custom activities. We use the generic TFActivity (http://geekswithblogs.net/jakob/archive/2010/11/03/performing-checkins-in-tfs-2010-build.aspx) to check files in and out, but for the part that checks whether a file exists and adds it to source control, it was easier to write a custom activity:

        public sealed class AddFilesToSourceControl : BaseCodeActivity
        {
            // Files to add to source control
            [RequiredArgument]
            public InArgument<IEnumerable<string>> Files { get; set; }

            [RequiredArgument]
            public InArgument<Workspace> Workspace { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                foreach (var file in Files.Get(context))
                {
                    if (!File.Exists(file))
                    {
                        throw new ApplicationException("Could not locate " + file);
                    }
                    var ws = this.Workspace.Get(context);
                    string serverPath = ws.TryGetServerItemForLocalItem(file);
                    if (!String.IsNullOrEmpty(serverPath))
                    {
                        if (!ws.VersionControlServer.ServerItemExists(serverPath, ItemType.File))
                        {
                            TrackMessage(context, "Adding file " + file);
                            ws.PendAdd(file);
                        }
                        else
                        {
                            TrackMessage(context, "File " + file + " already exists in source control");
                        }
                    }
                    else
                    {
                        TrackMessage(context, "No server path for " + file);
                    }
                }
            }
        }

    This build template is a very nice tool that makes it easy to do dependency replication with TFS 2010. Next, I will add functionality for automatically merging the assemblies (using ILMerge) as part of the build; we do this to keep the number of references to a minimum.

    Read the article

  • Visual Studio 2010 Productivity Power Tool Extensions

    - by ScottGu
    Last month I blogged about the Extension Manager that is built into VS 2010 - as well as about a cool VS 2010 PowerCommands extension that provides some extra features for Visual Studio. The Visual Studio 2010 Extension Manager provides an easy way for developers to quickly find and install extensions and plugins that enhance the built-in functionality of VS 2010.

    New VS 2010 Productivity Power Tools Release

    Earlier this week Jason Zander announced the availability of a new VS 2010 Productivity Power Tools release that includes a bunch of great new VS 2010 extensions providing cool new functionality for you to take advantage of. You can download and install the release for free here. Some of the code editor improvements it provides include:

    Entire Line Highlighting: makes it easier to track cursor location within the editor
    Entire Line Selection: triple-clicking a line in the code editor now selects the entire line (as in MS Word)
    Code Block Movement: Alt+Up/Down Arrow now moves selected code blocks up/down in the editor
    Consistent Tabs vs. Spaces: ensure consistent tab vs. space usage across your projects
    Colorized Parameters: it is now easier to see/identify method parameters
    Column Guide: you can now add vertical column guidelines to help with text alignment and sizes
    Align Assignments: makes it easier to line up multiple variable assignments within your code
    HTML Clipboard Support: copy/paste code from VS into an HTML buffer (useful for blogging!)
    Ctrl + Click Go to Definition: you can now hold down the Ctrl key and click a type to go to its definition

    It also includes several tab management improvements for managing document tabs within the IDE:

    Show Close Button in Tab Well: shows a close button in the document well for the active tab (like VS 2008 did)
    Colored Tabs: you can now select the color of each document tab by project or by regex
    Pinned Tabs: enables you to pin tabs to keep them always visible and available
    Vertical Tabs: you can now show document tabs vertically to fit more tabs than normal
    Remove Tabs by Usage Order: better behavior when adding new tabs and one needs to be hidden for space reasons
    Sort Tabs by Project: tabs can be sorted by the project they belong to, keeping them grouped together
    Sort Tabs Alphabetically: tabs can be sorted alphabetically

    And last - but not least - it includes a new and improved “Add Reference” dialog. The new Add Reference dialog caches assembly information, which means it loads within a second or two (note: the very first time it still loads assembly data, but it then caches it and makes it fast afterwards). The new Add Reference dialog also includes searching support, making it easier to find the assembly you are looking for. You can read more about all of the above improvements in Jason’s blog post about the release.

    New Visualization and Modeling Feature Pack Release

    Earlier this week we also shipped a new feature pack that adds additional modeling and code visualization features to VS 2010 Ultimate. You can download it here.
    The Visualization and Modeling Feature Pack includes a bunch of great new capabilities, including:

    Web Site Visualization: new support for generating a DGML visualization for ASP.NET projects
    C/C++ Native Code Visualization: new support for generating DGML diagrams for C/C++ projects
    Generate Code from UML Class Diagrams: you can now generate code from your UML diagrams
    Create UML Class Diagrams from Code: create UML diagrams from existing code bases
    Import UML from XML: import UML class, sequence, and use case elements from XMI 2.1 files
    Custom Validation Layer Rules: write custom code to create, modify, and validate layer diagrams

    Jason’s blog post covers more about these features as well.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very active in using Merge operations in my development; however, I have found out from my consultancy work and friends that these amazing operations are not utilized most of the time. Here is my attempt to bring the necessity of using the Merge operation to the surface one more time.

    MERGE is a feature, introduced in SQL Server 2008, that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; now, by using the MERGE statement, we can include the logic of such data changes in one statement that updates the data when it is matched and inserts it when it is unmatched. One of the most important advantages of the MERGE statement is that the entire data set is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE or DELETE); with the MERGE statement, all these activities are done in one pass of the database table.

    I wrote about these Merge operations earlier in my blog post SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. One of the readers asked how we know that this operator does everything in a single pass and is not called multiple times. Let us run the same example I used earlier; I am listing it here again for convenience.

        -- Let's create StudentDetails and StudentTotalMarks and insert some records.
        USE tempdb
        GO
        CREATE TABLE StudentDetails
        (
            StudentID INTEGER PRIMARY KEY,
            StudentName VARCHAR(15)
        )
        GO
        INSERT INTO StudentDetails VALUES(1,'SMITH')
        INSERT INTO StudentDetails VALUES(2,'ALLEN')
        INSERT INTO StudentDetails VALUES(3,'JONES')
        INSERT INTO StudentDetails VALUES(4,'MARTIN')
        INSERT INTO StudentDetails VALUES(5,'JAMES')
        GO
        CREATE TABLE StudentTotalMarks
        (
            StudentID INTEGER REFERENCES StudentDetails,
            StudentMarks INTEGER
        )
        GO
        INSERT INTO StudentTotalMarks VALUES(1,230)
        INSERT INTO StudentTotalMarks VALUES(2,255)
        INSERT INTO StudentTotalMarks VALUES(3,200)
        GO
        -- Select from Table
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Merge Statement
        MERGE StudentTotalMarks AS stm
        USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
            ON stm.StudentID = sd.StudentID
        WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
        WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
        WHEN NOT MATCHED THEN INSERT (StudentID, StudentMarks) VALUES (sd.StudentID, 25);
        GO
        -- Select from Table
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Clean up
        DROP TABLE StudentDetails
        GO
        DROP TABLE StudentTotalMarks
        GO

    The MERGE statement performs very well, and the result above is obtained. Let us check the execution plan for the Merge operator (you can click the image in the original post to enlarge it) and evaluate the plan for the Table Merge operator only. We can clearly see that the Number of Executions property shows the value 1, which makes it quite clear that in a single pass the Merge operation completes all Insert, Update and Delete operations. I strongly suggest that you use this operation in your development wherever possible; I have seen it implemented in many data warehousing applications.
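    As a side note (my own sketch, not part of the original post), an OUTPUT clause can be appended to the same MERGE to show which action each source row triggered - a handy way to watch the single-pass behavior:

        MERGE StudentTotalMarks AS stm
        USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
            ON stm.StudentID = sd.StudentID
        WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
        WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
        WHEN NOT MATCHED THEN INSERT (StudentID, StudentMarks) VALUES (sd.StudentID, 25)
        OUTPUT $action, inserted.StudentID AS inserted_id, deleted.StudentID AS deleted_id;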
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

    Read the article

  • Is HR/Recruitment Really Ready For Innovative Candidates?

    - by david.talamelli
Before I begin this blog post, I want to acknowledge that there are some great HR/Recruitment people out there who are innovative and are leading the way in using new means to successfully attract and connect with talented people. For those of you who fit in this category, please keep thinking outside the square – just because what you do may not be the norm doesn't mean it is bad. OK, with that acknowledgment out of the way: earlier this morning (I started this post Friday morning) I came across this online profile via a tweet from Philip Tusing. I love the information that Jason has put on his web-pages. Jason's work clearly demonstrates not only his skills and experience, but also how that experience will help an employer – the value-add of having him on your team. Looking at Jason's profile makes me wonder, though: is HR/Recruitment, in general terms, ready to deal with innovative candidates? Sure, most recruiters are online in some form or another, but how many actually have a process that is flexible enough to deal with someone who may not fit into it? Is your company's recruitment practice proactive enough to find Jason's web-pages? I am not sure what he is doing in terms of a job search, but if he is not mailing a resume or replying to ads on a job board, hopefully Jason comes up in some of the candidate searches you are doing. Once you find this information, would it fit nicely into your Applicant Tracking System or your database? If not, how much of the intangible information are you losing and potentially not passing on to a hiring manager? I think what has worked in the past will not necessarily work in the future. Candidates want to work somewhere they will be challenged and can learn and grow. If your HR/Recruitment processes don't convey this message, they could potentially turn away people who were once interested in your company. For example (and I have to admit I still do some of these things myself), after calling up and having a talk with a candidate, a company may say: 1) HR Question: Send me a copy of your resume – Candidate Reply: you actually already have my resume, the web-page is http:// 2) HR Question: Come in for a chat so we can get to know you – Candidate Reply: if this is the basis of the meeting, you already know me and my thoughts by looking at my online links (blog, portfolio, homepage, etc...) These questions, if not handled properly, could turn a candidate from being interested in your company to not being interested. They could demonstrate that your company is not social-media savvy, or give the impression that it is not really all that innovative. A candidate may think: if this company isn't able to take information I have provided in the public forum and use it, is it really a company I want to work for? I think that when liaising with candidates a company should utilise the information the person has provided in the public domain. A candidate may inadvertently give you answers to many of the questions you are seeking through their online presence, and save everyone time instead of having to fill out forms or paperwork. If you build this into your conversations with candidates, it becomes a much more individualised service and really demonstrates to a candidate that you are thinking of them as an individual.
Yes, I know we need to have processes in place, and I am not saying don't work to those processes – but don't let process take away a candidate's individuality. Don't let your process inadvertently scare away the top candidates that you may want in your company. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • Visual Studio &amp; TFS &ndash; List of addins, extensions, patches and hotfixes &ndash; Latest and Greatest

    - by terje
This post is a list of the addins and extensions we (I) recommend for use in Inmeta. The question comes up all the time – what to install, where the download sites are, and so on – so I thought it better to post the list here and keep it updated. The basics are Visual Studio 2010 connected to a Team Foundation Server 2010. The edition of Visual Studio I use is the Ultimate Edition, but as many stay with the Premium Edition, I've marked the extensions that only work with the Ultimate Edition. I've also split the group into Recommended (which means Required), Optional (which means Recommended) and Nice to Have (which means Optional). The focus is to get a setup which can be used for a complete coding experience across the whole ALM process. The Code Gallery is found either through the Tools/Extension Manager menu in Visual Studio or through this link. The ones to really download are those in the Recommended category; then consider the Optional ones based on your needs. The list of course reflects what I use for my work, so it is by no means complete, and for some of the tools there are equally useful alternatives. The components directly associated with Visual Studio from Microsoft should be common; see the Microsoft column.
Product | Available on Code Gallery | Latest Version | License | Rec/Opt/N2H | Applicable to | Microsoft
TFS Power Tools Sept 2010 | Complete setup msi on link, split into parts on CG | Sept 2010 | Free | Recommended | TFS integration | Yes
Productivity Power Tools | Yes | 10.0.11019.3 | Free | Recommended | Coding | Yes
Code Contracts | No | 1.4.30903 | Free | Recommended | Coding & Quality | Yes
Code Contracts Editor Extensions | Yes | 1.4.30903 | Free | Recommended | Coding & Quality | Yes
VSCommands | Yes | 3.6.4.1 | Lite version Free (Good enough) | Nice to have | Coding | No
Power Commands | Yes | 1.0.2.3 | Free | Recommended | Coding | Yes
FeaturePack 2 | No. MSDN Subscriber download under Visual Studio 2010 FP2 | – | Part of MSDN Subscription | Recommended | Modeling & Testing | Yes
ReSharper | No (Trial only) | 5.1.1 | Licensed | Recommended | Coding & Quality | No
dotTrace | No | 4.0.1 | Licensed | Optional | Quality | No
NDepends | No (Trial only) | – | Licensed | Optional | Quality | No
tangible T4 editor | Yes | 1.950 | Lite version Free (Good enough) | Optional | Coding (T4 templates) | No
Reflector | No (Trial of Pro version only) | 6.5 | Lite version Free (Good enough) | Recommended | Coding/Investigation | No
LinqPad | No | 4.26.2 | Licensed | Nice to have | Coding | No
Beyond Compare | No | 3.1.11 | Licensed | Recommended | Coding/Investigation | No
Pex and Moles | No (Moles available alone on CG). Complete on MSDN Subscriber download under Visual Studio 2010 | 0.94.51023 | Part of MSDN Subscription | Optional | Coding & Unit Testing | Yes
ApexSQL | No | – | Licensed | Nice to have | SQL | No
Some important patches, upgrades and fixes:
Product | Date | Information | Rec/Opt | Applicable to
Scrolling context menu KB2345133 and KB2413613 | October 2010 | Here | Recommended | Visual Studio
MTM Patch | October 2010 | Here and here, KB2387011 | Recommended (if you use MTM) | MTM
Data warehouse fix | June 2010 | Iteration dates fail with SQL 2008 R2 (KB2222312); affects the Burndown chart in the Agile workbook | Only for SQL 2008 R2 | Server
Upgrade 2008 to 2010 issue and hotfix | August 2010 | Fixes problems with labels and branches which are lost during upgrade; apply before upgrade. Note: this has been fixed in the latest re-release of the TFS Server dated Aug 5th 2010 (see here) – download the latest bits | Only if upgrading from earlier 2008 bits | Server

    Read the article

  • What I don&rsquo;t like about WIF&rsquo;s Claims-based Authorization

    - by Your DisplayName here!
In my last post I wrote about what I like about WIF's proposed approach to authorization – I also said that I would definitely build upon that infrastructure for my own systems. But implementing such a system is a little harder than it needs to be. Here's why (and that's purely my perspective). First of all, WIF's authorization comes in two "modes":
- Per-request authorization. When an ASP.NET/WCF request comes in, the registered authorization manager gets called. For SOAP the SOAP action gets passed in; for HTTP requests (ASP.NET, WCF REST) the URL and verb.
- Imperative authorization. This happens when you explicitly call the claims authorization API from within your code. There you have full control over the values for action and resource.
In ASP.NET, per-request authorization is optional (it depends on whether you have added the ClaimsAuthorizationHttpModule). In WCF you always get the per-request checks as soon as you register the authorization manager in configuration. I personally prefer imperative authorization because, first of all, I don't believe in URL-based authorization. Especially in the times of MVC and routing tables, URLs can be easily changed – and then you also have to adjust your authorization logic every time. Also, you typically need more knowledge than a simple "is user x allowed to invoke operation x". One problem I have is that both the per-request calls and the standard WIF imperative authorization APIs wrap actions and resources in the same claim type. This makes it hard to distinguish between the two authorization modes in your authorization manager – but you typically need that ability to structure your authorization policy evaluation in a clean way. The second problem (which is somewhat related to the first) is the standard API for interacting with the claims authorization manager. The API comes as an attribute (ClaimsPrincipalPermissionAttribute) as well as a class to use programmatically (ClaimsPrincipalPermission). Both only allow passing in simple strings (which results in the wrapping with standard claim types mentioned earlier), and both throw a SecurityException when the check fails. The attribute is a code access permission attribute (like PrincipalPermission), which means it will always be invoked regardless of how you call the code. This may be exactly what you want, or not. In a unit-testing situation (like an MVC controller) you typically want to test the logic in the function – not the security check. The good news is that the WIF API is flexible enough that you can build your own infrastructure around its core. For my own projects I implemented the following extensions:
- A way to invoke the registered claims authorization manager with more overloads, e.g. with different claim types or a complete AuthorizationContext.
- A new CAS attribute (with the same calling semantics as the built-in one) with custom claim types.
- An MVC authorization attribute with custom claim types.
- A way to use branching – as opposed to catching a SecurityException.
I will post the code for these various extensions here – so stay tuned.

    Read the article

  • Stack Exchange Notifier Chrome Extension [v1.2.9.3 released]

    - by Vladislav Tserman
About: Stack Exchange Notifier is a handy extension for the Google Chrome browser that displays your current reputation and badges on Stack Exchange sites and notifies you of reputation changes. You will now get notified of comments on your own posts (questions and answers) and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). All Stack Exchange sites are supported.
Access: Install the extension from the Google Chrome Extension Gallery.
Platform: Google Chrome browser extension.
Contact: Created by me (Vladislav Tserman). I'm available at: vladjan (at) gmail.com. Follow Stack Exchange Notifier on Twitter to get notified about news and updates: http://twitter.com/se_notifier
Code: Written in Java with Google Web Toolkit under Eclipse Helios. Stack Exchange Notifier uses the Stack Exchange API and is powered by Google App Engine for Java.
Changelog: I will be porting the extension away from the App Engine back-end due to some limitations; new versions of the extension will make direct calls to the Stack Exchange API right from your browser. Please do not expect new versions of the extension any time soon – sorry. Read more about the limitations here: http://stackapps.com/questions/1713 and here: http://stackoverflow.com/questions/3949815. Currently you may sometimes experience some issues using the extension, but most users will have no problems. You may notice many errors in the logs, but there is nothing I can do about this right now. Thanks for using my little app – thanks to all of you, it still works in spite of many issues with the API.
Version 1.2.9.3 – Thursday, October 14, 2010 – Bug fix release (back-end improvements)
Version 1.2.9.2 – Thursday, October 07, 2010 – Bug fix release (a high rate of occasional API errors was noticed, so some fixes were added to handle them where possible)
Version 1.2.9.1 – Tuesday, October 05, 2010 – Mostly a bug fix release with back-end performance improvements. You will now get notified of comments on your own posts (questions and answers) that are not older than 1 year, and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions); this is an experimental feature, let me know if you like/need it. A new 'All sites' view displays all websites from the Stack Exchange network (part of a new feature that is not finished yet)
Version 1.2.9 – Saturday, September 25, 2010 – Fixes an issue where some users got an empty Account view. When hovering over @Username in the Account view, the title now displays '@Username on @SiteName' to make the site name easy to understand
Version 1.2.7 – Wednesday, September 22, 2010 – Fixed an issue with notifications; minor improvements
Version 1.2.5 – Tuesday, September 21, 2010 – Fixed an issue where some characters in the response payload raised an exception when parsing to JSON
v1.2.3 (Sunday, September 19, 2010) – Support for new OpenID providers was added (Yahoo, MyOpenID, AOL); UI improvements; several minor defects were fixed
v1.2.2 (Thursday, September 16, 2010) – New types of notifications added: the extension now notifies you of comments that are directed at you. Comments are expandable, so clicking on a comment title will expand its height to accommodate all available text; UI and error-handling improvements
Future: The application is still in beta. I hope you're not having any problems, but if you are, please let me know. Leave your feedback and bug reports in the comments. I'm available at: vladjan (at) gmail.com. I'm working on adding new features.
I want to hear from the users and incorporate as much feedback as possible into the extension. Any suggestions for improvements/features to add?

    Read the article

  • My new laptop - with a really nice battery option

    - by Rob Farley
It was about time I got a new laptop, so I made a phone call to Dell to discuss my options. I decided not to get an SSD from them, because I'd rather choose one myself – the sales guy tells me that changing the HD doesn't void my warranty, so that's good (incidentally, I'd love to hear people's recommendations for which SSD to get for my laptop). Unfortunately this machine only has one HD slot, but I figure that I'll put lots of stuff onto external disks anyway. The machine I got was a Dell Studio XPS 16. It's red (which suits my company), but also has the Intel® Core™ i7-820QM processor, which is 4 cores/8 threads. Makes for a pretty Task Manager, but nothing like the one I saw at SQLBits last year (at 96 cores), or the one that my good friend James Rowland-Jones writes about here. But the reason for this post is actually something in the software that comes with the machine – you know, the stuff that most people uninstall at the earliest opportunity. I had just reinstalled the operating system and was going through the utilities to get the drivers up to date when I noticed that one of the Dell applications included an option to disable battery charging. So I installed it. And sure enough, I can now tell the battery not to charge. Clearly Dell sees it as a temporary option, one designed for when you're on a plane. But I most often use my laptop with the power plugged in, which means I don't need the battery continually topping itself up. So I really love this option, but I feel like it could go a little further. I'd like "Not Charging" to be the default, letting me enable charging only when I want it (which should theoretically make my battery last longer). I also intend to work out how this option works, so that I can script it and put it into my startup options (so it can be the default setting). Actually – if someone has already worked this out and can tell me what it does, then please feel free to let me know. Even better would be an external switch. I had a switch on my old laptop (a Dell Latitude) for WiFi, so that I could turn it off before I turned on the computer (this laptop doesn't give me that option – no physical switch for flight mode). I guess it just means I'll get used to leaving the WiFi off by default and turning it on when I want it – I might save myself some battery power that way too. Soon I'll need to take the plunge and sync my iPhone with the new laptop. I'm a little worried that I might lose something – Apple's messages about how my stuff will be wiped and replaced with what's on the PC don't fill me with confidence, as it's a new PC that doesn't have stuff on it. But having a new machine is definitely a nice experience, and one that I can recommend. I'm sure when I get around to buying an SSD it will feel shiny and new all over again!

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
I am a frequent reader of Brent Ozar PLF; it is one of my favorite blogs. A recent post announced a "Who Wrote This?" contest to see if readers could tell the three contributors apart based on writing samples. Here are my favorite lines from the sample paragraphs, from each of the three "mystery authors."
Topic 1: Working with Bad Managers. Mystery Author A – "Working with bad managers means working against my own happiness, and I've come to learn that there's no changing bad managers." I love this line because, as anyone who has had a bad manager knows, a lot of self-doubt often rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – "Mentor your manager just like you would mentor a junior DBA." Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – "The trick to working for all bad managers is to remember that they aren't your parent. Take charge of your career." We also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter.
Topic 2: Working with Remote Teams. Mystery Author A – "Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur." Communication is so important; I cannot overemphasize how much. And this one line captures how I feel and communicates the idea clearly! Mystery Author B – "The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time." This line captures the key aspects of remote work – verifiable work and trust – and so many of the lines that followed were ones I loved but could not fit here. The whole paragraph is a checklist for successful remote work; everyone could benefit from reading it. Mystery Author C – "What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another." You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird!
Topic 3: Working with Your Nemesis. Mystery Author A – "Every job is temporary, but your reputation stays with you." Everyone needs to remember this. The workplace is meant to be a professional arena, yet many people have the opinion that work is temporary and disposable – and no one wants to work with a co-worker like that. Mystery Author B – "Unhealthy conflict is going to lead to leaving three-week-old tuna fish sandwiches in someone's desk drawer." Sometimes humor really is the best policy! Mystery Author C – "Oh no, it's that guy." This might seem like a weird phrase to choose as my favorite from an entire paragraph, but the whole piece was written as a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

< Previous Page | 253 254 255 256 257 258 259 260 261 262 263 264  | Next Page >