Search Results

Search found 19953 results on 799 pages for 'post get'.


  • Array structure returned by Yii's model

    - by user1104955
    I am a Yii beginner and am running into a bit of a wall, and I hope someone will be able to help me get back on track. I think this might be a fairly straightforward question for the seasoned Yii user. So here goes... In the controller, let's say I run the following call to the model:

        $variable = Post::model()->findAll();

    All works fine and I pass the variable into the view. Here's where I get pretty stuck. The array returned by the above query is far more complex than I anticipated and I'm struggling to make sense of it. Here's a sample; print_r($variable); gives:

        Array ( [0] => Post Object ( [_md:CActiveRecord:private] => CActiveRecordMetaData Object ( [tableSchema] => CMysqlTableSchema Object ( [schemaName] => [name] => tbl_post [rawName] => `tbl_post` [primaryKey] => id [sequenceName] => [foreignKeys] => Array ( ) [columns] => Array ( [id] => CMysqlColumnSchema Object ( [name] => id [rawName] => `id` [allowNull] => [dbType] => int(11) [type] => integer [defaultValue] => [size] => 11 [precision] => 11 [scale] => [isPrimaryKey] => 1 [isForeignKey] => [autoIncrement] => 1 [_e:CComponent:private] => [_m:CComponent:private] => ) [post] => CMysqlColumnSchema Object ( [name] => post [rawName] => `post` [allowNull] => [dbType] => text [type] => string [defaultValue] => [size] => [precision] => [scale] => [isPrimaryKey] => [isForeignKey] => [autoIncrement] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [_e:CComponent:private] => [_m:CComponent:private] => ) [columns] => Array ( [id] => CMysqlColumnSchema Object ( [name] => id [rawName] => `id` [allowNull] => [dbType] => int(11) [type] => integer [defaultValue] => [size] => 11 [precision] => 11 [scale] => [isPrimaryKey] => 1 [isForeignKey] => [autoIncrement] => 1 [_e:CComponent:private] => [_m:CComponent:private] => ) [post] => CMysqlColumnSchema Object ( [name] => post [rawName] => `post` [allowNull] => [dbType] => text [type] => string [defaultValue] => [size] => [precision] => [scale] => [isPrimaryKey] => [isForeignKey] => [autoIncrement] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [relations] => Array ( [responses] => CHasManyRelation Object ( [limit] => -1 [offset] => -1 [index] => [through] => [joinType] => LEFT OUTER JOIN [on] => [alias] => [with] => Array ( ) [together] => [scopes] => [name] => responses [className] => Response [foreignKey] => post_id [select] => * [condition] => [params] => Array ( ) [group] => [join] => [having] => [order] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [attributeDefaults] => Array ( ) [_model:CActiveRecordMetaData:private] => Post Object ( [_md:CActiveRecord:private] => CActiveRecordMetaData Object *RECURSION* [_new:CActiveRecord:private] => [_attributes:CActiveRecord:private] => Array ( ) [_related:CActiveRecord:private] => Array ( ) [_c:CActiveRecord:private] => [_pk:CActiveRecord:private] => [_alias:CActiveRecord:private] => t [_errors:CModel:private] => Array ( ) [_validators:CModel:private] => [_scenario:CModel:private] => [_e:CComponent:private] => [_m:CComponent:private] => ) ) [_new:CActiveRecord:private] => [_attributes:CActiveRecord:private] => Array ( [id] => 1 [post] => User Post ) [_related:CActiveRecord:private] => Array ( ) [_c:CActiveRecord:private] => [_pk:CActiveRecord:private] => 1 [_alias:CActiveRecord:private] => t [_errors:CModel:private] => Array ( ) [_validators:CModel:private] => [_scenario:CModel:private] => update [_e:CComponent:private] => [_m:CComponent:private] => ) )

    [Sorry if there's an easier way to show this array; I'm not aware of one.] Can anyone explain why the model returns such a complex array? It doesn't seem to matter which tables, columns, or relations an application uses; they all seem to return this format. Also, can someone explain the structure so that I can isolate the variables I want to recover? Many thanks in advance, Nick

    Read the article

  • How to redirect with .htaccess (keeping legacy links)

    - by Laurent
    Hello, I recently switched CMSes. While using WordPress, I had this permalink convention: "/year/post". Now I'd like to have "/year/month/post". To keep legacy links working, I need to redirect from "http://site.com/2009/sample-post" to "http://site.com/2009/01/sample-post"; "01" should be permanent in this case. This is what I've got at the moment:

        RewriteEngine on
        RewriteCond $1 !^(images|system|themes|_|wp-content|mint|assets|favicon\.ico|robots\.txt|index\.php) [NC]
        RewriteRule ^(.*)$ /index.php?/$1 [L]

    Thanks in advance!
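    A plausible shape for the missing redirect — offered as a minimal, untested sketch that assumes every legacy URL really does map to month "01" as described — is a 301 rule placed above the existing front-controller rules:

        RewriteEngine on
        # Permanently redirect legacy /year/post links to /year/01/post
        # (assumes the month segment is always "01", per the question)
        RewriteRule ^([0-9]{4})/([^/]+)$ /$1/01/$2 [R=301,L]
        # Existing front-controller rules, unchanged
        RewriteCond $1 !^(images|system|themes|_|wp-content|mint|assets|favicon\.ico|robots\.txt|index\.php) [NC]
        RewriteRule ^(.*)$ /index.php?/$1 [L]

    R=301 makes the redirect permanent, and because the redirected URL has three path segments it no longer matches the year/post pattern, so the rule should not loop.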

    Read the article

  • How to get the content of an iframe in a PHP variable? [closed]

    - by Sahil
    My code is somewhat like this:

        <?php
        if($_REQUEST['post']) {
            $title=$_REQUEST['title'];
            $body=$_REQUEST['body'];
            echo $title.$body;
        }
        ?>
        <script type="text/javascript" src="texteditor.js"> </script>
        <form action="" method="post">
        Title: <input type="text" name="title"/><br>
        <a id="bold" class="font-bold"> B </a>
        <a id="italic" class="italic"> I </a>
        Post: <iframe id="textEditor" name="body"></iframe>
        <input type="submit" name="post" value="Post" />
        </form>

    The texteditor.js file code is:

        $(document).ready(function(){
            document.getElementById('textEditor').contentWindow.document.designMode="on";
            document.getElementById('textEditor').contentWindow.document.close();
            $("#bold").click(function(){
                if($(this).hasClass("selected")) {
                    $(this).removeClass("selected");
                } else {
                    $(this).addClass("selected");
                }
                boldIt();
            });
            $("#italic").click(function(){
                if($(this).hasClass("selected")) {
                    $(this).removeClass("selected");
                } else {
                    $(this).addClass("selected");
                }
                ItalicIt();
            });
        });

        function boldIt(){
            var edit = document.getElementById("textEditor").contentWindow;
            edit.focus();
            edit.document.execCommand("bold", false, "");
            edit.focus();
        }

        function ItalicIt(){
            var edit = document.getElementById("textEditor").contentWindow;
            edit.focus();
            edit.document.execCommand("italic", false, "");
            edit.focus();
        }

        function post(){
            var iframe = document.getElementById("body").contentWindow;
        }

    Actually, I want to fetch the data from this text editor (which is created using an iframe and JavaScript) and store it somewhere else. I'm not able to fetch the content that is entered in the editor (i.e. the iframe here). Please help me out of this....
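    For what it's worth, a common cause of this symptom is that an iframe in designMode is not a form control, so the browser never includes its content in the POST. A minimal sketch of one workaround, assuming a hidden input named "body_html" (a hypothetical field, not in the original markup) is added to the form:

        // Sketch: copy the editor's HTML into a real form field before the POST.
        // Assumes the form gains <input type="hidden" name="body_html" /> —
        // a hypothetical field name, not part of the original markup.
        $("form").submit(function(){
            var html = $("#textEditor").contents().find("body").html(); // editor content
            $("input[name='body_html']").val(html);
        });

    On the PHP side, the content would then arrive as $_REQUEST['body_html'].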

    Read the article

  • Ajax posting to PHP

    - by JQonfused
    Hi guys, I'm testing a jQuery ajax post method on a local Apache 2.2 server with PHP 5.3 (totally new at this). Here are the files, all in the same folder.

    The HTML body (jQuery library included in the head):

        <form id="postForm" method="post">
        <label for="name">Input Name</label>
        <input type="text" name="name" id="name" /><br />
        <label for="age">Input Age</label>
        <input type="text" name="age" id="age" /><br />
        <input type="submit" value="Submit" id="submitBtn" />
        </form>
        <div id="resultDisplay"></div>
        <script src="queryRequest.js"></script>

    queryRequest.js:

        $(document).ready(function(){
            $('#s').focus();
            $('#postForm').submit(function(){
                var name = $('#name').val();
                var age = $('#age').val();
                var URL = "post.php";
                $.ajax({
                    type:'POST',
                    url: URL,
                    datatype:'json',
                    data:{'name': name ,'age': age},
                    success: function(data){
                        $('#resultDisplay').append("Value returned.<br />name: "+data.name+" age: "+data.age);
                    },
                    error: function() {
                        $('resultDisplay').append("ERROR!")
                    }
                });
            });
        });

    post.php:

        <?php
        $name = $_POST['name'];
        $age = $_POST['age'];
        $return = array('name' => $name, 'age' => $age);
        echo json_encode($return);
        ?>

    After inputting the two fields and pressing 'Submit', the success method is called and text is appended, but the values returned from the ajax post are undefined. Then, after less than a second, the text fields are emptied and the text appended to the div is gone. It doesn't seem to be a page refresh, though, since there's no empty-page flash. What's going on here? I'm sure it's a silly mistake, but Firebug isn't telling me anything.
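    The symptoms described are consistent with two small issues in queryRequest.js rather than anything server-side — a diagnosis from the code shown, not a verified fix. The submit handler never cancels the form's default submission, so the page really does reload (quickly enough to look like no refresh), and the option name is dataType, not datatype, so jQuery never parses the response as JSON and data.name/data.age come back undefined. A corrected sketch of the handler:

        // Sketch of the handler with the likely fixes applied:
        $('#postForm').submit(function (e) {
            e.preventDefault(); // stop the default submit, which reloads the page
            $.ajax({
                type: 'POST',
                url: 'post.php',
                dataType: 'json', // correct casing; lowercase 'datatype' is ignored by jQuery
                data: { name: $('#name').val(), age: $('#age').val() },
                success: function (data) {
                    $('#resultDisplay').append("name: " + data.name + " age: " + data.age);
                },
                error: function () {
                    $('#resultDisplay').append("ERROR!"); // note the missing '#' in the original selector
                }
            });
        });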

    Read the article

  • Will the error be displayed?

    - by user281180
    I have an ajax post, and in the controller I return nothing. In case there is a failure, will the error message be displayed with the following code?

        [AcceptVerbs(HttpVerbs.Post)]
        public void Edit(Model model)
        {
            model.Save();
        }

        $.ajax({
            type: "POST",
            url: '<%=Url.Action("Edit","test") %>',
            data: JSON.stringify(data),
            contentType: "application/json; charset=utf-8",
            dataType: "html",
            success: function() { },
            error: function(request, status, error) {
                alert("Error: " & request.responseText);
            }
        });
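    One observation on the snippet as posted (an aside, not a full answer to the question): & is JavaScript's bitwise AND operator, not string concatenation, so even when the error callback does fire — for example, if model.Save() throws and the server responds with a 500 — the alert won't show the response text as intended. The callback would need + instead:

        error: function (request, status, error) {
            // '+' concatenates strings in JavaScript; '&' is a bitwise AND
            // and would not build the intended message.
            alert("Error: " + request.responseText);
        }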

    Read the article

  • jQuery serialize() + custom?

    - by tazphoenix
    When using $.post and $.get in jQuery, is there any way to add custom vars to the URL and send them too? I tried the following:

        $.ajax({type:"POST", url:"file.php?CustomVar=data", data:$("#form").serialize()});

    And:

        <input name="CustomVar" type="hidden" value="data" />
        $.ajax({type:"POST", url:"file.php", data:$("#form").serialize()});

    The problem with the first one is that it sends the custom variable as GET, but I want to receive it as POST. The second one works and I'm using it right now, but isn't there a better way?
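    One common alternative — a sketch of the usual idiom, not necessarily the only or best option — is to append the extra pair to the serialized string, which keeps it in the POST body without touching the markup:

        // Append a custom name/value pair to the serialized form data so it is
        // sent in the POST body alongside the form fields (arriving in $_POST).
        $.ajax({
            type: "POST",
            url: "file.php",
            data: $("#form").serialize() + "&CustomVar=" + encodeURIComponent("data")
        });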

    Read the article

  • IO::Pipe - close(<handle>) does not set $?

    - by danboo
    My understanding is that closing the handle for an IO::Pipe object should be done with the method ($fh->close) and not the built-in (close($fh)). The other day I goofed and used the built-in out of habit on an IO::Pipe object that was opened to a command that I expected to fail. I was surprised when $? was zero, and my error checking wasn't triggered. I realized my mistake. If I use the built-in, IO::Pipe can't perform the waitpid() and can't set $?. But what I was surprised by was that Perl seemed to still close the pipe without setting $? via the core. I worked up a little test script to show what I mean:

        use 5.012;
        use warnings;
        use IO::Pipe;

        say 'init pipes:';
        pipes();

        my $fh = IO::Pipe->reader(q(false));
        say 'post open pipes:';
        pipes();

        say 'return: ' . $fh->close;
        #say 'return: ' . close($fh);
        say 'status: ' . $?;
        say q();

        say 'post close pipes:';
        pipes();

        sub pipes {
            for my $fd ( glob("/proc/self/fd/*") ) {
                say readlink($fd) if -p $fd;
            }
            say q();
        }

    When using the method, it shows the pipe being gone after the close and $? is set as I expected:

        init pipes:

        post open pipes:
        pipe:[992006]

        return: 1
        status: 256

        post close pipes:

    And when using the built-in, it also appears to close the pipe, but does not set $?:

        init pipes:

        post open pipes:
        pipe:[952618]

        return: 1
        status: 0

        post close pipes:

    It seems odd to me that the built-in results in the pipe closure but doesn't set $?. Can anyone help explain the discrepancy? Thanks!

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as being ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ... [Please read the entire post so the quote is not taken out of context.]

    Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used?

    2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).

    Read the article

  • Parsing SQLIO Output to Excel Charts using Regex in PowerShell

    - by Jonathan Kehayias
    Today Joe Webb ( Blog | Twitter ) blogged about The Power of Regex in Powershell, and in his post he shows how to parse the SQL Server Error Log for events of interest. At the end of his blog post Joe asked about other places where Regular Expressions have been useful in PowerShell so I thought I’d blog my script for parsing SQLIO output using Regex in PowerShell, to populate an Excel worksheet and build charts based on the results automatically. If you’ve never used SQLIO, Brent Ozar ( Blog | Twitter...(read more)

    Read the article

  • Adding Client-Side events to DevExpress ASP.Net controls

    - by nikolaosk
    I have been involved in an ASP.Net project recently and I have implemented it using the awesome DevExpress ASP.Net controls. In this post I would like to show you how to use the client-side events that can make the user experience of your web application much better. We avoid unnecessary page flickering and postbacks. All this functionality is possible through the magic of Ajax and Javascript. I am not going to cover Ajax and Javascript in this post. With the DevExpress ASP.net controls...(read more)

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData.

    Introduction to OData

    The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by SharePoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites:

        http://www.odata.org
        http://msdn.microsoft.com/en-us/data/bb931106.aspx

    When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table.

    Create the Database and Data Model

    The MoviesDB database is a simple database that contains a single Movies table. You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry.

    Create a WCF Data Service

    You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc.

    Figure 1 – Adding a WCF Data Service

    Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify.

    Listing 1 – New WCF Data Service File

        using System;
        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;
        using System.ServiceModel.Web;
        using System.Web;

        namespace WebApplication1
        {
            public class MovieService : DataService< /* TODO: put your data source class name here */ >
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                    // Examples:
                    // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
                    // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record.
    The updated MovieService.svc is contained in Listing 2:

    Listing 2 – MovieService.svc

        using System.Data.Services;
        using System.Data.Services.Common;

        namespace WebApplication1
        {
            public class MovieService : DataService<MoviesDBEntities>
            {
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies. The request must be a POST request. The Movie must be represented as JSON.

    Using jQuery with OData

    The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol.

    Listing 3 – Default.htm

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>jQuery OData Insert</title>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="Scripts/json2.js" type="text/javascript"></script>
        </head>
        <body>
            <form>
                <label>Title:</label> <input id="title" /> <br />
                <label>Director:</label> <input id="director" />
            </form>
            <button id="btnAdd">Add Movie</button>
            <script type="text/javascript">
                $("#btnAdd").click(function () {
                    // Convert the form into an object
                    var data = { Title: $("#title").val(), Director: $("#director").val() };
                    // JSONify the data
                    var data = JSON.stringify(data);
                    // Post it
                    $.ajax({
                        type: "POST",
                        contentType: "application/json; charset=utf-8",
                        url: "MovieService.svc/Movies",
                        data: data,
                        dataType: "json",
                        success: insertCallback
                    });
                });

                function insertCallback(result) {
                    // unwrap result
                    var newMovie = result["d"];
                    // Show primary key
                    alert("Movie added with primary key " + newMovie.Id);
                }
            </script>
        </body>
        </html>

    jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method:

        var data = JSON.stringify(data);

    You can download the JSON2 library from the following website: http://www.json.org/js.html

    The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie.

    Summary

    The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.

    Read the article

  • Tips on installing Visual Studio 2010 SP1

    - by Jon Galloway
    Visual Studio SP1 went up on MSDN downloads (here) on March 8, and will be released publicly on March 10 here. Release announcements:

        Soma: Visual Studio 2010 enhancements
        Jason Zander: Announcing Visual Studio 2010 Service Pack 1

    I started on this post with tips on installing VS2010 SP1 when I realized I’ve been writing these up for Visual Studio and .NET framework SP releases for a while (e.g. VS2008 / .NET 3.5 SP1 post, VS2005 SP1 post). Looking back over the years of Visual Studio SP installs (and remembering when we’d get up to SP6 for a Visual Studio release), I’m happy to see that it just keeps getting easier. Service packs are a lot less finicky about requiring beta software to be uninstalled, install more quickly, and are just generally a lot less scary. If I can’t have a jetpack, at least my future provided me faster, easier service packs.

    Disclaimer: These tips are just general things I've picked up over the years. I don't have any inside knowledge here. If you see anything wrong, be sure to let me know in the comments. You may want to check the readme file before installing - it's short, and it's in that new-fangled HTML format. On with the tips!

    Before starting, uninstall Visual Studio features you don't use

    Visual Studio service packs (and other Microsoft service packs as well) install patches for the specific features you’ve got installed. This is a big reason to always do a custom install when you first install Visual Studio, but it’s not difficult to update your existing installation. Here’s the quick way to do that:

        1. Tap the Windows key, type “add or remove programs” and press Enter (or click on the “Add or remove programs” link if you must).
        2. Type “Visual Studio 2010” in the search box in the upper right corner, click on the Visual Studio program (the one with the VS infinity-looking logo) and click on Uninstall/Change.
        3. Click on Add or Remove Features.

    The next part’s up to you – what features do you actually use? I’ve been doing primarily ASP.NET MVC development in C# lately, so I selected Visual C# and Visual Web Developer. Remember that you can install features later if needed, and can also install the express versions if you want. Selecting everything just because it’s there - or you paid for it – means that you install updates for everything, every time. When you’ve made your changes, click on the Update button to uninstall unused features.

    Shut down all instances of Visual Studio

    It probably goes without saying that you should close a program down before installing it, partly to avoid the file-in-use-reboot-after-install horror. Additional "hunch / works on my machine" quality tip: on one computer I saw a note in the setup log about a prompt for user input to close Visual Studio, although I never saw the prompt. Just to be sure, I'd personally open up Task Manager and kill any devenv.exe processes I saw running, as it couldn't hurt.

    Use the web installer

    I use the web installers whenever possible. There’s no point in downloading the DVD unless you’re doing multiple installs or won’t have internet access. The DVD ISO is 1.5GB, since it needs to be able to service every possible supported installation option on both x86 and x64. The web installer is 776 KB (smaller than calc.exe), so you can start the installation right away. Like other web installers, the real benefit is that it only installs the updates you need (hence the reason for step 1 – uninstalling unused components). Instead of 1.5GB, my download was roughly 530MB.
    If you’re installing from MSDN (this link takes you right to the Visual Studio installs), select the first one on the list.

    The first step in the installation process is to analyze the machine configuration and tell you what needs to be installed. Since I've trimmed down my features, that's a pretty short list. The time's not far off where I may not install SQL Server on my dev machines, just using SQL Server Compact - that would shorten the list further. When I hit next, you can see that the download size has shrunk considerably. When I start the install, note that the installation begins while other components are downloading - another benefit of the web install. On my mid-range desktop machine, the install took 25 minutes.

    What if it takes longer? According to Heath Stewart (Visual Studio installer guru), average SP1 installs take roughly 45 minutes. An installation which takes hours to complete may be a sign of a problem: see his post Visual Studio 2010 Service Pack 1 installing for over 2 hours could be a sign of a problem.

    Why so long? Yes, even 25 minutes is a while. Heath's got another blog post explaining why the update can take longer than the initial install (see: A patch may take as long or longer to install than the target product), which explains all the additional steps and complexities a patch needs to deal with, as well as some mitigation steps that deployment authors can take to mitigate the impact.

    Other things to know about Visual Studio 2010 SP1

    Installs over Visual Studio 2010 SP1 Beta: That's nice. Previous Visual Studio versions did a number of annoying things when you installed SPs over betas - fail with weird errors, get part way through and tell you that you needed to cancel and uninstall first, etc. I've installed this on two machines that had random beta stuff installed without tears.

    That readme file you didn't read: I mentioned the readme file earlier (http://go.microsoft.com/fwlink/?LinkId=210711). Some interesting things I picked up in there:

        2.1.3. Visual Studio 2010 Service Pack 1 installation may fail when a USB drive or other removable drive is connected
        2.1.4. Visual Studio must be restarted after Visual Studio 2010 SP1 tooling for SQL Server Compact (Compact) 4.0 is installed
        2.2.1. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled to restore certain components
        2.2.2. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled before SP1 can be installed again

    2.4.3.1. Async CTP: If you installed the pre-SP1 version of the Async CTP but did not uninstall it before you installed Visual Studio 2010 SP1, then your computer will be in a state in which the version of the C# compiler in the .NET Framework does not match the C# compiler in Visual Studio. To resolve this issue: after you install Visual Studio 2010 SP1, reinstall the SP1 version of the Async CTP from here.

    Hardware acceleration for Visual Studio is disabled on Windows XP: Visual Studio 2010 SP1 disables hardware acceleration when running on Windows XP (only on XP). You can turn it back on in the Visual Studio options, under Environment / General. See Jason Zander's post titled Performance Troubleshooting Article and VS2010 SP1 Change.

    Read the article

  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
    Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME and I got a tremendous response to it. I suggest you read that blog post before continuing with today's post. I had asked people to honestly take part and share their views about the above two system functions. There were a few emails as well as a few comments on the blog post asking how I came to know the difference between the two. The answer is real-world issues. I was called in for a performance tuning consultancy where I was asked a very strange question by one developer. Here is the situation he was facing. The system had a single table with two different datetime columns. One column was datelastmodified and the second column was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2. The developer was populating both with SYSDATETIME. He had always assumed that the values inserted in the table would be the same. This table was only accessed by INSERT statements, and there were no updates done to it in the application. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He had always thought that both of the columns would have the same data, but in fact they had very different data. He presented this scenario to me. I said this could not be possible, but when I looked at the resultset, I had to agree with him. Here is a simple script generated to demonstrate the problem he was facing. This is just a sample of the original table.

        DECLARE @Intveral INT
        SET @Intveral = 10000
        CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2)
        WHILE (@Intveral > 0)
        BEGIN
            INSERT #TimeTable (FirstDate, LastDate)
            VALUES (SYSDATETIME(), SYSDATETIME())
            SET @Intveral = @Intveral - 1
        END
        GO
        SELECT COUNT(DISTINCT FirstDate) D_GETDATE,
               COUNT(DISTINCT LastDate) D_SYSGETDATE
        FROM #TimeTable
        GO
        SELECT DISTINCT a.FirstDate, b.LastDate
        FROM #TimeTable a
        INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate
        GO
        SELECT * FROM #TimeTable
        GO
        DROP TABLE #TimeTable
        GO

    Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact, the value is either rounded down or rounded up in the field which is DATETIME. Even though we are populating the same value, the values are totally different in the two columns, causing the SELF JOIN to fail and DISTINCT to display different values. The best policy is: if you are using DATETIME, use GETDATE(), and if you are using DATETIME2, use SYSDATETIME() to populate them with the current date and time to accurately address the precision. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • An XEvent a Day (10 of 31) – Targets Week – etw_classic_sync_target

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week – pair_matching, looked at the pair_matching Target in Extended Events and how it could be used to find unmatched Events. Today’s post will cover the etw_classic_sync_target Target, which can be used to track Events starting in SQL Server, out to the Windows Server OS Kernel, and then back to the Event completion in SQL Server. What is the etw_classic_sync_target Target? The etw_classic_sync_target Target is the target that hooks Extended Events in SQL Server...(read more)

    Read the article

  • An XEvent a Day (11 of 31) – Targets Week – Using Multiple Targets to Debug Orphaned Transactions

    - by Jonathan Kehayias
    Yesterday’s blog post Targets Week – etw_classic_sync_target covered the ETW integration that is built into Extended Events and how the etw_classic_sync_target can be used in conjunction with other ETW traces to provide troubleshooting at a level previously not possible with SQL Server. In today’s post we’ll look at how to use multiple targets to simplify analysis of Event collection. Why Multiple Targets? You might ask why you would want to use multiple Targets in an Event Session with Extended...(read more)

    Read the article

  • SQL SERVER – Wait Stats – Wait Types – Wait Queues – Day 0 of 28

    - by pinaldave
    This blog post will keep a running account of all the blog posts I will be doing this month related to SQL Server Wait Types and Wait Queues.

        SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28
        SQL SERVER – Single Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQLAuthority News – Monthly Roundup of Best SQL Posts

    - by pinaldave
    After receiving lots of requests from different readers for a long time, I have decided to write my first monthly roundup. If all of you like it, I will continue writing one every month. In fact, I really like the idea, as I was able to go back and read all of my posts written this month. This month started with answering one of the most common questions asked of me: What is AdventureWorks? Many of you know the answer, but to my surprise a large number of readers did not. There were a few extra blog posts along the same lines:

        SQL SERVER – The Difference between Dual Core vs. Core 2 Duo
        SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password
        SQL SERVER – DATE and TIME in SQL Server 2008

    DMVs are also among the handiest tools available in SQL Server; I have written the following blog posts where I used DMVs in scripts:

        SQL SERVER – Get Latest SQL Query for Sessions – DMV
        SQL SERVER – Find Most Expensive Queries Using DMV
        SQL SERVER – List All the DMV and DMF on Server

    I was able to write two follow-ups to my earlier series where I was finding the size of the indexes using different SQL scripts; in fact, PowerShell is used in one of them as well. This was my very first attempt to use PowerShell.

        SQL SERVER – Size of Index Table for Each Index – Solution 2
        SQL SERVER – Size of Index Table for Each Index – Solution 3 – Powershell
        SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    Without realizing it, I wrote a series of blog posts on disabled indexes; here is the complete list. I plan to write one more follow-up list on the same.

        SQL SERVER – Disable Clustered Index and Data Insert
        SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index
        SQL SERVER – Disabled Index and Update Statistics

    Two special posts which I found very interesting to write are the following:

        SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008
        SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    In personal adventures, I won the Community Impact Award for last year from Microsoft. Please leave a comment about how I can improve this roundup or what more details I should include. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Introduction to Lean Software Development and Kanban Systems – Eliminate Waste

    - by Ben Griswold
    In this post, we’ll continue the Lean Software Development and Kanban Systems series by concentrating on Principle #1: Eliminate Waste. “Muda” is Waste in Japanese. In the next part of the series, we’ll dive into Principle #2: Create Knowledge / Amplify Learning. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • Getting started with Blocks and namespaces - Enterprise Library 5.0 Tutorial Part 2

    This is my second post in this series. In the first blog post I explained how to install Enterprise Library 5.0 and provided links to various resources. Enterprise Library is divided into various blocks. Simply put, a block is a ready-made solution to a particular problem that is common across various applications. So instead of focusing on implementing these common concerns in each application, we can reuse these fully tested and extensible blocks to increase productivity, and extensibility too, as the blocks are made with good design principles and patterns. The major blocks of Enterprise Library 5.0 are as follows.

        Core infrastructure
        Functional Application Blocks: Caching, Data, Exception Handling, Logging, Security, Cryptography, Validation
        Wiring Application Blocks: Unity, Policy Injection/Interception

    Each block resides in its own assembly, and there are also some extra assemblies for common infrastructure. The assemblies are as follows.

        Microsoft.Practices.EnterpriseLibrary.Caching.Cryptography.dll
        Microsoft.Practices.EnterpriseLibrary.Caching.Database.dll
        Microsoft.Practices.EnterpriseLibrary.Caching.dll
        Microsoft.Practices.EnterpriseLibrary.Common.dll
        Microsoft.Practices.EnterpriseLibrary.Configuration.Design.HostAdapter.dll
        Microsoft.Practices.EnterpriseLibrary.Configuration.Design.HostAdapterV5.dll
        Microsoft.Practices.EnterpriseLibrary.Configuration.DesignTime.dll
        Microsoft.Practices.EnterpriseLibrary.Configuration.EnvironmentalOverrides.dll
        Microsoft.Practices.EnterpriseLibrary.Data.dll
        Microsoft.Practices.EnterpriseLibrary.Data.SqlCe.dll
        Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.dll
        Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.dll
        Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.dll
        Microsoft.Practices.EnterpriseLibrary.Logging.Database.dll
        Microsoft.Practices.EnterpriseLibrary.Logging.dll
        Microsoft.Practices.EnterpriseLibrary.PolicyInjection.dll
        Microsoft.Practices.EnterpriseLibrary.Security.Cache.CachingStore.dll
        Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll
        Microsoft.Practices.EnterpriseLibrary.Security.dll
        Microsoft.Practices.EnterpriseLibrary.Validation.dll
        Microsoft.Practices.EnterpriseLibrary.Validation.Integration.AspNet.dll
        Microsoft.Practices.EnterpriseLibrary.Validation.Integration.WCF.dll
        Microsoft.Practices.EnterpriseLibrary.Validation.Integration.WinForms.dll
        Microsoft.Practices.ServiceLocation.dll
        Microsoft.Practices.Unity.Configuration.dll
        Microsoft.Practices.Unity.dll
        Microsoft.Practices.Unity.Interception.dll

    Enterprise Library Configuration Tool

    In addition to these assemblies, you get the configuration tool “EntLibConfig-32.exe”. If you are targeting your application at the .NET 4.0 framework, you need to use “EntLibConfig.NET4.exe” instead. Optionally, you can install the Visual Studio 2008 and Visual Studio 2010 add-ins while installing Enterprise Library, so that you can invoke the Enterprise Library configuration tool from Visual Studio by right-clicking on the “app.config” or “web.config” file. I would suggest you download the documentation from CodePlex, which was released in May 2010; it consists of 3MB of information. You can also find the issue tracker there, to learn about the various issues/bugs people are currently discussing in Enterprise Library, and a discussion link that takes you to the community site where you can post your questions. In my next blog post, I will cover more on each block.

    Read the article

  • An XEvent a Day (18 of 31) – A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 2)

    - by Jonathan Kehayias
    In yesterday’s blog post A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 1), we looked at what happens when we back up a database in SQL Server. Today, we are going to use the information we captured to perform some analysis of the Backup information in an attempt to find ways to decrease the time it takes to back up a database. When I began reviewing the data from the Backup in yesterday’s post, I realized that I had made a mistake in the process and left...(read more)

    Read the article

  • An XEvent a Day (3 of 31) – Managing Event Sessions

    - by Jonathan Kehayias
    Yesterday’s post, Querying the Extended Events Metadata, showed how to discover the objects available for use in Extended Events. In today’s post, we’ll take a look at the DDL commands that are used to create and manage Event Sessions based on the objects available in the system. Like other objects inside of SQL Server, there are three DDL commands that are used with Extended Events: CREATE EVENT SESSION, ALTER EVENT SESSION, and DROP EVENT SESSION. The command names are self...(read more)

    Read the article
