Search Results

Search found 4708 results on 189 pages for '2005'.


  • SQL Server becomes slow after restart

    - by Tobi DM
    I already posted this one on Stack Overflow, but someone gave me the hint that I might have more luck on Server Fault. We use SQL Server 2005 on a Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB RAM it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower, and the execution time of statements (seen in the profiler) rises slowly. I cannot see that anything is going wrong on the server:
    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - Process Monitor does not show any strange apps or anything like that
    - The event log has no interesting messages
    - No open transactions or blocking to be seen
    - We do not use cursors in our app
    We already tried the following things, without effect:
    - Dropped the caches using DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS
    - Restarted the app server we are using
    - Restarted the SQL Server service
    Nothing helped except restarting the whole server. Any ideas?
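    One way to narrow this down the next time the slow state occurs is the SQL Server 2005 wait-statistics DMV. This is a sketch, not part of the original post; interpreting the top waits is left to the reader:

        -- Top cumulative waits while the server is in its slow state.
        SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        -- Optionally clear the counters after taking a baseline, so the next
        -- reading covers only the period between "fast" and "slow".
        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);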

    Read the article

  • Best practices for setting up a new Windows 2008 R2 server on AWS EC2

    - by Alex
    Can someone comment on what they would add to the following SOP in terms of best practices? This is being set up on AWS and then, after further testing, back in our datacenter.
    Standard Operating Procedure (SOP): Installation Part 2 - Installation of Software Components in Windows 2008 R2 (updated).
    Step 1: Log on to the host through Remote Desktop.
    Step 2: Open Server Manager - Server Roles - install Web Server (IIS) 7.5 with the IIS 6 compatibility features and management compatibility mode.
    Step 3: Open IE/Mozilla to download the software listed below, and save all installation files to a folder called "AWS Server Install Files" for future reference:
    - .NET Framework 2.0 (download from the internet)
    - Crystal Reports for .NET Framework 2.0 (x64) (download from the internet)
    - SQL Server 2005 (AWS image)
    Step 4: Once all packages are saved on the local drive, install them one by one.
    Step 5: Navigate to the Desktop folder to install the software listed below:
    - Microsoft ASP.NET 2.0 AJAX Extensions 1.0 (placed in Desktop\Softwares)
    - WebEx Recorder (placed in Desktop\Softwares)
    - WinRAR (placed in Desktop\Softwares)
    Step 6: Make sure all the software is working fine.
    Step 7: Inspect the server once entirely.
    Step 8: Log off & stop the instance.

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of MSSQL 2005 and am using CA XOsoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup worked great for several months until we recently restricted the RPC ports on both nodes of the master (5000-6000, using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for SQL Windows authentication, and NETLOGON Event ID 5719: "This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator." We also see group policies failing to update and cluster file shares going offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers were rebooted, but the problems persist. The domain controllers are not showing any errors, and running dcdiag and netdiag shows everything is fine. We have noticed that the XOsoft service ws_rep.exe is using a lot of handles (8-9k), about the same number as sqlserver. As soon as XOsoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOsoft, but I'm not sure that the problem is actually XOsoft; it may just be what brings the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. The files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including:
    - gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
    - bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
    - WinRAR 3.90
    - 7-Zip 4.65 (7za.exe)
    I've attempted to use the WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either; no matter the tool, it ends up butting up against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies:

    H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak
    7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
    Scanning
    Creating archive h:\dbatmp\zip\db-20100419_1228.7z
    Compressing db-20100419_1228.bak
    System error: Unspecified error

    Read the article

  • Best Asp.net Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies which spend a lot on advertising and offer very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please share their experience with past hosting companies and suggest a good ASP.NET host? Please consider the following requirements:
    1) ASP.NET 3.5 or 4.0 supported
    2) URL rewriter support
    3) GZip support (dynamic, through code)
    4) Initial setup support (if required)
    5) SQL Server 2005 or 2008
    6) Access to the SQL Server DB using SQL Management Studio
    7) An environment supporting backup and restore of the DB on my own, without involving the tech support team
    8) Full-text search support
    9) FTP support
    10) The ability to send at least 500 emails daily
    11) 99.9% uptime (no matter that all web hosts claim 99.9% uptime, it's not true)
    12) An alert email sent when they do any maintenance or during downtime
    13) A reasonable hosting price
    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?

    Read the article

  • Recover badly recorded DVDs

    - by CesarGon
    A few years ago (2003-2005) I bought a Sony USB external DVD recorder for my Dell laptop and I used it to burn a lot of discs. Much later, when I tried to use one of these discs, I realised that I could not read it. The disc behaved as if it was scratched or dirty. I tried on a couple of different DVD drives but got the same effect. Sadly, all the discs that I burnt with that recorder suffer from the same problem. Edit. When I read one of these discs with ImgBurn, I get lots of unrecovered read errors in multiple sectors, even at 1x speed. The sectors that cause read errors seem to be quite random; it's not always the same one. I have no idea what could be wrong with the discs. I doubt that they are scratched or dirty; it would be too much of a coincidence that all the discs that I burnt with that recorder got damaged at the same time. Also, they don't show any physical defects. Is there any way to diagnose what the problem is and, hopefully, recover the contents of the discs? Many thanks.

    Read the article

  • File sharing problem on Windows Server 2003 x64

    - by O. Askari
    Hi, we have a customer that hosts our .NET application server on Windows Server 2003 x64. The problem is that its file sharing gets totally disabled after about 10-30 minutes. The only way to re-enable it is to restart the server, but the same thing happens again after each restart. This server contains SQL Server 2005 Enterprise, .NET Framework 3.5 and our .NET-based application server. We hadn't had such a problem with any other customer before, so we asked them to prepare another server to deploy our application on. We installed our application server on the new machine and let SQL Server remain on the old one. Unfortunately the same problem happened on the new machine too. Now the old machine works only as the database server and the new one works as the application server, but both of them have the same file sharing problem. File sharing on the two machines doesn't get disabled at the same time, but it eventually happens to both. I wonder why this is happening and how to find the reason for it. Any suggestion or solution is much appreciated.

    Read the article

  • Windows 7 monitor instantly resumes from power off

    - by user167328
    It seems that some service or program is preventing my monitor from going into power save mode, but I cannot find a way to dig down to the cause. Does anybody know a way to find out which event may be causing the problem? Background information: if I leave/lock my computer, I want to immediately power off the monitor. Stupid Windows has no program for this, so I use a third-party utility: I have used Display.exe from Noel Danjou since 2005 and have had no problem with it on XP, on my notebook with W7 x64 Pro, and at the office with W7 x64 Enterprise. With my new home PC with W7 x64 Pro, the monitor goes off but instantly comes back on after a second. So I tested a power plan with the shortest timeout of 1 minute, but with that the display never goes off at all. (Setting a power plan timeout is not a useful option anyway, because then I cannot read a document or watch a movie without the monitor going off.) I could try trial-and-error methods, but this is cumbersome and not what I regard as a professional solution. I would prefer a way to analyze the Windows events to find out the reason for the display staying active. Add-on: in the meantime I have tried safe mode and also exited all tray programs, but the problem is still there. I also checked the BIOS for special power settings, but still no solution. (P.S. The mobo is an H87-G41 PC Mate with the latest BIOS rev.)

    Read the article

  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration: I have set up a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318 according to the instructions in Microsoft KB816514. The configuration is as follows (using the same conventions as the article):
    NetA        | SBS2003   | FVG318   | NetB
    10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24
    Both the Main Mode and Quick Mode Security Associations are successfully completed and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB.
    The Problem: Any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic).
    Possible Solutions:
    Incorrect configuration: I could have mistyped something somewhere, or KB816514 could be incorrect in some way. I have tried very hard to eliminate the first option: I have re-created the configuration several times and tried tweaking and adjusting all the settings I could, without success (most changes prevent the SA from being established).
    NAT/RRAS: I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005, "TCP/IP Packet Processing Paths", suggests that this is the case (see steps 2 and 4 of the Transit Traffic path). If this is the case, is there a way to exclude NetA-NetB traffic from NAT? Any thoughts, ideas, suggestions, and/or comments are appreciated.

    Read the article

  • What should the memory configuration be?

    - by AngryHacker
    We have a server (a ProLiant DL585 G1 by HP) which hosts Windows 2003 x64 R2 with SQL Server 2005 x64 and a host of other apps. It currently has 6 GB of RAM. We are very memory-constrained at the moment and it's clear that we need more memory. 8 GB will probably do the trick; however, we are not sure what memory configuration will give us the biggest performance benefit. Currently all 8 memory slots are filled (4 slots have 1 GB sticks, while the other 4 slots have 512 MB sticks). Should we throw the 512 MB sticks away and just replace them all with 1 GB sticks? If we decided to go with a higher memory configuration (e.g. 10 GB, 12 GB or 16 GB), is it advisable to keep all the sticks the same size, or does it not matter? I was once told that interleaved memory requires (for better performance) that sticks be installed in multiples (e.g. 2, 4, 8, 16, etc.). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.

    Read the article

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through the Database Engine Tuning Advisor to apply indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.
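    A sketch, not from the original post, of one way to take that snapshot with the SQL Server 2005 DMVs; run from a scheduled SQL Agent job into a logging table, it would capture the spikes while nobody is around:

        -- What is running right now, highest CPU first (user sessions only).
        SELECT r.session_id, r.status, r.cpu_time, r.total_elapsed_time,
               r.wait_type, t.text AS running_sql
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id > 50   -- session ids up to 50 are typically system sessions
        ORDER BY r.cpu_time DESC;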

    Read the article

  • NDR for meeting requests

    - by Adam
    We've got a mailbox for each department (e.g. [email protected] and [email protected]) and everyone in that department has access to it; access is granted using the Exchange Management Console. If I send a calendar invite to [email protected], I get an undeliverable report: "Delivery has failed to these recipients or groups: User_A, User_B, User_C. The e-mail address you entered couldn't be found. Check the address and try resending the message. If the problem continues, please contact your helpdesk." (The same message is repeated for each of the three users.) The users are no longer in AD or Exchange, but we cannot find any mention of them within any delegates or permissions anywhere. We only started to get this problem AFTER we upgraded our DCs from Windows Server 2003 to Windows Server 2008, and our Exchange server from Windows Server 2003 with Exchange 2003 to Windows Server 2008 with Exchange 2010.

    Read the article

  • Slow DB Performance. Seems to be memory related.

    - by David
    I am seeing a poorly performing web app with a SQL 2005 back end. The DB is on a W2K3 machine with 4 GB RAM. When I run perfmon on it I see the following: page life expectancy is low, consistently under 300, while the buffer cache hit ratio is always 99%+. The target server memory is always 1618304 and the total server memory is always a number just below that. So it seems that it isn't grabbing enough of the available memory. I have AWE enabled, with the Lock Pages in Memory right granted to the SQL service account, and have set a maximum of 2.25 GB, but it doesn't go near that. When I restart the SQL service, the page life expectancy goes much higher (1000+), and the total server memory starts at 0 and slowly works its way back up to the previous limit. Then it hits the limit and the page life expectancy drops massively back under 300. So I'm guessing there is something limiting the amount of memory. Any ideas on what that would be and how I can fix it?
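    For reference, a minimal sketch of how AWE and the memory cap described above are typically set on a 32-bit SQL Server 2005 instance (the 2304 MB value is the post's 2.25 GB; this assumes the service account already holds the Lock Pages in Memory privilege):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;           -- takes effect after a service restart
        RECONFIGURE;
        EXEC sp_configure 'max server memory', 2304;  -- in MB
        RECONFIGURE;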

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS/ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the number of active connections is usually higher than the number of inactive connections; it used to be the other way around. Basically, on the old load balancer the connections looked something like:

    -> RemoteAddress:Port  Forward Weight ActiveConn InActConn
    -> 10.84.32.21:0       Masq    1      12         252
    -> 10.84.32.22:0       Masq    1      18         368

    However, since migrating to the new load balancer it looks more like:

    -> RemoteAddress:Port  Forward Weight ActiveConn InActConn
    -> 10.84.32.21:0       Masq    1      313        141
    -> 10.84.32.22:0       Masq    1      276        183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3. New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning system changed). Is it because the old load balancer was running a kernel from 2005 and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance! Additional info: I'm using LVS in masquerading mode, and the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same number of inactive connections as shown in ipvsadm.

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
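    A quick check (a sketch, not from the original question) of how much of the server's cumulative wait time the backup-related wait types actually account for:

        SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD')
        ORDER BY wait_time_ms DESC;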

    Read the article

  • What does the 'Burst Rate' stat mean in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines burst rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions: if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks

    Read the article

  • Moving Images from Database to File System

    - by msarchet
    So currently in our system we have been storing image files in the database (SQL Express 2005). Unfortunately it wasn't foreseen that this would reach the maximum database size allowed by the SQL Express license. So I have proposed a plan of storing the images in the file system and only indexing where each file is in the database. The plan is to save the root path in our OptionsTable as something like ImagesRoot, and then only save the actual ImageID in the table, which is basically an FK from the PK of the record with the image. I have determined that it would be best to then split this down into sub-directories inside of ImagesRoot based on every 1000 images, so basically (ImageID / 1000)\(ImageID % 1000) (e.g. if ImageID is 1999 it would be in %ImagesRoot%\1\999). I'm looking for any potential pitfalls of this system and anything that could be improved, as I am already receiving some resistance from the owner of the company, who wants everything to be in databases. Along those lines, I would also take reasons why it should all be in databases. I should mention we already have automated backups in place that run for all of our customers' databases and for any files generated by our program that are required to be saved over a period of time. These are optional, but if someone isn't using our system it is expected that they are using their own, or data loss isn't our problem (it is if our system fails and they are using it!). Thanks
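    A minimal sketch of the proposed bucketing math, runnable on SQL Express 2005 (the variable and column names are illustrative, not from the actual schema):

        DECLARE @ImageID int;
        SET @ImageID = 1999;
        -- 1999 -> '1\999', i.e. at most 1000 files per sub-directory.
        SELECT CAST(@ImageID / 1000 AS varchar(10)) + '\'
             + CAST(@ImageID % 1000 AS varchar(10)) AS RelativePath;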

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines burst rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions: if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website: "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it. Problem: after installing a low-cost eSATA board, my SQL Server is intermittently unresponsive (seemingly when there is a lot of I/O to the eSATA drive). Questions: 1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place? 2) Whether or not (1 == true): what is a decent, low-cost way to add 1-3 TB of storage to SQL for non-critical SQL DBs? Detail: Our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (i.e. not mission critical; data loss is OK). As a band-aid, I threw a $20 eSATA PCI board in the Dell 1950 server and attached an external 2 TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even Remote Desktop, now experience network "pauses" that they never did before (with both SQL client apps and Remote Desktop throwing "networking problem" errors). This SQL Server has lots of memory and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years. The server is:
    - Dell 1950
    - Win2003 x64
    - 14 GB RAM
    - PERC controller, 2 mirrored HDs internal
    - Dell SAN over gigabit Ethernet, dual homed
    - 2 PCIx slots (1 used by the NIC for the SAN, 1 now in use for the eSATA board)
    Thank you for suggestions!

    Read the article

  • Unnecessary Error Message Being Displayed

    - by ThatMacLad
    I've set up a form to update my blog and it was working fine up until about this morning. It keeps turning up an "Invalid entry ID" error on the edit post page when I click the update button, despite the fact that it updates the homepage. All help is seriously appreciated.

    <html>
    <head>
    <title>Ultan's Blog | New Post</title>
    <link rel="stylesheet" href="css/editpost.css" type="text/css" />
    </head>
    <body>
    <div class="new-form">
    <div class="header"> </div>
    <div class="form-bg">
    <?php
    mysql_connect('localhost', 'root', 'root');
    mysql_select_db('tmlblog');

    if (isset($_POST['update'])) {
        $id = htmlspecialchars(strip_tags($_POST['id']));
        $month = htmlspecialchars(strip_tags($_POST['month']));
        $date = htmlspecialchars(strip_tags($_POST['date']));
        $year = htmlspecialchars(strip_tags($_POST['year']));
        $time = htmlspecialchars(strip_tags($_POST['time']));
        $entry = $_POST['entry'];
        $title = htmlspecialchars(strip_tags($_POST['title']));
        if (isset($_POST['password'])) $password = htmlspecialchars(strip_tags($_POST['password']));
        else $password = "";
        $entry = nl2br($entry);
        if (!get_magic_quotes_gpc()) {
            $title = addslashes($title);
            $entry = addslashes($entry);
        }
        $timestamp = strtotime($month . " " . $date . " " . $year . " " . $time);
        $result = mysql_query("UPDATE php_blog SET timestamp='$timestamp', title='$title', entry='$entry', password='$password' WHERE id='$id' LIMIT 1")
            or print("Can't update entry.<br />" . mysql_error());
        header("Location: post.php?id=" . $id);
    }

    if (isset($_POST['delete'])) {
        $id = (int)$_POST['id'];
        $result = mysql_query("DELETE FROM php_blog WHERE id='$id'")
            or print("Can't delete entry.<br />" . mysql_error());
        if ($result != false) {
            print "The entry has been successfully deleted from the database.";
            exit;
        }
    }

    if (!isset($_GET['id']) || empty($_GET['id']) || !is_numeric($_GET['id'])) {
        die("Invalid entry ID.");
    } else {
        $id = (int)$_GET['id'];
    }

    $result = mysql_query("SELECT * FROM php_blog WHERE id='$id'")
        or print("Can't select entry.<br />" . $sql . "<br />" . mysql_error());

    while ($row = mysql_fetch_array($result)) {
        $old_timestamp = $row['timestamp'];
        $old_title = stripslashes($row['title']);
        $old_entry = stripslashes($row['entry']);
        $old_password = $row['password'];
        $old_title = str_replace('"', '\'', $old_title);
        $old_entry = str_replace('<br />', '', $old_entry);
        $old_month = date("F", $old_timestamp);
        $old_date = date("d", $old_timestamp);
        $old_year = date("Y", $old_timestamp);
        $old_time = date("H:i", $old_timestamp);
    }
    ?>
    <form method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>">
    <p><input type="hidden" name="id" value="<?php echo $id; ?>" />
    <strong><label for="month">Date (month, day, year):</label></strong>
    <select name="month" id="month">
    <option value="<?php echo $old_month; ?>"><?php echo $old_month; ?></option>
    <option value="January">January</option>
    <option value="February">February</option>
    <option value="March">March</option>
    <option value="April">April</option>
    <option value="May">May</option>
    <option value="June">June</option>
    <option value="July">July</option>
    <option value="August">August</option>
    <option value="September">September</option>
    <option value="October">October</option>
    <option value="November">November</option>
    <option value="December">December</option>
    </select>
    <input type="text" name="date" id="date" size="2" value="<?php echo $old_date; ?>" />
    <select name="year" id="year">
    <option value="<?php echo $old_year; ?>"><?php echo $old_year; ?></option>
    <option value="2004">2004</option>
    <option value="2005">2005</option>
    <option value="2006">2006</option>
    <option value="2007">2007</option>
    <option value="2008">2008</option>
    <option value="2009">2009</option>
    <option value="2010">2010</option>
    </select>
    <strong><label for="time">Time:</label></strong>
    <input type="text" name="time" id="time" size="5" value="<?php echo $old_time; ?>" /></p>
    <p><strong><label for="title">Title:</label></strong>
    <input type="text" name="title" id="title" value="<?php echo $old_title; ?>" size="40" /></p>
    <p><strong><label for="password">Password protect?</label></strong>
    <input type="checkbox" name="password" id="password" value="1"<?php if ($old_password == 1) echo " checked=\"checked\""; ?> /></p>
    <p><textarea cols="80" rows="20" name="entry" id="entry"><?php echo $old_entry; ?></textarea></p>
    <p><input type="submit" name="update" id="update" value="Update"></p>
    </form>
    <p><strong>Be absolutely sure that this is the post that you wish to remove from the blog!</strong><br /></p>
    <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
    <input type="hidden" name="id" id="id" value="<?php echo $id; ?>" />
    <input type="submit" name="delete" id="delete" value="Delete" />
    </form>
    </div>
    </div>
    </div>
    <div class="bottom"></div>
    </body>
    </html>
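    A hedged observation, not part of the original post: the symptom (the UPDATE succeeds but the page then shows "Invalid entry ID") is consistent with the script falling through to the $_GET['id'] check after handling the POST, because header() cannot redirect once the HTML above the PHP block has already been sent. A minimal sketch of the usual fix:

        <?php
        // Issue the redirect before any HTML output, and stop the script afterwards;
        // otherwise execution continues into the $_GET['id'] validation below it.
        header("Location: post.php?id=" . $id);
        exit;
        ?>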

    Read the article

  • PHP 'Years' array

    - by J M 4
    I am trying to create an array of years to use in the DOB year piece of a form I am building. Currently, I know there are two ways to handle the issue, but I don't really care for either:
    1) Range: I know I can create a year array using the following:

    <?php $year = range(1910, date("Y")); $_SESSION['years_arr'] = $year; ?>

    The problem with point 1 is twofold: a) my function call shows the first year as 'selected' instead of the "Year" label I have as option value 0, and b) I want the years reversed, so 2010 is first in the list and shown decreasing. My function call is:
    PHP:

    <?php
    function showOptionsDrop($array, $active, $echo = true) {
        $string = '';
        foreach ($array as $k => $v) {
            $s = ($active == $k) ? ' selected="selected"' : '';
            $string .= '<option value="' . $k . '"' . $s . '>' . $v . '</option>' . "\n";
        }
        if ($echo) echo $string;
        else return $string;
    }
    ?>

    HTML:

    <table>
    <tr>
    <td>State:</td>
    <td><select name="F1State"><option value="0">Choose a year</option><?php showOptionsDrop($_SESSION['years_arr'], null, true); ?></select></td>
    </tr>
    </table>

    2) Long array: I know I can physically create an array with the years listed out, but this takes up a lot of space and time whenever I want to go back and modify it. For example:

    $years = array(
        '1900'=>"1900", '1901'=>"1901", '1902'=>"1902", '1903'=>"1903", '1904'=>"1904", '1905'=>"1905", '1906'=>"1906", '1907'=>"1907", '1908'=>"1908", '1909'=>"1909",
        '1910'=>"1910", '1911'=>"1911", '1912'=>"1912", '1913'=>"1913", '1914'=>"1914", '1915'=>"1915", '1916'=>"1916", '1917'=>"1917", '1918'=>"1918", '1919'=>"1919",
        '1920'=>"1920", '1921'=>"1921", '1922'=>"1922", '1923'=>"1923", '1924'=>"1924", '1925'=>"1925", '1926'=>"1926", '1927'=>"1927", '1928'=>"1928", '1929'=>"1929",
        '1930'=>"1930", '1931'=>"1931", '1932'=>"1932", '1933'=>"1933", '1934'=>"1934", '1935'=>"1935", '1936'=>"1936", '1937'=>"1937", '1938'=>"1938", '1939'=>"1939",
        '1940'=>"1940", '1941'=>"1941", '1942'=>"1942", '1943'=>"1943", '1944'=>"1944", '1945'=>"1945", '1946'=>"1946", '1947'=>"1947", '1948'=>"1948", '1949'=>"1949",
        '1950'=>"1950", '1951'=>"1951", '1952'=>"1952", '1953'=>"1953", '1954'=>"1954", '1955'=>"1955", '1956'=>"1956", '1957'=>"1957", '1958'=>"1958", '1959'=>"1959",
        '1960'=>"1960", '1961'=>"1961", '1962'=>"1962", '1963'=>"1963", '1964'=>"1964", '1965'=>"1965", '1966'=>"1966", '1967'=>"1967", '1968'=>"1968", '1969'=>"1969",
        '1970'=>"1970", '1971'=>"1971", '1972'=>"1972", '1973'=>"1973", '1974'=>"1974", '1975'=>"1975", '1976'=>"1976", '1977'=>"1977", '1978'=>"1978", '1979'=>"1979",
        '1980'=>"1980", '1981'=>"1981", '1982'=>"1982", '1983'=>"1983", '1984'=>"1984", '1985'=>"1985", '1986'=>"1986", '1987'=>"1987", '1988'=>"1988", '1989'=>"1989",
        '1990'=>"1990", '1991'=>"1991", '1992'=>"1992", '1993'=>"1993", '1994'=>"1994", '1995'=>"1995", '1996'=>"1996", '1997'=>"1997", '1998'=>"1998", '1999'=>"1999",
        '2000'=>"2000", '2001'=>"2001", '2002'=>"2002", '2003'=>"2003", '2004'=>"2004", '2005'=>"2005", '2006'=>"2006", '2007'=>"2007", '2008'=>"2008", '2009'=>"2009",
        '2010'=>"2010");
    $_SESSION['years_arr'] = $years;

    Does anybody have a recommended approach, or just a simple way to modify my existing code? Thank you!
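    A sketch of a third option, not in the original question (it assumes PHP 5's array_combine). Keying the array by the year itself fixes both complaints: the list comes out newest-first, and a null $active no longer loosely matches the 0 key that range() produces, so the "Choose a year" option stays selected:

        <?php
        $years = range(date("Y"), 1910);                        // descending: 2010 ... 1910
        $_SESSION['years_arr'] = array_combine($years, $years); // keys equal values
        ?>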

    Read the article

  • What's New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier. WebForms The WebForms engine is the area that has received most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and of ViewState on WebForm pages. Take Control of Your ClientIDs Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page. <%@Page Language="C#"      MasterPageFile="~/Site.Master"     CodeBehind="WebForm2.aspx.cs"     Inherits="WebApplication1.WebForm2"  %> <asp:Content ID="content"  ContentPlaceHolderID="content"               runat="server"               ClientIDMode="Static" >       <asp:TextBox runat="server" ID="txtName" /> </asp:Content> The four available ClientIDMode values are: AutoID This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place. <input name="ctl00$content$txtName" type="text"        id="ctl00_content_txtName" /> This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks. Static This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids: <input name="ctl00$content$txtName"         type="text" id="txtName" /> Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict. 
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix. Predictable The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this: <input name="ctl00$content$txtName" type="text"         id="content_txtName" /> which gives an id that based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName: <input name="ctl00$content$txtName" type="text"         id="ctl00_content_txtName" /> The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with the NamingContainers id rather than the whole ctl000_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. 
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example: <asp:GridView runat="server" ID="gvItems"              AutoGenerateColumns="false"             ClientIDMode="Static"              ClientIDRowSuffix="Id">     <Columns>     <asp:TemplateField>         <ItemTemplate>             <asp:Label runat="server" id="txtName"                        Text='<%# Eval("Name") %>'                   ClientIDMode="Predictable"/>         </ItemTemplate>     </asp:TemplateField>     <asp:TemplateField>         <ItemTemplate>         <asp:Label runat="server" id="txtId"                     Text='<%# Eval("Id") %>'                     ClientIDMode="Predictable" />         </ItemTemplate>     </asp:TemplateField>     </Columns>  </asp:GridView> generates client Ids inside of a column in the master page described earlier: <td>     <span id="txtName_0">Rick</span> </td> where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls: <span id="ctrl0_txtName_0">Rick</span> I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls using <%= %>, ids for the ListView might not be a bad idea anyway. Inherit The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx. ClientID Enhancements Summary The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section) which applies the setting down to the entire page which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls) - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls. ViewStateMode Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. 
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids getting around ViewState problems. In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on particular controls: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"        ClientIDMode="Static"                ViewStateMode="Disabled"     EnableViewState="true"  %> <!-- this control has viewstate  --> <asp:TextBox runat="server" ID="txtName"  ViewStateMode="Enabled" />       <!-- this control has no viewstate - it inherits  from parent container --> <asp:TextBox runat="server" ID="txtAddress" /> Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on. Inline HTML Encoding HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. 
The encoding expression syntax looks like this: <%: "<script type='text/javascript'>" +     "alert('Really?');</script>" %> which produces properly encoded HTML: &lt;script type=&#39;text/javascript&#39; &gt;alert(&#39;Really?&#39;);&lt;/script&gt; Effectively this is a shortcut to: <%= HttpUtility.HtmlEncode( "<script type='text/javascript'>" + "alert('Really?');</script>") %> Of course the <%: %> syntax can also evaluate expressions just like <%= %> so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values: <%: Model.Address.Street %> This snippet shows displaying data from your application’s data store or more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine including ASP.NET MVC, benefits from this feature. ScriptManager Enhancements The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of ASP.NET AJAX script included if you are using the library. There’s also a new EnableCdn property that forces any script that has a new WebResource attribute CdnPath property set to a CDN supplied URL. If the script has this Attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath. [assembly: WebResource(    "Westwind.Web.Resources.ww.jquery.js",    "application/x-javascript",    CdnPath =  "http://mysite.com/scripts/ww.jquery.min.js")] Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which make it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config: <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <Scripts>         <asp:ScriptReference          Name="Westwind.Web.Resources.ww.jquery.js"         Assembly="Westwind.Web" />     </Scripts>        </asp:ScriptManager> The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with. 
Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js): <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <CompositeScript          Path="~/scripts/allscripts.js">         <Scripts>             <asp:ScriptReference                    Path="~/scripts/jquery.js" />             <asp:ScriptReference                    Path="~/scripts/ww.jquery.js" />             <asp:ScriptReference            Name="Westwind.Web.Resources.editors.js"                 Assembly="Westwind.Web" />         </Scripts>     </CompositeScript> </asp:ScriptManager> When you render this into HTML, you’ll see a single script reference in the page: <script src="scripts/allscripts.debug.js"          type="text/javascript"></script> All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty but the file has to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally. The script manager also allows you to override native ASP.NET AJAX scripts now as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy. Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers. MetaDescription, MetaKeywords Page Properties There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. 
You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself: Page.MetaKeywords = "ASP.NET,4.0,New Features"; Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0"; Note, that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"      ClientIDMode="Static"                MetaDescription="Article that discusses what's                      new in ASP.NET 4.0"     MetaKeywords="ASP.NET,4.0,New Features" %> Nothing earth shattering but quite convenient. Visual Studio 2010 Enhancements for Web Development For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific but they are useful for Web developers in general. Text Editors Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I’ve noticed that occasionally the code editor and especially the HTML and JavaScript editors will lose the ability to use various navigation keys like arrows, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented so I suspect this will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors. Multi-Targeting Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2, 3.0 and 3.5 applications which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions. Multi-Monitor Support One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and out onto the desktop including onto another monitor easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment. IntelliSense Snippets for HTML and JavaScript Editors The HTML and JavaScript editors now finally support IntelliSense scripts to create macro-based template expansions that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text. 
The XML syntax for these snippets is straightforward, and it's pretty easy to create custom snippets manually. You can create them in XML and store them in your user snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that simplify the process. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see the ones you've already created.

Code snippets are some of the biggest time savers, and HTML editing more than anything involves lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven't done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code you write, and convert it into snippets. I've been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript.

jQuery Integration Is Now Native

jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology driving higher-level script functionality in the various ASP.NET Web projects Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are needed for a rich IntelliSense experience with jQuery.

Summary

ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental; they don't compromise backwards compatibility, and they allow developers to ease into the new features one at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine-tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on that feedback. If you haven't gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010 yet, there's no reason not to give them a shot now - the ASP.NET 4.0 platform is solid, and Visual Studio 2010 works very well for a brand new release. Check it out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • What is the recommended way of parsing an XML feed with multiple namespaces with ActionScript 3.0?

    - by dafko
I have seen the following methods used in several online examples, but I haven't found any documentation on the recommended way of parsing an XML feed.

Method 1:

protected function xmlResponseHandler(event:ResultEvent):void
{
    var atom:Namespace = new Namespace("http://www.w3.org/2005/Atom");
    var microsoftData:Namespace = new Namespace("http://schemas.microsoft.com/ado/2007/08/dataservices");
    var microsoftMetadata:Namespace = new Namespace("http://schemas.microsoft.com/ado/2007/08/dataservices/metadata");

    var ac:ArrayCollection = new ArrayCollection();
    var keyValuePairs:KeyValuePair;
    var propertyList:XMLList = (event.result as XML)..atom::entry.atom::content.microsoftMetadata::properties;

    for each (var properties:XML in propertyList)
    {
        keyValuePairs = new KeyValuePair(properties.microsoftData::FieldLocation,
                                         properties.microsoftData::Locationid);
        ac.addItem(keyValuePairs);
    }
    cb.dataProvider = ac;
}

Method 2:

protected function xmlResponseHandler(event:ResultEvent):void
{
    namespace atom = "http://www.w3.org/2005/Atom";
    namespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
    namespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
    use namespace d;
    use namespace m;
    use namespace atom;

    var ac:ArrayCollection = new ArrayCollection();
    var keyValuePairs:KeyValuePair;
    var propertyList:XMLList = (event.result as XML)..entry.content.properties;

    for each (var properties:XML in propertyList)
    {
        keyValuePairs = new KeyValuePair(properties.FieldLocation, properties.Locationid);
        ac.addItem(keyValuePairs);
    }
    cb.dataProvider = ac;
}

Sample XML feed:

<?xml version="1.0" encoding="iso-8859-1" standalone="yes"?>
<feed xml:base="http://www.test.com/Test/my.svc/"
      xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
      xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
      xmlns="http://www.w3.org/2005/Atom">
  <title type="text">Test_Locations</title>
  <id>http://www.test.com/test/my.svc/Test_Locations</id>
  <updated>2010-04-27T20:41:23Z</updated>
  <link rel="self" title="Test_Locations" href="Test_Locations" />
  <entry>
    <id>1</id>
    <title type="text"></title>
    <updated>2010-04-27T20:41:23Z</updated>
    <author>
      <name />
    </author>
    <link rel="edit" title="Test_Locations" href="http://www.test.com/id=1" />
    <category term="MySQLModel.Test_Locations" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <content type="application/xml">
      <m:properties>
        <d:FieldLocation>Test Location</d:FieldLocation>
        <d:Locationid>test0129</d:Locationid>
      </m:properties>
    </content>
  </entry>
  <entry>
    <id>2</id>
    <title type="text"></title>
    <updated>2010-04-27T20:41:23Z</updated>
    <author>
      <name />
    </author>
    <link rel="edit" title="Test_Locations" href="http://www.test.com/id=2" />
    <category term="MySQLModel.Test_Locations" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <content type="application/xml">
      <m:properties>
        <d:FieldLocation>Yet Another Test Location</d:FieldLocation>
        <d:Locationid>test25</d:Locationid>
      </m:properties>
    </content>
  </entry>
</feed>

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

// Add the Data Flow Task
package.Executables.Add("STOCK:PipelineTask");

// Get the task host wrapper, and the Data Flow task
TaskHost taskHost = package.Executables[0] as TaskHost;
MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

// Add OLE-DB source component - ** This is where we need the creation name **
// (IDTSComponentMetaData100 is the SSIS 2008 interface; the 2005 equivalent is IDTSComponentMetaData90)
IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
componentSource.Name = "OLEDBSource";
componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

So, as you can see, the creation name for an OLE DB Source is DTSAdapter.OLEDBSource.2.

CreationName Reference

ADO NET Destination: Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
ADO NET Source: Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
Aggregate: DTSTransform.Aggregate.2
Audit: DTSTransform.Lineage.2
Cache Transform: DTSTransform.Cache.1
Character Map: DTSTransform.CharacterMap.2
Checksum: Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
Conditional Split: DTSTransform.ConditionalSplit.2
Copy Column: DTSTransform.CopyMap.2
Data Conversion: DTSTransform.DataConvert.2
Data Mining Model Training: MSMDPP.PXPipelineProcessDM.2
Data Mining Query: MSMDPP.PXPipelineDMQuery.2
DataReader Destination: Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
Derived Column: DTSTransform.DerivedColumn.2
Dimension Processing: MSMDPP.PXPipelineProcessDimension.2
Excel Destination: DTSAdapter.ExcelDestination.2
Excel Source: DTSAdapter.ExcelSource.2
Export Column: TxFileExtractor.Extractor.2
Flat File Destination: DTSAdapter.FlatFileDestination.2
Flat File Source: DTSAdapter.FlatFileSource.2
Fuzzy Grouping: DTSTransform.GroupDups.2
Fuzzy Lookup: DTSTransform.BestMatch.2
Import Column: TxFileInserter.Inserter.2
Lookup: DTSTransform.Lookup.2
Merge: DTSTransform.Merge.2
Merge Join: DTSTransform.MergeJoin.2
Multicast: DTSTransform.Multicast.2
OLE DB Command: DTSTransform.OLEDBCommand.2
OLE DB Destination: DTSAdapter.OLEDBDestination.2
OLE DB Source: DTSAdapter.OLEDBSource.2
Partition Processing: MSMDPP.PXPipelineProcessPartition.2
Percentage Sampling: DTSTransform.PctSampling.2
Performance Counters Source: DataCollectorTransform.TxPerfCounters.1
Pivot: DTSTransform.Pivot.2
Raw File Destination: DTSAdapter.RawDestination.2
Raw File Source: DTSAdapter.RawSource.2
Recordset Destination: DTSAdapter.RecordsetDestination.2
RegexClean: Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
Row Count: DTSTransform.RowCount.2
Row Count Plus: Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
Row Number: Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
Row Sampling: DTSTransform.RowSampling.2
Script Component: Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
Slowly Changing Dimension: DTSTransform.SCD.2
Sort: DTSTransform.Sort.2
SQL Server Compact Destination: Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
SQL Server Destination: DTSAdapter.SQLServerDestination.2
Term Extraction: DTSTransform.TermExtraction.2
Term Lookup: DTSTransform.TermLookup.2
Trash Destination: Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
TxTopQueries: DataCollectorTransform.TxTopQueries.1
Union All: DTSTransform.UnionAll.2
Unpivot: DTSTransform.UnPivot.2
XML Source: Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

Here is a simple console program that can be used to enumerate the pipeline components installed on your machine; it dumps out a list of all components like the one above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

using System;
using System.Diagnostics;
using Microsoft.SqlServer.Dts.Runtime;

public class Program
{
    static void Main(string[] args)
    {
        Application application = new Application();
        PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
        foreach (PipelineComponentInfo componentInfo in componentInfos)
        {
            Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
        }
        Console.Read();
    }
}

    Read the article

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
The job of a SQL consultant is as interesting as always. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month, I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization and Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book Professional SQL Server 2008 Internals and Troubleshooting. After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week, I delivered the Extended Events session twice, and attendees really liked it. They think Extended Events is one of the most powerful tools available.

Extended Events can do many things. I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing out long theory, I am going to present a very quick script for Extended Events. This event session captures all queries running longer than the configured threshold (1000 ms here) from the moment the event session is started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method for collecting the information needed for troubleshooting. There are many targets where you can store the captured information; these include the XML file target, which I really like. In the following event session, we write the details of the event to two locations: 1) the ring buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do.

-- Extended Event session for finding long running queries
IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'LongRunningQuery')
    DROP EVENT SESSION LongRunningQuery ON SERVER
GO

-- Create the event session
CREATE EVENT SESSION LongRunningQuery ON SERVER
-- Add the event to capture
ADD EVENT sqlserver.sql_statement_completed
(
    -- Add actions - extra event properties to collect
    ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
    -- Predicate - duration greater than 1000 milliseconds
    WHERE sqlserver.sql_statement_completed.duration > 1000
)
-- Add a target for capturing the data - XML file
ADD TARGET package0.asynchronous_file_target (
    SET filename = 'c:\LongRunningQuery.xet',
        metadatafile = 'c:\LongRunningQuery.xem'),
-- Add a target for capturing the data - ring buffer
ADD TARGET package0.ring_buffer (SET max_memory = 4096)
WITH (max_dispatch_latency = 1 seconds)
GO

-- Start the event session
ALTER EVENT SESSION LongRunningQuery ON SERVER STATE = START
GO

-- Run a long query (longer than 1000 ms)
SELECT * FROM AdventureWorks.Sales.SalesOrderDetail
ORDER BY UnitPriceDiscount DESC
GO

-- Stop the event session
ALTER EVENT SESSION LongRunningQuery ON SERVER STATE = STOP
GO

-- Read the data from the ring buffer
SELECT CAST(dt.target_data AS XML) AS xmlLockData
FROM sys.dm_xe_session_targets dt
JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address
JOIN sys.server_event_sessions ss ON ds.Name = ss.Name
WHERE dt.target_name = 'ring_buffer'
AND ds.Name = 'LongRunningQuery'
GO

-- Read the data from the XML file
SELECT
    event_data_XML.value('(event/data[1])[1]', 'VARCHAR(100)') AS Database_ID,
    event_data_XML.value('(event/data[2])[1]', 'INT') AS OBJECT_ID,
    event_data_XML.value('(event/data[3])[1]', 'INT') AS object_type,
    event_data_XML.value('(event/data[4])[1]', 'INT') AS cpu,
    event_data_XML.value('(event/data[5])[1]', 'INT') AS duration,
    event_data_XML.value('(event/data[6])[1]', 'INT') AS reads,
    event_data_XML.value('(event/data[7])[1]', 'INT') AS writes,
    event_data_XML.value('(event/action[1])[1]', 'VARCHAR(512)') AS sql_text,
    event_data_XML.value('(event/action[2])[1]', 'VARCHAR(512)') AS tsql_stack,
    CAST(event_data_XML.value('(event/action[2])[1]', 'VARCHAR(512)') AS XML).value('(frame/@handle)[1]', 'VARCHAR(50)') AS handle
FROM
(
    SELECT CAST(event_data AS XML) event_data_XML, *
    FROM sys.fn_xe_file_target_read_file('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)
) T
GO

-- Clean up: drop the event session
DROP EVENT SESSION LongRunningQuery ON SERVER
GO

Just run the script above; afterwards you will find the following result set, which contains the query that ran longer than 1000 ms. In our example I used the XML file target, which does not reset when the SQL service or the computer restarts (if you read the ring buffer through the DMV instead, the data is lost when the SQL service restarts). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated with this feature, so I'm planning to acquire more knowledge about it and explore its other uses.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology Tagged: SQL Extended Events

    Read the article
