Search Results

Search found 8943 results on 358 pages for 'audit tables'.

Page 113 of 358

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out of memory errors. An abridged description of the problem domain is as follows: The application takes in a “dataset” that consists of numerous text files containing related data. An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is highly variable, from 2 to 256+ columns. The original application was written to allow users to load a dataset and map certain columns of each of the files, which basically indicates key information on the files to show how they are related, as well as identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem: given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following: perform all the validation whilst the data was in memory; relate it using an object model; start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then apply the Id of the newly written row to all related data; once all related data has been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows often run into the tens of thousands. Once all the data is written without error (which can take several minutes for large datasets), commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seeing datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line and am not exactly decided on how to handle the import and transactions. Potential improvements: I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database, though it would certainly increase the storage required, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters)?
    Use of staging tables to get data into the database, only using the transaction to copy data from the staging area to the actual destination tables. Questions: For systems like this that import large quantities of data, how do you go about keeping transactions small? I’ve kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab delimited data section is read into a DataTable to be viewed in a grid. I don’t need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
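    A minimal sketch of the staging-table and GUID idea raised above (the Staging/dbo table and column names here are hypothetical, not the poster's real schema): client-generated uniqueidentifier keys let parent and child rows arrive already related, the bulk load happens outside any transaction, and the transaction itself shrinks to a short, set-based copy from staging into the destination EAV tables.

        -- Hypothetical staging layout: the GUID key is generated by the client
        -- (Guid.NewGuid) before the load, so no identity round-trip is needed.
        CREATE TABLE Staging.EntityRow (
            RowGuid      uniqueidentifier NOT NULL PRIMARY KEY,
            DatasetId    int              NOT NULL,
            SourceFile   nvarchar(260)    NOT NULL
        );

        CREATE TABLE Staging.EntityValue (
            RowGuid      uniqueidentifier NOT NULL,   -- relates back to Staging.EntityRow
            AttributeId  int              NOT NULL,
            AttributeVal nvarchar(4000)   NULL
        );

        -- Bulk-load both staging tables outside the transaction (SqlBulkCopy,
        -- BULK INSERT, etc.), then keep the transaction to a short set-based copy.
        BEGIN TRAN;

        INSERT INTO dbo.EntityRow (RowGuid, DatasetId, SourceFile)
        SELECT RowGuid, DatasetId, SourceFile FROM Staging.EntityRow;

        INSERT INTO dbo.EntityValue (RowGuid, AttributeId, AttributeVal)
        SELECT RowGuid, AttributeId, AttributeVal FROM Staging.EntityValue;

        COMMIT;

        -- TRUNCATE is minimally logged, so clearing the staging area is cheap.
        TRUNCATE TABLE Staging.EntityValue;
        TRUNCATE TABLE Staging.EntityRow;

    The trade-off mentioned in the post still applies: a uniqueidentifier key is 16 bytes against 4 for an int identity, which is felt most in the EAV value table.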

    Read the article

  • SQL SERVER – Not Possible – Delete From Multiple Table – Update Multiple Table in Single Statement

    - by pinaldave
    There are two questions which I get every single day multiple times. In my gmail, I have created a standard canned reply for them. Let us see the questions here. I want to delete from multiple tables in a single statement – how will I do it? I want to update multiple tables in a single statement – how will I do it? The answer is – no, you cannot and you should not. SQL Server does not support deleting or updating two tables in a single statement. If you want to delete or update two different tables, you will have to write two different delete or update statements for it. Trying to do it in one statement has many issues – from the consistency of the data to the SQL syntax. Now here is the real reason for this blog post – yesterday I was asked this question again and I replied with my canned answer saying it is not possible and should not be implemented anyway. In response to my reply I was pointed to my own blog posts, where the user suggested that I had previously said this was possible, with a demo example. Let us go over my conversation – you may find it interesting. Let us call the user DJ. DJ: Pinal, can we delete from multiple tables in a single statement, or with a single delete statement? Pinal: No, you cannot and you should not. DJ: Oh okay, if that is the case, why do you suggest doing that? Pinal: (baffled) I am not suggesting that. I am rather suggesting that it is not possible and it should not be possible. DJ: Hmm… but in that case why did you blog about it earlier? Pinal: (What?) No, I did not. I am pretty confident. DJ: Well, I am confident as well. You did. Pinal: In that case, it is my word against your word. Isn’t it? DJ: I have proof. Do you want to see the proof that you suggested it is possible? Pinal: Yes, I will be delighted to. (After 10 Minutes) DJ: Here are not one but two of your blog posts which talk about it - SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2 Pinal: Oh! DJ: I knew I was correct. Pinal: Well, oh man, I did not mean there what you mean here. DJ: I did not understand; can you explain it further? Pinal: Here we go. The example in those other posts is an example of cascading delete or cascading update. I think you may want to understand the concept of foreign keys and cascading update/delete. The concept of cascading exists to maintain data integrity. If primary key rows get updated or deleted, the change is reflected in the foreign key table to maintain key integrity and data consistency. SQL Server follows ANSI Entry SQL with regard to referential integrity between PrimaryKey and ForeignKey columns, which requires the inserting, updating, and deleting of data in related tables to be restricted to values that preserve the integrity. This is an altogether different concept from deleting or updating multiple tables in a single statement. When I hear that someone wants to delete or update multiple tables in a single statement, what I assume is something very similar to the following: DELETE/UPDATE Table 1 (cols) Table 2 (cols) VALUES … which is not a valid statement or syntax, nor is it part of the ANSI standard. I guess, after this discussion with DJ, I realized I needed to do a blog post so I can add the link to this blog post in my canned answer. Well, it was a fun conversation with DJ and I hope the message is very clear now.
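    A minimal sketch of the supported pattern, using hypothetical Orders/OrderDetails tables rather than anything from the post: two separate DELETE statements wrapped in one transaction so that both succeed or both roll back, followed by the ON DELETE CASCADE foreign key that produces the cascading behaviour the two older posts were actually about.

        -- Two statements, one transaction: the supported way to remove related
        -- rows from two tables together.
        BEGIN TRAN;
        DELETE FROM dbo.OrderDetails WHERE OrderID = 42;
        DELETE FROM dbo.Orders       WHERE OrderID = 42;
        COMMIT;

        -- Cascading is a different mechanism: declare the relationship once and
        -- child rows follow the parent automatically when it is deleted.
        ALTER TABLE dbo.OrderDetails
            ADD CONSTRAINT FK_OrderDetails_Orders
            FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
            ON DELETE CASCADE;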
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly every time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table. USE Warehouse CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5)) CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date) CREATE PROC Integration.MergePolicy as begin begin tran Merge fact.Policy as tgt Using Integration.MergePolicy as Src On (tgt.PolicyId = Src.PolicyId) When not matched by Target then Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate) values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate) When matched and src.Operation = 'U' then Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate When matched and src.Operation = 'D' then Delete; delete from Integration.MergePolicy commit end Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. 
(I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting.Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. I've reproduced the above from memory, the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a cluster index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and added the clustered index, both of which took very little time). I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike  
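    A short sketch of the fix described above, using the work table name from the post (the index key column is my assumption; any reasonably selective column would do): truncate the heap so its pages are deallocated immediately, then add a clustered index so the table is no longer stored as a heap and a scan of the empty table reads a single page rather than every page the heap ever had allocated.

        -- Empty the heap quickly; TRUNCATE deallocates its pages rather than
        -- just marking the rows as deleted.
        TRUNCATE TABLE Integration.MergePolicy;

        -- Give the work table a clustered index so it is no longer a heap.
        CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId
            ON Integration.MergePolicy (PolicyId);

    One explanation consistent with the post's observation is that a DELETE against a heap does not necessarily deallocate the emptied pages (it generally does so only when the delete takes a table-level lock), which is why the empty work table still carried, and scanned, all of its old pages.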

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment.  Below is some of the information and details regarding the form I created for the session.  There are many blogs and Microsoft articles on how to create a basic form so I won’t repeat that information here.   This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach.  The above link contains the zipped package files of the two InfoPath forms(no code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck.  If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.).  Also, you must have the SharePoint Enterprise version, with InfoPath Services configured in order to use the Web Browser enabled forms. So what are the requirements for this template? Business Card Request Form Template Design Plan: Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Show and hide fields as necessary for requirements Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. Browser based form integrated into SharePoint team site Submitted directly to form library The base form was created using the blank template.  The table and rows were added using Insert tab and selecting Custom Table.  The use of tables is a great way to make sure everything lines up.  You do have to split the tables from time to time.  If you’ve ever split cells and then tried to re-align one to find that you impacted the others, you know why.  Here is what the base form looks like in InfoPath.   Show and hide fields as necessary for requirements You will notice I also used Sections within the form.  These show or hide depending on options selected or whether or not fields are blank.  This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn’t apply).  Although not used in this one, you can also use various views with a tab interface.  I’ll show that in another post. Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Utilizing rules you can load data when the form initiates (Data tab, Form Load).  Anything you can automate is always appreciated by the user as that is data they don’t have to enter.  For example, loading their user id or other user information on load: Always keep in mind though how much data you load and the method for loading that data (through rules, code, etc.).  They have an impact on form performance.  The form will take longer to load if you bring in a ton of data from external sources.  Laura Rogers has a great blog post on using the User Information List to load user information.   If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit.   What I have found is that using the User Profile service via code behind or the Web Service “GetUserProfileByName” (as above) can take more time to load the user data.  Just food for thought. You must add the data connection in order for the above rules to work.  
You can connect to the data connection through the Data tab, Data Connections or select Manage Data Connections link which appears under the main data source.  The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.  Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. You can also create multiple views for the users to enhance their experience.  Once they’ve entered the information and submitted their request for business cards, they don’t really need to see the main data input screen any more.  They just need to view what they entered. From the Page Design tab, select New View and give the view a name.  To review the existing views, click the down arrow under View: The ReviewView shows just what the user needs and nothing more: Once you have everything configured, the form should be tested within a Test SharePoint environment before final deployment to production.  This validates you don’t have any rules or code that could impact the server negatively. Submitted directly to form library   You will need to know the form library that you will be submitting to when publishing the template.  Configure the Submit data connection to connect to this library.  There is already one configured in the sample,  but it will need to be updated to your environment prior to publishing. The Design template is different from the Published template.  While both have the .XSN extension, the published template contains all the “package” information for the form.  The published form is what is loaded into Central Admin, not the design template. Browser based form integrated into SharePoint team site In Central Admin, under General Settings, select Manage Form Templates.  Upload the published form template and Activate it to a site collection. Now it is available as a content type to select in the form library.  Some documentation on publishing form templates:  Technet – Manage administrator approved form templates And that’s all our base requirements.  Hope this helps to give a good start.

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins – restore the databases and make them work. There were 4 applications that back ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat “IF REQUIRED” left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons but mainly because I did not want to deal with issues starting SQL Services afterward. Accounting for the location of TempDB and dealing with version conflicts of the resource DBs were just two of the battles I chose not to fight. Not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, for taking the time to restore the source Master database over the newly installed one on the fresh server. What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system and then use that restored database to script the SQL Logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted there were several challenges to overcome. Also, as is usual for any work like this the usual disclaimers apply: This is not something that I would imagine Microsoft would condone or support and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure because future updates for SQL Server will render this technique non-functional. I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the master database I named Master_Mine. Since sp_help_revlogin uses system schema objects, sys.syslogins and sys.server_principals, this was not going to work because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. Since the test account I created should presumably still be in the Master_Mine database, I should be able to get to it and script out its creation with its password hash so that I would not need to know the password, and any applications that stored that password would not have to be altered in the DR scenario. They would just work as expected. Once I realized that would not work I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? They were. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored. 
    These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead to create the logins cursor used in sp_help_revlogin. For the password hash, sp_help_revlogin uses the function LoginProperty() which takes a user name and the option ‘passwordhash’ to return the hash for the user. Unfortunately, it requires the login to exist in the Master database. This would not work. So another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin. The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system under whatever name you like, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state). You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done and provide another option if required to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.  
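    The author deliberately withholds his full modifications, but as a rough, hypothetical sketch of the kind of query involved (the name column and the CREATE LOGIN … HASHED output format are my assumptions; pwdhash and the DAC requirement come from the post), pulling login names and hashes out of the restored copy might look something like this, run from an ADMIN:ServerName connection:

        -- Hypothetical sketch only, not the author's procedure. sys.sysxlgns is a
        -- system base table, so this must run over a Dedicated Administrator Connection.
        SELECT 'CREATE LOGIN ' + QUOTENAME(name)
             + ' WITH PASSWORD = '
             + CONVERT(varchar(600), pwdhash, 1)   -- style 1 keeps the 0x prefix
             + ' HASHED;'
        FROM Master_Mine.sys.sysxlgns
        WHERE pwdhash IS NOT NULL;                 -- SQL logins only; Windows logins carry no hash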

    Read the article

  • I need to understand why my server turned off

    - by Dema
    Our organization was robbed and definitely it was inside job. I was set up. I work as a manager and as system administrator in this organization and everything goes against me. The only clue I have is that someone accidentally or intentionally turned of a server that is in the office indicating that some one was inside at the time that no one should be. This is the only evidence I have that can justify me.  I looked the log files and they show that the Power button was pressed. Can you help me to find out that that was not a bug or systems overheat? I will post the log files and if you will ask more I will gladly provide the information. Messages: Dec 24 21:43:14 jamx shutdown[27883]: shutting down for system halt Dec 24 21:43:15 jamx init: Switching to runlevel: 0 Dec 24 21:43:15 jamx smartd[3047]: smartd received signal 15: Terminated Dec 24 21:43:15 jamx smartd[3047]: smartd is exiting (exit status 0) Dec 24 21:43:15 jamx avahi-daemon[3015]: Got SIGTERM, quitting. Dec 24 21:43:15 jamx avahi-daemon[3015]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::221:85ff:fe11:8221. Dec 24 21:43:15 jamx avahi-daemon[3015]: Leaving mDNS multicast group on interface eth0.IPv4 with address 82.207.41.239. Dec 24 21:43:15 jamx shutdown[27962]: shutting down for system halt Dec 24 21:43:15 jamx saslauthd[2983]: server_exit     : master exited: 2983 Dec 24 21:43:29 jamx nmbd[2921]: [2010/12/24 21:43:29, 0] nmbd/nmbd.c:terminate(58) Dec 24 21:43:29 jamx nmbd[2921]:   Got SIGTERM: going down... Dec 24 21:43:31 jamx clamd[2526]: Pid file removed. Dec 24 21:43:31 jamx clamd[2526]: --- Stopped at Fri Dec 24 21:43:31 2010 Dec 24 21:43:31 jamx clamd[2526]: Socket file removed. Dec 24 21:43:31 jamx mydns[2645]: jamx.org.ua up 9h44m48s (35088s) 117 questions (0/s) NOERROR=117 SERVFAIL=0 NXDOMAIN=0 NOTIMP=0 REFUSED=0 (100% TCP, 117 queries) Dec 24 21:43:31 jamx mydns[2645]: terminated Dec 24 21:43:34 jamx ntpd[2512]: ntpd exiting on signal 15 Dec 24 21:43:34 jamx hcid[2265]: Got disconnected from the system message bus Dec 24 21:43:35 jamx rpc.statd[2167]: Caught signal 15, un-registering and exiting. Dec 24 21:43:35 jamx portmap[28473]: connect from 127.0.0.1 to unset(status): request from unprivileged port Dec 24 21:43:35 jamx auditd[2021]: The audit daemon is exiting. Dec 24 21:43:35 jamx kernel: audit(1293219815.505:4044): audit_pid=0 old=2021 by auid=4294967295 Dec 24 21:43:35 jamx pcscd: pcscdaemon.c:572:signal_trap() Preparing for suicide Dec 24 21:43:36 jamx pcscd: hotplug_libusb.c:376:HPRescanUsbBus() Hotplug stopped Dec 24 21:43:36 jamx pcscd: readerfactory.c:1379:RFCleanupReaders() entering cleaning function Dec 24 21:43:36 jamx pcscd: pcscdaemon.c:532:at_exit() cleaning /var/run Dec 24 21:43:36 jamx kernel: Kernel logging (proc) stopped. Dec 24 21:43:36 jamx kernel: Kernel log daemon terminating. 
Dec 24 21:43:37 jamx exiting on signal 15 Acpid: [Fri Dec 24 21:43:14 2010] received event "button/power PWRF 00000080 00000001" [Fri Dec 24 21:43:14 2010] notifying client 2382[68:68] [Fri Dec 24 21:43:14 2010] executing action "/bin/ps awwux | /bin/grep gnome-power-manager | /bin/grep -qv grep || /sbin/shutdown -h now" [Fri Dec 24 21:43:14 2010] BEGIN HANDLER MESSAGES [Fri Dec 24 21:43:15 2010] END HANDLER MESSAGES [Fri Dec 24 21:43:15 2010] action exited with status 0 [Fri Dec 24 21:43:15 2010] completed event "button/power PWRF 00000080 00000001" [Fri Dec 24 21:43:15 2010] received event "button/power PWRF 00000080 00000002" [Fri Dec 24 21:43:15 2010] notifying client 2382[68:68] [Fri Dec 24 21:43:15 2010] executing action "/bin/ps awwux | /bin/grep gnome-power-manager | /bin/grep -qv grep || /sbin/shutdown -h now" [Fri Dec 24 21:43:15 2010] BEGIN HANDLER MESSAGES [Fri Dec 24 21:43:15 2010] END HANDLER MESSAGES [Fri Dec 24 21:43:15 2010] action exited with status 0 [Fri Dec 24 21:43:15 2010] completed event "button/power PWRF 00000080 00000002" [Fri Dec 24 21:43:34 2010] exiting

    Read the article

  • Cisco Pix how to add an additional block of static ip addresses for nat?

    - by Scott Szretter
    I have a pix 501 with 5 static ip addresses. My isp just gave me 5 more. I am trying to figure out how to add the new block and then how to nat/open at least one of them to an inside machine. So far, I named a new interface "intf2", ip range is 71.11.11.58 - 62 (gateway should 71.11.11.57) imgsvr is the machine I want to nat to one of the (71.11.11.59) new ip addresses. mail (.123) is an example of a machine that is mapped to the current existing 5 ip block (96.11.11.121 gate / 96.11.11.122-127) and working fine. Building configuration... : Saved : PIX Version 6.3(4) interface ethernet0 auto interface ethernet0 vlan1 logical interface ethernet1 auto nameif ethernet0 outside security0 nameif ethernet1 inside security100 nameif vlan1 intf2 security1 enable password xxxxxxxxx encrypted passwd xxxxxxxxx encrypted hostname xxxxxxxPIX domain-name xxxxxxxxxxx no fixup protocol dns fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 no fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names ...snip... name 192.168.10.13 mail name 192.168.10.29 imgsvr object-group network vpn1 network-object mail 255.255.255.255 access-list outside_access_in permit tcp any host 96.11.11.124 eq www access-list outside_access_in permit tcp any host 96.11.11.124 eq https access-list outside_access_in permit tcp any host 96.11.11.124 eq 3389 access-list outside_access_in permit tcp any host 96.11.11.123 eq https access-list outside_access_in permit tcp any host 96.11.11.123 eq www access-list outside_access_in permit tcp any host 96.11.11.125 eq smtp access-list outside_access_in permit tcp any host 96.11.11.125 eq https access-list outside_access_in permit tcp any host 96.11.11.125 eq 10443 access-list outside_access_in permit tcp any host 96.11.11.126 eq smtp access-list outside_access_in permit tcp any host 96.11.11.126 eq https access-list outside_access_in permit tcp any host 96.11.11.126 eq 10443 access-list outside_access_in deny ip any any access-list inside_nat0_outbound permit ip 192.168.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.17.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.16.0.0 255.255.0.0 IPPool2 255.255.255.0 ...snip... 
access-list inside_access_in deny tcp any any eq smtp access-list inside_access_in permit ip any any pager lines 24 logging on logging buffered notifications mtu outside 1500 mtu inside 1500 ip address outside 96.11.11.122 255.255.255.248 ip address inside 192.168.10.15 255.255.255.0 ip address intf2 71.11.11.58 255.255.255.248 ip audit info action alarm ip audit attack action alarm pdm location exchange 255.255.255.255 inside pdm location mail 255.255.255.255 inside pdm location IPPool2 255.255.255.0 outside pdm location 96.11.11.122 255.255.255.255 inside pdm location 192.168.10.1 255.255.255.255 inside pdm location 192.168.10.6 255.255.255.255 inside pdm location mail-gate1 255.255.255.255 inside pdm location mail-gate2 255.255.255.255 inside pdm location imgsvr 255.255.255.255 inside pdm location 71.11.11.59 255.255.255.255 intf2 pdm logging informational 100 pdm history enable arp timeout 14400 global (outside) 1 interface global (outside) 2 96.11.11.123 global (intf2) 3 interface global (intf2) 4 71.11.11.59 nat (inside) 0 access-list inside_nat0_outbound nat (inside) 2 mail 255.255.255.255 0 0 nat (inside) 1 0.0.0.0 0.0.0.0 0 0 static (inside,outside) tcp 96.11.11.123 smtp mail smtp netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 https mail https netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 www mail www netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.124 ts netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.126 mail-gate2 netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.125 mail-gate1 netmask 255.255.255.255 0 0 access-group outside_access_in in interface outside access-group inside_access_in in interface inside route outside 0.0.0.0 0.0.0.0 96.11.11.121 1 route intf2 0.0.0.0 0.0.0.0 71.11.11.57 2 timeout xlate 0:05:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout uauth 0:05:00 absolute floodguard enable ...snip... : end [OK] Thanks!

    Read the article

  • Create Resume problem

    - by ar31an
    hello mates, i am working on a project of online resume management system and i am encountering an exception while creating resume. [b] Exception: Data type mismatch in criteria expression. [/b] here is my code for Create Resume-1.aspx.cs using System; using System.Data; using System.Configuration; using System.Collections; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Data.OleDb; public partial class _Default : System.Web.UI.Page { string sql, sql2, sql3, sql4, sql5, sql6, sql7, sql8, sql9, sql10, sql11, sql12; string conString = "Provider=Microsoft.ACE.OLEDB.12.0; Data Source=D:\\Deliverable4.accdb"; protected OleDbConnection rMSConnection; protected OleDbCommand rMSCommand; protected OleDbDataAdapter rMSDataAdapter; protected DataSet dataSet; protected DataTable dataTable; protected DataRow dataRow; protected void Page_Load(object sender, EventArgs e) { } protected void Button1_Click(object sender, EventArgs e) { string contact1 = TextBox1.Text; string contact2 = TextBox2.Text; string cellphone = TextBox3.Text; string address = TextBox4.Text; string city = TextBox5.Text; string addqualification = TextBox18.Text; //string SecondLastDegreeGrade = TextBox17.Text; //string SecondLastDegreeInstitute = TextBox16.Text; //string SecondLastDegreeNameOther = TextBox15.Text; string LastDegreeNameOther = TextBox11.Text; string LastDegreeInstitute = TextBox12.Text; string LastDegreeGrade = TextBox13.Text; string tentativeFromDate = (DropDownList4.SelectedValue + " " + DropDownList7.SelectedValue + " " + DropDownList8.SelectedValue); try { sql6 = "select CountryID from COUNTRY"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql6, rMSConnection); dataSet = new DataSet("cID"); rMSDataAdapter.Fill(dataSet, "COUNTRY"); dataTable = dataSet.Tables["COUNTRY"]; int cId = (int)dataTable.Rows[0][0]; rMSConnection.Close(); sql4 = "select PersonalDetailID from PERSONALDETAIL"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql4, rMSConnection); dataSet = new DataSet("PDID"); rMSDataAdapter.Fill(dataSet, "PERSONALDETAIL"); dataTable = dataSet.Tables["PERSONALDETAIL"]; int PDId = (int)dataTable.Rows[0][0]; rMSConnection.Close(); sql5 = "update PERSONALDETAIL set Phone1 ='" + contact1 + "' , Phone2 = '" + contact2 + "', CellPhone = '" + cellphone + "', Address = '" + address + "', City = '" + city + "', CountryID = '" + cId + "' where PersonalDetailID = '" + PDId + "'"; rMSConnection = new OleDbConnection(conString); rMSConnection.Open(); rMSCommand = new OleDbCommand(sql5, rMSConnection); rMSCommand.ExecuteNonQuery(); rMSConnection.Close(); sql3 = "select DesignationID from DESIGNATION"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql3, rMSConnection); dataSet = new DataSet("DesID"); rMSDataAdapter.Fill(dataSet, "DESIGNATION"); dataTable = dataSet.Tables["DESIGNATION"]; int desId = (int)dataTable.Rows[0][0]; rMSConnection.Close(); sql2 = "select DepartmentID from DEPARTMENT"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql2, rMSConnection); dataSet = new DataSet("DID"); rMSDataAdapter.Fill(dataSet, "DEPARTMENT"); dataTable = dataSet.Tables["DEPARTMENT"]; int dId = (int)dataTable.Rows[0][0]; rMSConnection.Close(); sql7 = "select ResumeID from RESUME"; rMSConnection = new OleDbConnection(conString); 
rMSDataAdapter = new OleDbDataAdapter(sql7, rMSConnection); dataSet = new DataSet("rID"); rMSDataAdapter.Fill(dataSet, "RESUME"); dataTable = dataSet.Tables["RESUME"]; int rId = (int)dataTable.Rows[0][0]; rMSConnection.Close(); sql = "update RESUME set PersonalDetailID ='" + PDId + "' , DesignationID = '" + desId + "', DepartmentID = '" + dId + "', TentativeFromDate = '" + tentativeFromDate + "', AdditionalQualification = '" + addqualification + "' where ResumeID = '" + rId + "'"; rMSConnection = new OleDbConnection(conString); rMSConnection.Open(); rMSCommand = new OleDbCommand(sql, rMSConnection); rMSCommand.ExecuteNonQuery(); rMSConnection.Close(); sql8 = "insert into INSTITUTE (InstituteName) values ('" + LastDegreeInstitute + "')"; rMSConnection = new OleDbConnection(conString); rMSConnection.Open(); rMSCommand = new OleDbCommand(sql8, rMSConnection); rMSCommand.ExecuteNonQuery(); rMSConnection.Close(); sql9 = "insert into DEGREE (DegreeName) values ('" + LastDegreeNameOther + "')"; rMSConnection = new OleDbConnection(conString); rMSConnection.Open(); rMSCommand = new OleDbCommand(sql9, rMSConnection); rMSCommand.ExecuteNonQuery(); rMSConnection.Close(); sql11 = "select InstituteID from INSTITUTE"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql11, rMSConnection); dataSet = new DataSet("insID"); rMSDataAdapter.Fill(dataSet, "INSTITUTE"); dataTable = dataSet.Tables["INSTITUTE"]; int insId = (int)dataTable.Rows[0][0]; rMSConnection.Close(); sql12 = "select DegreeID from DEGREE"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql12, rMSConnection); dataSet = new DataSet("degID"); rMSDataAdapter.Fill(dataSet, "DEGREE"); dataTable = dataSet.Tables["DEGREE"]; int degId = (int)dataTable.Rows[0][0]; rMSConnection.Close(); sql10 = "insert into QUALIFICATION (Grade, ResumeID, InstituteID, DegreeID) values ('" + LastDegreeGrade + "', '" + rId + "', '" + insId + "', '" + degId + "')"; rMSConnection = new OleDbConnection(conString); rMSConnection.Open(); rMSCommand = new OleDbCommand(sql10, rMSConnection); rMSCommand.ExecuteNonQuery(); rMSConnection.Close(); Response.Redirect("Applicant.aspx"); } catch (Exception exp) { rMSConnection.Close(); Label1.Text = "Exception: " + exp.Message; } } protected void Button2_Click(object sender, EventArgs e) { } } And for Create Resume-1.aspx <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Create Resume-1.aspx.cs" Inherits="_Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div><center> <strong><span style="font-size: 16pt"></span></strong>&nbsp;</center> <center> &nbsp;</center> <center style="background-color: silver"> &nbsp;</center> <center> <strong><span style="font-size: 16pt">Step 1</span></strong></center> <center style="background-color: silver"> &nbsp;</center> <center> &nbsp;</center> <center> &nbsp;</center> <center> <asp:Label ID="PhoneNo1" runat="server" Text="Contact No 1*"></asp:Label> <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." 
ControlToValidate="TextBox1"></asp:RequiredFieldValidator><br /> <asp:Label ID="PhoneNo2" runat="server" Text="Contact No 2"></asp:Label> <asp:TextBox ID="TextBox2" runat="server"></asp:TextBox><br /> <asp:Label ID="CellNo" runat="server" Text="Cell Phone No"></asp:Label> <asp:TextBox ID="TextBox3" runat="server"></asp:TextBox><br /> <asp:Label ID="Address" runat="server" Text="Street Address*"></asp:Label> <asp:TextBox ID="TextBox4" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator2" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="TextBox4"></asp:RequiredFieldValidator><br /> <asp:Label ID="City" runat="server" Text="City*"></asp:Label> <asp:TextBox ID="TextBox5" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator3" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="TextBox5"></asp:RequiredFieldValidator><br /> <asp:Label ID="Country" runat="server" Text="Country of Origin*"></asp:Label> <asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="SqlDataSource1" DataTextField="CountryName" DataValueField="CountryID"> </asp:DropDownList><asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString7 %>" DeleteCommand="DELETE FROM [COUNTRY] WHERE (([CountryID] = ?) OR ([CountryID] IS NULL AND ? IS NULL))" InsertCommand="INSERT INTO [COUNTRY] ([CountryID], [CountryName]) VALUES (?, ?)" ProviderName="<%$ ConnectionStrings:ConnectionString7.ProviderName %>" SelectCommand="SELECT * FROM [COUNTRY]" UpdateCommand="UPDATE [COUNTRY] SET [CountryName] = ? WHERE (([CountryID] = ?) OR ([CountryID] IS NULL AND ? IS NULL))"> <DeleteParameters> <asp:Parameter Name="CountryID" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="CountryName" Type="String" /> <asp:Parameter Name="CountryID" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="CountryID" Type="Int32" /> <asp:Parameter Name="CountryName" Type="String" /> </InsertParameters> </asp:SqlDataSource> <asp:RequiredFieldValidator ID="RequiredFieldValidator4" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="DropDownList1"></asp:RequiredFieldValidator><br /> <asp:Label ID="DepartmentOfInterest" runat="server" Text="Department of Interest*"></asp:Label> <asp:DropDownList ID="DropDownList2" runat="server" DataSourceID="SqlDataSource2" DataTextField="DepartmentName" DataValueField="DepartmentID"> </asp:DropDownList><asp:SqlDataSource ID="SqlDataSource2" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString7 %>" DeleteCommand="DELETE FROM [DEPARTMENT] WHERE [DepartmentID] = ?" InsertCommand="INSERT INTO [DEPARTMENT] ([DepartmentID], [DepartmentName]) VALUES (?, ?)" ProviderName="<%$ ConnectionStrings:ConnectionString7.ProviderName %>" SelectCommand="SELECT * FROM [DEPARTMENT]" UpdateCommand="UPDATE [DEPARTMENT] SET [DepartmentName] = ? 
WHERE [DepartmentID] = ?"> <DeleteParameters> <asp:Parameter Name="DepartmentID" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="DepartmentName" Type="String" /> <asp:Parameter Name="DepartmentID" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="DepartmentID" Type="Int32" /> <asp:Parameter Name="DepartmentName" Type="String" /> </InsertParameters> </asp:SqlDataSource> <asp:RequiredFieldValidator ID="RequiredFieldValidator5" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="DropDownList2"></asp:RequiredFieldValidator><br /> <asp:Label ID="DesignationAppliedFor" runat="server" Text="Position Applied For*"></asp:Label> <asp:DropDownList ID="DropDownList3" runat="server" DataSourceID="SqlDataSource3" DataTextField="DesignationName" DataValueField="DesignationID"> </asp:DropDownList><asp:SqlDataSource ID="SqlDataSource3" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString7 %>" DeleteCommand="DELETE FROM [DESIGNATION] WHERE [DesignationID] = ?" InsertCommand="INSERT INTO [DESIGNATION] ([DesignationID], [DesignationName], [DesignationStatus]) VALUES (?, ?, ?)" ProviderName="<%$ ConnectionStrings:ConnectionString7.ProviderName %>" SelectCommand="SELECT * FROM [DESIGNATION]" UpdateCommand="UPDATE [DESIGNATION] SET [DesignationName] = ?, [DesignationStatus] = ? WHERE [DesignationID] = ?"> <DeleteParameters> <asp:Parameter Name="DesignationID" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="DesignationName" Type="String" /> <asp:Parameter Name="DesignationStatus" Type="String" /> <asp:Parameter Name="DesignationID" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="DesignationID" Type="Int32" /> <asp:Parameter Name="DesignationName" Type="String" /> <asp:Parameter Name="DesignationStatus" Type="String" /> </InsertParameters> </asp:SqlDataSource> <asp:RequiredFieldValidator ID="RequiredFieldValidator6" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." 
ControlToValidate="DropDownList3"></asp:RequiredFieldValidator><br /> <asp:Label ID="TentativeFromDate" runat="server" Text="Can Join From*"></asp:Label> <asp:DropDownList ID="DropDownList4" runat="server"> <asp:ListItem>1</asp:ListItem> <asp:ListItem>2</asp:ListItem> <asp:ListItem>3</asp:ListItem> <asp:ListItem>4</asp:ListItem> <asp:ListItem>5</asp:ListItem> <asp:ListItem>6</asp:ListItem> <asp:ListItem>7</asp:ListItem> <asp:ListItem>8</asp:ListItem> <asp:ListItem>9</asp:ListItem> <asp:ListItem>10</asp:ListItem> <asp:ListItem>11</asp:ListItem> <asp:ListItem>12</asp:ListItem> </asp:DropDownList>&nbsp;<asp:DropDownList ID="DropDownList7" runat="server"> <asp:ListItem>1</asp:ListItem> <asp:ListItem>2</asp:ListItem> <asp:ListItem>3</asp:ListItem> <asp:ListItem>4</asp:ListItem> <asp:ListItem>5</asp:ListItem> <asp:ListItem>6</asp:ListItem> <asp:ListItem>7</asp:ListItem> <asp:ListItem>8</asp:ListItem> <asp:ListItem>9</asp:ListItem> <asp:ListItem>10</asp:ListItem> <asp:ListItem>11</asp:ListItem> <asp:ListItem>12</asp:ListItem> <asp:ListItem>13</asp:ListItem> <asp:ListItem>14</asp:ListItem> <asp:ListItem>15</asp:ListItem> <asp:ListItem>16</asp:ListItem> <asp:ListItem>17</asp:ListItem> <asp:ListItem>18</asp:ListItem> <asp:ListItem>19</asp:ListItem> <asp:ListItem>20</asp:ListItem> <asp:ListItem>21</asp:ListItem> <asp:ListItem>22</asp:ListItem> <asp:ListItem>23</asp:ListItem> <asp:ListItem>24</asp:ListItem> <asp:ListItem>25</asp:ListItem> <asp:ListItem>26</asp:ListItem> <asp:ListItem>27</asp:ListItem> <asp:ListItem>28</asp:ListItem> <asp:ListItem>29</asp:ListItem> <asp:ListItem>30</asp:ListItem> <asp:ListItem>31</asp:ListItem> </asp:DropDownList> <asp:DropDownList ID="DropDownList8" runat="server"> <asp:ListItem>2010</asp:ListItem> </asp:DropDownList> <asp:RequiredFieldValidator ID="RequiredFieldValidator7" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="DropDownList4"></asp:RequiredFieldValidator></center> <center> <br /> <asp:Label ID="LastDegreeName" runat="server" Text="Last Degree*"></asp:Label> <asp:DropDownList ID="DropDownList5" runat="server" DataSourceID="SqlDataSource5" DataTextField="DegreeName" DataValueField="DegreeID"> </asp:DropDownList><asp:SqlDataSource ID="SqlDataSource5" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString7 %>" DeleteCommand="DELETE FROM [DEGREE] WHERE [DegreeID] = ?" InsertCommand="INSERT INTO [DEGREE] ([DegreeID], [DegreeName]) VALUES (?, ?)" ProviderName="<%$ ConnectionStrings:ConnectionString7.ProviderName %>" SelectCommand="SELECT * FROM [DEGREE]" UpdateCommand="UPDATE [DEGREE] SET [DegreeName] = ? WHERE [DegreeID] = ?"> <DeleteParameters> <asp:Parameter Name="DegreeID" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="DegreeName" Type="String" /> <asp:Parameter Name="DegreeID" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="DegreeID" Type="Int32" /> <asp:Parameter Name="DegreeName" Type="String" /> </InsertParameters> </asp:SqlDataSource> <asp:RequiredFieldValidator ID="RequiredFieldValidator8" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." 
ControlToValidate="DropDownList5"></asp:RequiredFieldValidator><br /> <asp:Label ID="LastDegreeNameOther" runat="server" Text="Other"></asp:Label> <asp:TextBox ID="TextBox11" runat="server"></asp:TextBox><br /> <asp:Label ID="LastDegreeInstitute" runat="server" Text="Institute Name*"></asp:Label> <asp:TextBox ID="TextBox12" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator9" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="TextBox12"></asp:RequiredFieldValidator><br /> <asp:Label ID="LastDegreeGrade" runat="server" Text="Marks / Grade*"></asp:Label> <asp:TextBox ID="TextBox13" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator10" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="TextBox13"></asp:RequiredFieldValidator></center> <center> &nbsp;</center> <center> <br /> <asp:Label ID="SecondLastDegreeName" runat="server" Text="Second Last Degree*"></asp:Label> <asp:DropDownList ID="DropDownList6" runat="server" DataSourceID="SqlDataSource4" DataTextField="DegreeName" DataValueField="DegreeID"> </asp:DropDownList><asp:SqlDataSource ID="SqlDataSource4" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString7 %>" DeleteCommand="DELETE FROM [DEGREE] WHERE [DegreeID] = ?" InsertCommand="INSERT INTO [DEGREE] ([DegreeID], [DegreeName]) VALUES (?, ?)" ProviderName="<%$ ConnectionStrings:ConnectionString7.ProviderName %>" SelectCommand="SELECT * FROM [DEGREE]" UpdateCommand="UPDATE [DEGREE] SET [DegreeName] = ? WHERE [DegreeID] = ?"> <DeleteParameters> <asp:Parameter Name="DegreeID" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="DegreeName" Type="String" /> <asp:Parameter Name="DegreeID" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="DegreeID" Type="Int32" /> <asp:Parameter Name="DegreeName" Type="String" /> </InsertParameters> </asp:SqlDataSource> <asp:RequiredFieldValidator ID="RequiredFieldValidator11" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="DropDownList6"></asp:RequiredFieldValidator><br /> <asp:Label ID="SecondLastDegreeNameOther" runat="server" Text="Other"></asp:Label> <asp:TextBox ID="TextBox15" runat="server"></asp:TextBox><br /> <asp:Label ID="SecondLastDegreeInstitute" runat="server" Text="Institute Name*"></asp:Label> <asp:TextBox ID="TextBox16" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator12" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." ControlToValidate="TextBox16"></asp:RequiredFieldValidator><br /> <asp:Label ID="SecondLastDegreeGrade" runat="server" Text="Marks / Grade*"></asp:Label> <asp:TextBox ID="TextBox17" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator13" runat="server" ErrorMessage="Items marked with '*' cannot be left blank." 
ControlToValidate="TextBox17"></asp:RequiredFieldValidator></center> <center> <br /> <asp:Label ID="AdditionalQualification" runat="server" Text="Additional Qualification"></asp:Label> <asp:TextBox ID="TextBox18" runat="server" TextMode="MultiLine"></asp:TextBox></center> <center> &nbsp;</center> <center> <asp:Button ID="Button1" runat="server" Text="Save and Exit" OnClick="Button1_Click" /> &nbsp;&nbsp; <asp:Button ID="Button2" runat="server" Text="Next" OnClick="Button2_Click" /></center> <center> &nbsp;</center> <center> <asp:Label ID="Label1" runat="server"></asp:Label>&nbsp;</center> <center> &nbsp;</center> <center style="background-color: silver"> &nbsp;</center> </div> </form> </body> </html>

    Read the article

  • Start a Mapping or Process Flow from OWB Browser

    - by Dong Ruirong
    Basically, we start a Mapping or Process Flow from the Oracle Warehouse Builder (OWB) Design Client. But we can also start a Mapping or Process Flow from the OWB Browser. This paper introduces the Start Report first and then describes how to start or re-run a Mapping or Process Flow from the OWB Browser.

    Start Report

    The Start Report is used to start an execution of a Mapping or Process Flow, so there are two kinds of Start Report: the Mapping Start Report (see Figure 1) and the Process Flow Start Report (see Figure 2). The Start Report shows the Mapping or Process Flow identification properties, including the latest deployment and latest execution; lists all execution parameters for the Mapping or Process Flow as specified by the latest deployment; and assigns parameter default values from the latest deployment specification. From the Start Report you can:

    - Sort execution parameters by name or category. Table 1 lists all parameters of a Mapping; Table 2 lists all parameters of a Process Flow.
    - Change the values of any input parameter where permitted. For some parameters a selection list is provided; for example, the Mapping parameter Audit Level has a selection list.
    - Reset all parameter settings to their default values.
    - Apply basic validation to parameter values before starting an execution.
    - Start the Mapping or Process Flow, which means it is executed immediately.
    - Navigate to the Deployment Report for the latest deployment details of the Mapping or Process Flow.
    - Navigate to the Execution Job Report for the latest execution of the current Mapping or Process Flow.
    - Link to online help and to the Warehouse Report Page, Deployment Report, Execution Report, Execution Schedule Report and Execution Summary Report.

    Figure 1 Mapping Start Report

    Table 1 Execution parameters and default values for a Mapping

    Category | Name                     | Mode | Input Value
    System   | Audit Level              | In   | Error Details
    System   | Bulk Size                | In   | 1000
    System   | Commit Frequency         | In   | 1000
    System   | EXECUTE_RESUME_TASK      | In   | FALSE
    System   | FORCE_RESUME_OPTION      | In   | FALSE
    System   | Max No of Errors         | In   | 50
    System   | NUMBER_OF_TIMES_TO_RETRY | In   | 2
    System   | Operating Mode           | In   | Set Based Fail Over to Row Based
    System   | PARALLEL_LEVEL           | In   | 0
    System   | Procedure Name           | In   | main
    System   | Purge Group              | In   | WB

    Figure 2 Process Flow Start Report

    Table 2 Execution parameters and default values for a Process Flow

    Category | Name          | Mode   | Input Value
    System   | EVAL_LOCATION | In     |
    System   | Item Key      | In-Out |
    System   | Item Type     | In     | PFPKG_1

    Start a Mapping or Process Flow

    To navigate to the Start Report, it is best to log in to the OWB Browser with the Control Center option; otherwise, go to the Control Center first after logging in, then follow the steps in this section to reach the Start Report. Note that deploying Mappings and Process Flows from the OWB Browser is not supported, so they must be deployed before they can be started from the OWB Browser. If you have deployed a Mapping or Process Flow but have not yet started it, navigate to the Start Report from the Object Summary Report or the Deployment Schedule Report.

    1. Navigating from the Object Summary Report to the Start Report. Open the Object Summary Report to see all deployed Mappings and Process Flows. Click the Mapping Name or Process Flow Name link to see its Deployment Report. Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display its Start Report; the execution parameters have the default deployment-time settings. Change any of the input parameter values as required, then click the Start Execution button to execute the Mapping or Process Flow.

    2. Navigating from the Deployment Schedule Report to the Start Report. Open the Deployment Schedule Report to see the deployment details of Mappings and Process Flows. Expand the project trees to find the deployed Mappings and Process Flows. Click the Mapping Name or Process Flow Name link to see its Deployment Report. Select the Start link in the Available Reports tab to display the Start Report; the execution parameters have the default deployment-time settings. Change any of the input parameter values as required, then click the Start Execution button to execute the Mapping or Process Flow.

    Re-run a Mapping or Process Flow

    If you have already executed a Mapping or Process Flow, you can navigate to the Start Report from the Object Summary Report, the Deployment Schedule Report, the Execution Summary Report or the Execution Schedule Report.

    1. Navigating from the Execution Summary Report to the Start Report. Open the Execution Summary Report to see all execution jobs, including Mapping jobs and Process Flow jobs. Click the Mapping Name or Process Flow Name to see its Execution Report. Select the Start link in the Available Reports tab to display the Start Report; the execution parameters have the default deployment-time settings. Change any of the input parameter values as required, then click the Start Execution button to execute the Mapping or Process Flow.

    2. Navigating from the Execution Schedule Report to the Start Report. Open the Execution Schedule Report to see the list of all executions of Mappings and Process Flows. Click the Mapping Name or Process Flow Name to see its Execution Report. Select the Start link in the Available Reports tab to display the Start Report; the execution parameters have the default deployment-time settings. Change any of the input parameter values as required, then click the Start Execution button to execute the Mapping or Process Flow.

    If the execution of a Mapping or Process Flow is successful, the Start Report shows the message "Start Execution request successful." (see Figure 3).

    Figure 3 Execution Result

    You can also confirm the execution by opening the Execution Report of the current Mapping or Process Flow from the link in the Available Reports tab. A new execution job record is added to the Execution Report showing details of the execution such as Start Time, Elapsed Time, Status, and the number of records selected, inserted, updated and deleted.

    Read the article

  • Ubuntu 12.04 wireless (wifi) not working, can not upgrade to 12.10, touchpad gestures not working. What to do?

    - by Ritwik
    I installed ubuntu 12.04 LTS 3 days ago and since then wireless feature and touchpad gestures are not working. Tried everything on internet but still unsuccessful. I cant upgrade to ubuntu 12.10. These are the following comments I tried. Please help me. EDIT: just realized usb 3.0 is also not working. COMMAND lsb_release -r OUTPUT ----------------------------------------------------------------- Release: 12.04 ----------------------------------------------------------------- COMMAND lspci OUTPUT ------------------------------------------------------------------ 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06) 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller (rev 06) 00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06) 00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06) 00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05) 00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5) 00:1c.1 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 (rev d5) 00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d5) 00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM86 Express LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05) 00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05) 07:00.0 3D controller: NVIDIA Corporation GF117M [GeForce 610M/710M / GT 620M/625M/630M/720M] (rev a1) 08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 07) 09:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5229 PCI Express Card Reader (rev 01) 0f:00.0 Network controller: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter (rev 01) ------------------------------------------------------------------ COMMAND sudo apt-get install linux-backports-modules-wireless-lucid-generic OUTPUT ------------------------------------------------------------------- Reading package lists... Done Building dependency tree Reading state information... 
Done E: Unable to locate package linux-backports-modules-wireless-lucid-generic ------------------------------------------------------------------- COMMAND cat /etc/lsb-release; uname -a OUTPUT ------------------------------------------------------------------- DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.5 LTS" Linux ritwik-PC 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux ------------------------------------------------------------------- COMMAND lspci -nnk | grep -iA2 net OUTPUT ------------------------------------------------------------------- 08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 07) Subsystem: Hewlett-Packard Company Device [103c:225d] Kernel driver in use: r8169 -- 0f:00.0 Network controller [0280]: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter [168c:0036] (rev 01) Subsystem: Hewlett-Packard Company Device [103c:217f] ------------------------------------------------------------------- COMMAND lsusb OUTPUT ------------------------------------------------------------------- Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:8008 Intel Corp. Bus 002 Device 002: ID 8087:8000 Intel Corp. ------------------------------------------------------------------- COMMAND iwconfig OUTPUT ------------------------------------------------------------------- lo no wireless extensions. eth0 no wireless extensions. ------------------------------------------------------------------- COMMAND rfkill list all OUTPUT ------------------------------------------------------------------- 0: hp-wifi: Wireless LAN Soft blocked: no Hard blocked: no 1: hp-bluetooth: Bluetooth Soft blocked: no Hard blocked: no ------------------------------------------------------------------- COMMAND lsmod OUTPUT ------------------------------------------------------------------- Module Size Used by snd_hda_codec_realtek 224215 1 bnep 18281 2 rfcomm 47604 0 bluetooth 180113 10 bnep,rfcomm parport_pc 32866 0 ppdev 17113 0 nls_iso8859_1 12713 1 nls_cp437 16991 1 vfat 17585 1 fat 61512 1 vfat snd_hda_intel 33719 3 snd_hda_codec 127706 2 snd_hda_codec_realtek,snd_hda_intel snd_hwdep 17764 1 snd_hda_codec snd_pcm 97275 2 snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61929 2 snd_seq_midi,snd_seq_midi_event nouveau 775039 0 joydev 17693 0 snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq ttm 76949 1 nouveau uvcvideo 72627 0 snd 79041 15 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device videodev 98259 1 uvcvideo drm_kms_helper 46978 1 nouveau psmouse 98051 0 drm 241971 3 nouveau,ttm,drm_kms_helper i2c_algo_bit 13423 1 nouveau soundcore 15091 1 snd snd_page_alloc 18529 2 snd_hda_intel,snd_pcm v4l2_compat_ioctl32 17128 1 videodev hp_wmi 18092 0 serio_raw 13211 0 sparse_keymap 13890 1 hp_wmi mxm_wmi 13021 1 nouveau video 19651 1 nouveau wmi 19256 2 hp_wmi,mxm_wmi mac_hid 13253 0 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp r8169 62190 0 ------------------------------------------------------------------- COMMAND sudo su 
modprobe -v ath9k OUTPUT ------------------------------------------------------------------- insmod /lib/modules/3.2.0-67-generic/kernel/net/wireless/cfg80211.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko insmod /lib/modules/3.2.0-67-generic/kernel/net/mac80211/mac80211.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko ------------------------------------------------------------------- COMMAND do-release-upgrade OUTPUT ------------------------------------------------------------------- Err Upgrade tool signature 404 Not Found [IP: 91.189.88.149 80] Err Upgrade tool 404 Not Found [IP: 91.189.88.149 80] Fetched 0 B in 0s (0 B/s) WARNING:root:file 'quantal.tar.gz.gpg' missing Failed to fetch Fetching the upgrade failed. There may be a network problem. ------------------------------------------------------------------- COMMAND sudo modprobe ath9k dmesg | grep ath9k NO OUTPUT FOR THEM COMMAND dmesg | grep -e ath -e 80211 OUTPUT ------------------------------------------------------------------- [ 13.232372] type=1400 audit(1408867538.399:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=975 comm="apparmor_parser" [ 13.232615] type=1400 audit(1408867538.399:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=975 comm="apparmor_parser" [ 15.186599] ath3k: probe of 3-4:1.0 failed with error -110 [ 15.186635] usbcore: registered new interface driver ath3k [ 88.219329] cfg80211: Calling CRDA to update world regulatory domain [ 88.351665] cfg80211: World regulatory domain updated: [ 88.351667] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 88.351670] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 88.351671] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 88.351673] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 88.351674] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 88.351675] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) ------------------------------------------------------------------- COMMAND sudo apt-get install touchpad-indicator OUTPUT ------------------------------------------------------------------- Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: gir1.2-gconf-2.0 python-pyudev Suggested packages: python-qt4 python-pyside.qtcore The following NEW packages will be installed: gir1.2-gconf-2.0 python-pyudev touchpad-indicator 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded. Need to get 84.1 kB of archives. After this operation, 1,136 kB of additional disk space will be used. Do you want to continue [Y/n]? Y Get:1 http://ppa.launchpad.net/atareao/atareao/ubuntu/ precise/main touchpad-indicator all 0.9.3.12-1ubuntu1 [46.5 kB] Get:2 http://archive.ubuntu.com/ubuntu/ precise/main gir1.2-gconf-2.0 amd64 3.2.5-0ubuntu2 [7,098 B] Get:3 http://archive.ubuntu.com/ubuntu/ precise/main python-pyudev all 0.13-1 [30.5 kB] Fetched 84.1 kB in 2s (31.6 kB/s) Selecting previously unselected package gir1.2-gconf-2.0. (Reading database ... 169322 files and directories currently installed.) 
Unpacking gir1.2-gconf-2.0 (from .../gir1.2-gconf-2.0_3.2.5-0ubuntu2_amd64.deb) ... Selecting previously unselected package python-pyudev. Unpacking python-pyudev (from .../python-pyudev_0.13-1_all.deb) ... Selecting previously unselected package touchpad-indicator. Unpacking touchpad-indicator (from .../touchpad-indicator_0.9.3.12-1ubuntu1_all.deb) ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for software-center ... INFO:softwarecenter.db.update:no translation information in database needed Setting up gir1.2-gconf-2.0 (3.2.5-0ubuntu2) ... Setting up python-pyudev (0.13-1) ... Setting up touchpad-indicator (0.9.3.12-1ubuntu1) ... ------------------------------------------------------------------- Not able to find ( drivers/net/wireless/ath/ath9k/hw.c ) or ( drivers/net/wireless/ath/ath9k/hw.h )

    Read the article

  • Announcing Entity Framework Code-First (CTP5 release)

    - by ScottGu
    This week the data team released the CTP5 build of the new Entity Framework Code-First library.  EF Code-First enables a pretty sweet code-centric development workflow for working with data.  It enables you to: Develop without ever having to open a designer or define an XML mapping file Define model objects by simply writing “plain old classes” with no base classes required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping I’m a big fan of the EF Code-First approach, and wrote several blog posts about it this summer: Code-First Development with Entity Framework 4 (July 16th) EF Code-First: Custom Database Schema Mapping (July 23rd) Using EF Code-First with an Existing Database (August 3rd) Today’s new CTP5 release delivers several nice improvements over the CTP4 build, and will be the last preview build of Code First before the final release of it.  We will ship the final EF Code First release in the first quarter of next year (Q1 of 2011).  It works with all .NET application types (including both ASP.NET Web Forms and ASP.NET MVC projects). Installing EF Code First You can install and use EF Code First CTP5 using one of two ways: Approach 1) By downloading and running a setup program.  Once installed you can reference the EntityFramework.dll assembly it provides within your projects.      or: Approach 2) By using the NuGet Package Manager within Visual Studio to download and install EF Code First within a project.  To do this, simply bring up the NuGet Package Manager Console within Visual Studio (View->Other Windows->Package Manager Console) and type “Install-Package EFCodeFirst”: Typing “Install-Package EFCodeFirst” within the Package Manager Console will cause NuGet to download the EF Code First package, and add it to your current project: Doing this will automatically add a reference to the EntityFramework.dll assembly to your project:   NuGet enables you to have EF Code First setup and ready to use within seconds.  When the final release of EF Code First ships you’ll also be able to just type “Update-Package EFCodeFirst” to update your existing projects to use the final release. EF Code First Assembly and Namespace The CTP5 release of EF Code First has an updated assembly name, and new .NET namespace: Assembly Name: EntityFramework.dll Namespace: System.Data.Entity These names match what we plan to use for the final release of the library. Nice New CTP5 Improvements The new CTP5 release of EF Code First contains a bunch of nice improvements and refinements. Some of the highlights include: Better support for Existing Databases Built-in Model-Level Validation and DataAnnotation Support Fluent API Improvements Pluggable Conventions Support New Change Tracking API Improved Concurrency Conflict Resolution Raw SQL Query/Command Support The rest of this blog post contains some more details about a few of the above changes. Better Support for Existing Databases EF Code First makes it really easy to create model layers that work against existing databases.  CTP5 includes some refinements that further streamline the developer workflow for this scenario. 
Below are the steps to use EF Code First to create a model layer for the Northwind sample database: Step 1: Create Model Classes and a DbContext class Below is all of the code necessary to implement a simple model layer using EF Code First that goes against the Northwind database: EF Code First enables you to use “POCO” – Plain Old CLR Objects – to represent entities within a database.  This means that you do not need to derive model classes from a base class, nor implement any interfaces or data persistence attributes on them.  This enables the model classes to be kept clean, easily testable, and “persistence ignorant”.  The Product and Category classes above are examples of POCO model classes. EF Code First enables you to easily connect your POCO model classes to a database by creating a “DbContext” class that exposes public properties that map to the tables within a database.  The Northwind class above illustrates how this can be done.  It is mapping our Product and Category classes to the “Products” and “Categories” tables within the database.  The properties within the Product and Category classes in turn map to the columns within the Products and Categories tables – and each instance of a Product/Category object maps to a row within the tables. The above code is all of the code required to create our model and data access layer!  Previous CTPs of EF Code First required an additional step to work against existing databases (a call to Database.Initializer<Northwind>(null) to tell EF Code First to not create the database) – this step is no longer required with the CTP5 release.  Step 2: Configure the Database Connection String We’ve written all of the code we need to write to define our model layer.  Our last step before we use it will be to setup a connection-string that connects it with our database.  To do this we’ll add a “Northwind” connection-string to our web.config file (or App.Config for client apps) like so:   <connectionStrings>          <add name="Northwind"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\northwind.mdf;User Instance=true"          providerName="System.Data.SqlClient" />   </connectionStrings> EF “code first” uses a convention where DbContext classes by default look for a connection-string that has the same name as the context class.  Because our DbContext class is called “Northwind” it by default looks for a “Northwind” connection-string to use.  Above our Northwind connection-string is configured to use a local SQL Express database (stored within the \App_Data directory of our project).  You can alternatively point it at a remote SQL Server. Step 3: Using our Northwind Model Layer We can now easily query and update our database using the strongly-typed model layer we just built with EF Code First. The code example below demonstrates how to use LINQ to query for products within a specific product category.  This query returns back a sequence of strongly-typed Product objects that match the search criteria: The code example below demonstrates how we can retrieve a specific Product object, update two of its properties, and then save the changes back to the database: EF Code First handles all of the change-tracking and data persistence work for us, and allows us to focus on our application and business logic as opposed to having to worry about data access plumbing. 
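    The code for Steps 1 and 3, and for the validation sections further down, appears only as screenshots in the original post. The sketch below is a rough reconstruction under stated assumptions, not ScottGu's exact code: the class and property names are guesses based on the Northwind description, the [Required]/[Range] attributes anticipate the "Built-in Model Validation" discussion that follows, and the DbEntityValidationException member names follow the released EF 4.1 API, which may differ slightly from CTP5.

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;   // [Required], [Range]
    using System.Data.Entity;                      // DbContext, DbSet (CTP5 namespace per the post)
    using System.Data.Entity.Validation;           // DbEntityValidationException (location per EF 4.1; assumed for CTP5)
    using System.Linq;

    // Step 1: "plain old" POCO entities - no base class or XML mapping required.
    public class Category
    {
        public int CategoryID { get; set; }
        public string CategoryName { get; set; }
        public virtual ICollection<Product> Products { get; set; }
    }

    public class Product
    {
        public int ProductID { get; set; }

        [Required(ErrorMessage = "Product name is required")]               // see "Built-in Model Validation" below
        public string ProductName { get; set; }

        [Range(0.0, 10000.0, ErrorMessage = "Unit price must be 0-10000")]  // illustrative range only
        public decimal? UnitPrice { get; set; }

        public virtual Category Category { get; set; }
    }

    // The DbContext maps the POCOs to the Products and Categories tables.
    // By convention it looks for a connection string named "Northwind".
    public class Northwind : DbContext
    {
        public DbSet<Product> Products { get; set; }
        public DbSet<Category> Categories { get; set; }
    }

    class Program
    {
        static void Main()
        {
            using (var db = new Northwind())
            {
                // Step 3: LINQ query for products in a specific category.
                var beverages = db.Products
                                  .Where(p => p.Category.CategoryName == "Beverages")
                                  .ToList();

                // Retrieve one product, change two properties, and save.
                var chai = db.Products.First(p => p.ProductID == 1);
                chai.ProductName = null;      // violates [Required]; used here to show validation
                chai.UnitPrice = 19.95m;

                try
                {
                    db.SaveChanges();         // DataAnnotation rules are enforced here (CTP5 and later)
                }
                catch (DbEntityValidationException ex)
                {
                    // EntityValidationErrors lists every rule that failed, per entity.
                    foreach (var entityErrors in ex.EntityValidationErrors)
                        foreach (var error in entityErrors.ValidationErrors)
                            Console.WriteLine("{0}: {1}", error.PropertyName, error.ErrorMessage);
                }
            }
        }
    }

    If the annotations and the try/catch are left out, the two entity classes plus the Northwind context are the entirety of Step 1.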
    Built-in Model Validation

    EF Code First allows you to use any validation approach you want when implementing business rules with your model layer. This enables a great deal of flexibility and power. Starting with this week's CTP5 release, EF Code First also includes built-in support for both the DataAnnotation and IValidatableObject validation support built into .NET 4. This enables you to easily implement validation rules on your models, and have these rules automatically enforced by EF Code First whenever you save your model layer. It provides a very convenient "out of the box" way to enable validation within your applications.

    Applying DataAnnotations to our Northwind Model

    The original post demonstrates (in a screenshot) how we could add some declarative validation rules to two of the properties of our "Product" model using the [Required] and [Range] attributes, as in the sketch earlier in this excerpt. These validation attributes live within the System.ComponentModel.DataAnnotations namespace that is built into .NET 4, and can be used independently of EF. The error messages specified on them can either be explicitly defined or retrieved from resource files (which makes localizing applications easy).

    Validation Enforcement on SaveChanges()

    EF Code First (starting with CTP5) now automatically applies and enforces DataAnnotation rules when a model object is updated or saved. You do not need to write any code to enforce this; the support is enabled by default. This means that code which violates the above rules will automatically throw an exception when we call the SaveChanges() method on our Northwind DbContext. The DbEntityValidationException that is raised when SaveChanges() is invoked contains an EntityValidationErrors property that you can use to retrieve the list of all validation errors that occurred when the model was trying to save. This enables you to easily guide the user on how to fix them. Note that EF Code First will abort the entire transaction of changes if a validation rule is violated, ensuring that our database is always kept in a valid, consistent state. EF Code First's validation enforcement works both for the built-in .NET DataAnnotation attributes (like Required, Range, RegularExpression, StringLength, etc.) and for any custom validation rule you create by sub-classing the System.ComponentModel.DataAnnotations.ValidationAttribute base class.

    UI Validation Support

    A lot of the UI frameworks in .NET also provide support for DataAnnotation-based validation rules. For example, ASP.NET MVC, ASP.NET Dynamic Data, and Silverlight (via WCF RIA Services) all provide support for displaying client-side validation UI that honors the DataAnnotation rules applied to model objects. The screenshot in the original post shows how the default "Add-View" scaffold template within an ASP.NET MVC 3 application causes appropriate validation error messages to be displayed if appropriate values are not provided. ASP.NET MVC 3 supports both client-side and server-side enforcement of these validation rules. The error messages displayed are automatically picked up from the declarative validation attributes, eliminating the need for you to write any custom code to display them.

    Keeping things DRY

    The "DRY Principle" stands for "Do Not Repeat Yourself", and is a best practice that recommends that you avoid duplicating logic/configuration/code in multiple places across your application, and instead specify it only once and have it apply everywhere.
EF Code First CTP5 now enables you to apply declarative DataAnnotation validations on your model classes (and specify them only once) and then have the validation logic be enforced (and corresponding error messages displayed) across all applications scenarios – including within controllers, views, client-side scripts, and for any custom code that updates and manipulates model classes. This makes it much easier to build good applications with clean code, and to build applications that can rapidly iterate and evolve. Other EF Code First Improvements New to CTP5 EF Code First CTP5 includes a bunch of other improvements as well.  Below are a few short descriptions of some of them: Fluent API Improvements EF Code First allows you to override an “OnModelCreating()” method on the DbContext class to further refine/override the schema mapping rules used to map model classes to underlying database schema.  CTP5 includes some refinements to the ModelBuilder class that is passed to this method which can make defining mapping rules cleaner and more concise.  The ADO.NET Team blogged some samples of how to do this here. Pluggable Conventions Support EF Code First CTP5 provides new support that allows you to override the “default conventions” that EF Code First honors, and optionally replace them with your own set of conventions. New Change Tracking API EF Code First CTP5 exposes a new set of change tracking information that enables you to access Original, Current & Stored values, and State (e.g. Added, Unchanged, Modified, Deleted).  This support is useful in a variety of scenarios. Improved Concurrency Conflict Resolution EF Code First CTP5 provides better exception messages that allow access to the affected object instance and the ability to resolve conflicts using current, original and database values.  Raw SQL Query/Command Support EF Code First CTP5 now allows raw SQL queries and commands (including SPROCs) to be executed via the SqlQuery and SqlCommand methods exposed off of the DbContext.Database property.  The results of these method calls can be materialized into object instances that can be optionally change-tracked by the DbContext.  This is useful for a variety of advanced scenarios. Full Data Annotations Support EF Code First CTP5 now supports all standard DataAnnotations within .NET, and can use them both to perform validation as well as to automatically create the appropriate database schema when EF Code First is used in a database creation scenario.  Summary EF Code First provides an elegant and powerful way to work with data.  I really like it because it is extremely clean and supports best practices, while also enabling solutions to be implemented very, very rapidly.  The code-only approach of the library means that model layers end up being flexible and easy to customize. This week’s CTP5 release further refines EF Code First and helps ensure that it will be really sweet when it ships early next year.  I recommend using NuGet to install and give it a try today.  I think you’ll be pleasantly surprised by how awesome it is. Hope this helps, Scott

    Read the article

  • SQL Server &ndash; Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at #TSQL2sDay hashtag. Let me start by saying: This code is a crazy hack that is to never be used unless you really, really have to. Really! And I don’t think there’s a time when you would really have to use it for real. Because it’s a hack there are number of things that can go wrong so play with it knowing that. I’ve managed to totally corrupt one database. :) Oh… and for those saying: yeah yeah.. you have a single table in a file group and you’re restoring that, I say “nay nay” to you. As we all know SQL Server can’t do single table restores from backup. This is kind of a obvious thing due to different relational integrity (RI) concerns. Since we have to maintain that we have to restore all tables represented in a RI graph. For this exercise i say BAH! to those concerns. Note that this method “works” only for simple tables that don’t have LOB and off rows data. The code can be expanded to include those but I’ve tried to leave things “simple”. Note that for this to work our table needs to be relatively static data-wise. This doesn’t work for OLTP table. Products are a perfect example of static data. They don’t change much between backups, pretty much everything depends on them and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode for tables where the contents have been deleted or truncated but NOT when a table was dropped. Everything we’ll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable so you have to act fast. The best thing probably is to put the database into single user mode ASAP while you’re performing this procedure and return it to multi user after you’re done. How do we do it? We will be using an undocumented but known DBCC commands: DBCC PAGE, an undocumented function sys.fn_dblog and a little known DATABASE RESTORE PAGE option. All tests will be on a copy of Production.Product table in AdventureWorks database called Production.Product1 because the original table has FK constraints that prevent us from truncating it for testing. -- create a duplicate table. This doesn't preserve indexes!SELECT *INTO AdventureWorks.Production.Product1FROM AdventureWorks.Production.Product   After we run this code take a full back to perform further testing.   First let’s see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE every row deletion is logged in the transaction log. With TRUNCATE only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple. All we have to look for is row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into depths of IAM(Index Allocation Map) and PFS (Page Free Space) pages but suffice to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren’t. 
If we deep dive into the sys.fn_dblog output we can see that once you truncate a table all the pages in all the intervals are deallocated and this is shown in the PFS page transaction log entry as deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta’s handy function dbo.HexStrToVarBin that converts hexadecimal string into a varbinary value that can be easily converted to integer tus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this: -- [Page ID] is displayed in hex format. -- To convert it to readable int we'll use dbo.HexStrToVarBin function found at -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx -- This function must be installed in the master databaseSELECT Context, AllocUnitName, [Page ID], DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE [Current LSN] = '00000031:00000a46:007d' The pages at the end marked with 0x00—> are pages that are allocated in the extent but are not part of a table. We can inspect the raw content of each data page with a DBCC PAGE command: -- we need this trace flag to redirect output to the query window.DBCC TRACEON (3604); -- WITH TABLERESULTS gives us data in table format instead of message format-- we use format option 3 because it's the easiest to read and manipulate further onDBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS   Since the DBACC PAGE output can be quite extensive I won’t put it here. You can see an example of it in the link at the beginning of this section. Getting deleted data back When we run a delete statement every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It’s not. Only the pointers to the rows are removed while the data itself is still on the data page. We just can’t access it with normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data.  However the restore doesn’t magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use the DBCC PAGE to read data directly from all data pages and insert that data into a temporary table. To finish the RESTORE PAGE  procedure we finally have to take a tail log backup (simple backup of the transaction log) and restore it back. We can now insert data from the temporary table to our original table by hand. Getting truncated data back When we run a truncate the truncated data pages aren’t touched at all. Even the pointers to rows stay unchanged. Because of this getting data back from truncated table is simple. we just have to find out which pages belonged to our table and use DBCC PAGE to read data off of them. No restore is necessary. Turns out that the problems we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure. Stop stalling… show me The Code! 
This is the code for getting back deleted and truncated data back. It’s commented in all the right places so don’t be afraid to take a closer look. Make sure you have a full backup before trying this out. Also I suggest that the last step of backing and restoring the tail log is performed by hand. USE masterGOIF OBJECT_ID('dbo.HexStrToVarBin') IS NULL RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1) SET NOCOUNT ONBEGIN TRY DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000), @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000), @sql VARCHAR(MAX), @tableWasTruncated bit; /* THE FIRST LINE ARE OUR INPUT PARAMETERS In this case we're trying to recover Production.Product1 table in AdventureWorks database. My full backup of AdventureWorks database is at e:\AW.bak */ SELECT @dbName = 'AdventureWorks', @schemaName = 'Production', @tableName = 'Product1', @fullBackupName = 'e:\AW.bak', @undeletedTableName = '##' + @tableName + '_Undeleted', @tableWasTruncated = 0, -- copy the structure from original table to a temp table that we'll fill with restored data @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName + ' SELECT *' + ' INTO ' + @undeletedTableName + ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' + ' WHERE 1 = 0' EXEC (@sql) IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore /* FIND DATA PAGES WE NEED TO RESTORE*/ CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PACE statement to run later RAISERROR ('Looking for deleted pages...', 10, 1) -- use T-LOG direct read to get deleted data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) EXEC('USE [' + @dbName + '];SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageIDFROM sys.fn_dblog(NULL, NULL)WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' + 'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');SELECT *FROM #PagesToRestore -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages IF (SELECT COUNT(*) FROM #PagesToRestore) = 0 BEGIN RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1) -- use T-LOG read to get truncated data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) -- dark magic happens here -- because truncation simply deallocates pages we have to find out which pages were deallocated. -- we can find this out by looking at the PFS page row's Description column. -- for every deallocated extent the Description has a CSV of 8 pages in that extent. -- then it's just a matter of parsing it. 
-- we also remove the pages in the extent that weren't allocated to the table itself -- marked with '0x00-->00' EXEC ('USE [' + @dbName + '];DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);INSERT INTO @truncatedPagesSELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages, CHARINDEX('';'', Description) AS IsMultipleDeallocsFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID, DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%'' AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;SELECT FileID, PageID , ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable , SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2 ) as FileID , CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageIDFROM ( SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs FROM ( SELECT *, CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart, CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd FROM @truncatedPages t1 CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N FROM master..spt_values) t2 )t)t)tWHERE WasPageAllocatedToTable = ''Y''') SELECT @tableWasTruncated = 1 END DECLARE @lastID INT, @pagesCount INT SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount) IF @pagesCount = 0 RAISERROR ('No data pages to restore.', 18, 1) ELSE RAISERROR (@sql, 10, 1) -- If the table was truncated we'll read the data directly from data pages without restoring from backup IF @tableWasTruncated = 0 BEGIN -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200 WHILE @lastID <= @pagesCount BEGIN -- create CSV string of pages to restore SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID) FROM #PagesToRestore WHERE ID BETWEEN @lastID AND @lastID + 200 ORDER BY ID FOR XML PATH('')), 1, 1, '') SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + '''' RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT; RAISERROR (@sql , 10, 1) WITH NOWAIT; EXEC(@sql); RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT; SELECT @lastID = @lastID + 200 END /* If you have any differential or transaction log backups you should restore them here to bring the previously restored data pages up to date */ END DECLARE @dbccSinglePage TABLE ( [ParentObject] NVARCHAR(500), [Object] NVARCHAR(500), [Field] NVARCHAR(500), [VALUE] NVARCHAR(MAX) ) DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000), @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1 -- Get deleted table columns from information_schema view -- Need sp_executeSQL because database name can't be passed in as variable SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNSWHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''ORDER BY ORDINAL_POSITIONFOR XML 
PATH('''')), 1, 2, '''')', @paramDefinition = N'@cols nvarchar(max) OUTPUT' EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT -- Loop through all the restored data pages, -- read data from them and insert them into temp table -- which you can then insert into the orignial deleted table DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID] OPEN dbccPageCursor; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; WHILE @@FETCH_STATUS = 0 BEGIN RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT; SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i); RAISERROR (@sql, 10, 1) WITH NOWAIT; SELECT @sql = 'Running: ' + @SQLtoExec RAISERROR (@sql, 10, 1) WITH NOWAIT; -- if something goes wrong with DBCC execution or data gathering, skip it but print error BEGIN TRY INSERT INTO @dbccSinglePage EXEC (@SQLtoExec) -- make the data insert magic happen here IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']') BEGIN DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %') SELECT @sql = 'USE tempdb; ' + 'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' + 'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' + 'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' + STUFF((SELECT ' UNION ALL SELECT ' + STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END FROM ( -- the unicorn help here to correctly set ordinal numbers of columns in a data page -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21) SELECT [ParentObject], [Object], Field, VALUE, RIGHT('00000' + O1, 6) AS ParentObjectOrder, RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder FROM ( SELECT [ParentObject], [Object], Field, VALUE, REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1, REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2 FROM @dbccSinglePage WHERE t.ParentObject = ParentObject )t)t ORDER BY ParentObjectOrder, ObjectOrder FOR XML PATH('')), 1, 2, '') FROM @dbccSinglePage t GROUP BY ParentObject FOR XML PATH('') ), 1, 11, '') + ';' RAISERROR (@sql, 10, 1) WITH NOWAIT; EXEC (@sql) END END TRY BEGIN CATCH SELECT @sql = 'ERROR!!!' 
+ CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID RAISERROR (@sql, 10, 1) WITH NOWAIT; END CATCH DELETE @dbccSinglePage SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13), @i = @i+1 RAISERROR (@sql, 10, 1) WITH NOWAIT; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; END CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)END TRYBEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0 BEGIN CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; ENDEND CATCH-- if the table was deleted we need to finish the restore page sequenceIF @tableWasTruncated = 0BEGIN -- take a log tail backup and then restore it to complete page restore process DECLARE @currentDate VARCHAR(30) SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112) RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT; RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;END-- The last step is manual. Insert data from our temporary table to the original deleted table The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with little experimentation you can get pretty close to it. One way to possible remove a dependency on a backup to retrieve deleted pages is to quickly run a similar script to the upper one that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.

    Read the article

  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. Union all queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches in a UNION All query share a common processing, i.e, refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and join. The join factorization transformation offers an opportunity to share the common computations across the UNION ALL branches. Currently, join factorization only factorizes common references to base tables only, i.e, not views. Consider a simple example of query Q1. Q1:    select t1.c1, t2.c2    from t1, t2, t3    where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2   union all    select t1.c1, t2.c2    from t1, t2, t4    where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3; Table t1 appears in both the branches. As does the filter predicates on t1 (t1.c1 > 1) and the join predicates involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization which can transform Q1 into Q2 as follows: Q2:    select t1.c1, VW_JF_1.item_2    from t1, (select t2.c1 item_1, t2.c2 item_2                   from t2, t3                    where t2.c2 = t3.c2 and t2.c2 = 2                                  union all                   select t2.c1 item_1, t2.c2 item_2                   from t2, t4                    where t2.c3 = t4.c3) VW_JF_1    where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1; In Q2, t1 is "factorized" and thus the table scan and the filtering on t1 is done only once (it's shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement. Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3. Q3:    select *    from t5, (select t1.c1, t2.c2                  from t1, t2, t3                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2                 union all                  select t1.c1, t2.c2                  from t1, t2, t4                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V;   where t5.c1 = V.c1 In Q3, view V is same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 from view V, t1 can then be joined with t5. This opens up new join orders. That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1. T2 must be joined with t3 before it can be joined with t1 which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under cost-based transformation framework; this means that we cost the plans with and without join factorization and choose the cheapest plan. Note that if the branches in UNION ALL have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5.   
Q4:     select distinct t1.*      from t1, t2      where t1.c1 = t2.c1  union all      select distinct t1.*      from t1, t2      where t1.c1 = t2.c1 Q5:    select distinct t1.*     from t1, (select t2.c1 item_1                   from t2                union all                   select t2.c1 item_1                  from t2) VW_JF_1     where t1.c1 = VW_JF_1.item_1 Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate free because of the DISTINCT key word at the top level while Q4's results might contain duplicates.   The examples given so far involve inner joins only. Join factorization is also supported in outer join, anti join and semi join. But only the right tables of outer join, anti join and semi joins can be factorized. It is not semantically correct to factorize the left table of outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7. Q6:     select t1.c1, t2.c2    from t1, t2    where t1.c1 = t2.c1(+) and t2.c2 (+) = 2  union all    select t1.c1, t2.c2    from t1, t2      where t1.c1 = t2.c1(+) and t2.c2 (+) = 3 Q7:     select t1.c1, VW_JF_1.item_2    from t1, (select t2.c1 item_1, t2.c2 item_2                  from t2                  where t2.c2 = 2                union all                  select t2.c1 item_1, t2.c2 item_2                  from t2                                                                                                    where t2.c2 = 3) VW_JF_1       where t1.c1 = VW_JF_1.item_1(+)                                                                  However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 to Q9 by factorizing t2, which is the right table of an outer join. Q8:    select t1.c2, t2.c2    from t1, t2      where t1.c1 = t2.c1 (+) and t1.c1 = 1 union all    select t1.c2, t2.c2    from t1, t2    where t1.c1 = t2.c1(+) and t1.c1 = 2 Q9:   select VW_JF_1.item_2, t2.c2   from t2,             (select t1.c1 item_1, t1.c2 item_2            from t1            where t1.c1 = 1           union all            select t1.c1 item_1, t1.c2 item_2            from t1            where t1.c1 = 2) VW_JF_1   where VW_JF_1.item_1 = t2.c1(+) All of the examples in this blog show factorizing a single table from two branches. This is just for ease of illustration. Join factorization can factorize multiple tables and from more than two UNION ALL branches.  SummaryJoin factorization is a cost-based transformation. It can factorize common computations from branches in a UNION ALL query which can lead to huge performance improvement. 

    Read the article

  • ASP.Net Authentication with MVC2--how to integrate with DB?

    - by alchemical
    I'm trying to understand the authentication section sample project that opens in a new MVC2 project in VS2010. It essentially lets you register, login, etc. I looked through the code that implements this briefly, it looked fairly complicated. (10 tables, 40 sprocs, 10 views, 4 models, 1 model, 1 controller, etc.) Is it best to utilize this provided framework for authentication? If so, how would I integrate this with my own database models (which has user and role tables, etc.). Also, if I use their framework, are there any performance issues at higher traffic volumes (like SO for example), do I need to become responsible for maintaining the authentication DB as well in this case?
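    If wiring the template's membership schema into an existing user/role design feels heavy, one common pattern is to keep forms authentication but validate credentials against your own tables. The sketch below is only an illustration of that idea; IUserRepository and its ValidateCredentials method are hypothetical placeholders rather than anything shipped with the MVC 2 template, and the controller assumes a controller factory that supplies the dependency.

    using System.Web.Mvc;
    using System.Web.Security;

    // Hypothetical abstraction over your own user/role tables.
    public interface IUserRepository
    {
        bool ValidateCredentials(string userName, string password);
    }

    public class AccountController : Controller
    {
        private readonly IUserRepository users;

        public AccountController(IUserRepository users)
        {
            this.users = users;
        }

        [HttpPost]
        public ActionResult LogOn(string userName, string password, string returnUrl)
        {
            // Check credentials against your own schema instead of the
            // aspnet_* tables used by the template's SqlMembershipProvider.
            if (users.ValidateCredentials(userName, password))
            {
                FormsAuthentication.SetAuthCookie(userName, false);
                return Redirect(string.IsNullOrEmpty(returnUrl) ? Url.Action("Index", "Home") : returnUrl);
            }

            ModelState.AddModelError("", "The user name or password provided is incorrect.");
            return View();
        }
    }

    The heavier alternative is to implement custom MembershipProvider and RoleProvider classes over the same tables, which keeps the template's AccountController and the [Authorize]/roles infrastructure untouched; which option is "best" mostly depends on how much of the membership feature set (lockout, password resets, role caching) is actually needed.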

    Read the article

  • EPPlus - .xlsx is locked for editing by 'another user'

    - by AdamTheITMan
    I have searched through every possible answer on SO for a solution, but nothing has worked. I am basically creating an excel file from a database and sending the results to the response stream using EPPlus(OpenXML). The following code gives me an error when trying to open my generated excel sheet "[report].xlsx is locked for editing by 'another user'." It will open fine the first time, but the second time it's locked. Dim columnData As New List(Of Integer) Dim rowHeaders As New List(Of String) Dim letter As String = "B" Dim x As Integer = 0 Dim trendBy = context.Session("TRENDBY").ToString() Dim dateHeaders As New List(Of String) dateHeaders = DirectCast(context.Session("DATEHEADERS"), List(Of String)) Dim DS As New DataSet DS = DirectCast(context.Session("DS"), DataSet) Using excelPackage As New OfficeOpenXml.ExcelPackage Dim excelWorksheet = excelPackage.Workbook.Worksheets.Add("Report") 'Add title to the top With excelWorksheet.Cells("B1") .Value = "Account Totals by " + If(trendBy = "Months", "Month", "Week") .Style.Font.Bold = True End With 'add date headers x = 2 'start with letter B (aka 2) For Each Header As String In dateHeaders With excelWorksheet.Cells(letter + "2") .Value = Header .Style.HorizontalAlignment = OfficeOpenXml.Style.ExcelHorizontalAlignment.Right .AutoFitColumns() End With x = x + 1 letter = Helper.GetColumnIndexToColumnLetter(x) Next 'Adds the descriptive row headings down the left side of excel sheet x = 0 For Each DC As DataColumn In DS.Tables(0).Columns If (x < DS.Tables(0).Columns.Count) Then rowHeaders.Add(DC.ColumnName) End If Next Dim range = excelWorksheet.Cells("A3:A30") range.LoadFromCollection(rowHeaders) 'Add the meat and potatoes of report x = 2 For Each dTable As DataTable In DS.Tables columnData.Clear() For Each DR As DataRow In dTable.Rows For Each item As Object In DR.ItemArray columnData.Add(item) Next Next letter = Helper.GetColumnIndexToColumnLetter(x) excelWorksheet.Cells(letter + "3").LoadFromCollection(columnData) With excelWorksheet.Cells(letter + "3") .Formula = "=SUM(" + letter + "4:" + letter + "6)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "7") .Formula = "=SUM(" + letter + "8:" + letter + "11)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "12") .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "13") .Formula = "=SUM(" + letter + "14:" + letter + "20)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "21") .Formula = "=SUM(" + letter + "22:" + letter + "23)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "24") .Formula = "=SUM(" + letter + "25:" + letter + "26)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "27") .Formula = "=SUM(" + letter + "28:" + letter + "29)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "30") .Formula = "=SUM(" + letter + "3," + letter + "7," + letter + "12," + letter + "13," + letter + "21," + letter + "24," + letter + "27)" .Style.Font.Bold = True .Style.Font.Size = 12 End With x = x + 1 Next range.AutoFitColumns() 'send it to response Using stream As New MemoryStream(excelPackage.GetAsByteArray()) context.Response.Clear() context.Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" context.Response.AddHeader("content-disposition", "attachment; filename=filetest.xlsx") 
context.Response.OutputStream.Write(stream.ToArray(), 0, stream.ToArray().Length) context.Response.Flush() context.Response.Close() End Using End Using
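    The "locked for editing" message on the second open often has more to do with how the response is terminated than with EPPlus itself; Response.Close() in particular is commonly discouraged because it aborts the connection. Purely as a hedged variation to try (shown in C# rather than the original VB.NET, with an explicit Content-Length and CompleteRequest instead of Flush/Close), not a confirmed fix for this exact report:

    using System.Web;
    using OfficeOpenXml;

    public static class ExcelDownload
    {
        // Writes a finished ExcelPackage to the response as an .xlsx attachment.
        public static void Send(HttpContext context, ExcelPackage package, string fileName)
        {
            byte[] bytes = package.GetAsByteArray();

            HttpResponse response = context.Response;
            response.Clear();
            response.ContentType =
                "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
            response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
            response.AddHeader("Content-Length", bytes.Length.ToString());
            response.BinaryWrite(bytes);

            // Let ASP.NET finish the request instead of calling Response.Close().
            context.ApplicationInstance.CompleteRequest();
        }
    }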

    Read the article

  • Fluent NHibernate Automapping with RIA services

    - by VexXtreme
    Hi guys, I've run into a slight problem recently, or rather a gap in my understanding of how NHibernate automapping works with RIA data services. Namely, I don't understand how to use the Association and Include attributes. For instance, I've created two tables in my database and corresponding classes (which NHibernate fills correctly). The problem is that RIA doesn't generate the client-side properties (collections) bound by foreign key to other tables, even though I've defined them in my domain model classes; on the client side it generates only the properties that belong to the class itself. I assumed these attributes weren't necessary, since the NHibernate automapper is supposed to fill those collections on its own... I'm quite confused as to how this works, and I don't understand why RIA simply skips properties such as public virtual IList<Medication> Medications { get; set; } during autogeneration. Any input is appreciated. Thanks

    Read the article

  • Is there a 'RadLabel' from Telerik?

    - by Young Ninja
    I use the "Label" attribute on Telerik controls quite frequently. I like it because it helps me structure tables consistently. An example: <ul class="box"> <li><telerik:RadTextBox runat="server" Label="Name:" LabelCssClass="label" Enabled="false" Width="100%" /></li> <li><telerik:RadTextBox runat="server" ID="MachineSize" Label="Password:" LabelCssClass="label" Width="100%" /></li> </ul> I've run into a problem: I would like to keep the above layout/structure, but in some cases I have tables that simply display output (i.e. no user input). To stay consistent, I need something like a RadLabel that takes "Label" and "Text" inputs and aligns them appropriately within the overall table format. Is there such a thing?

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the documentation, by default a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but the slave still logs some changes to its binary logs. Let's look at the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and here is a sample query from reading these binary logs with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost all of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "Temporary" here means that they are dropped once the calculation is done.
I also tell the slave not to replicate any statement that matches the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is one exception table in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Flags) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are posted by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries against test tables on the master; they are copied to the relay logs, not to the binary logs, on the slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary logs the server id is the slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs. As I said above: I've been trying some INSERT/UPDATE queries against test tables on the master; they are copied to the relay logs, not the binary logs, and as you can guess, the IO thread copied them to the relay logs, not the binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from?
Here are few last: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;
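    For what it's worth, a minimal sketch of one likely explanation, assuming those statements are issued by clients connected directly to the slave rather than arriving via replication: --log-slave-updates only governs what the replication SQL thread writes, and replicate-wild-ignore-table only filters what that thread applies; ordinary local writes on the slave are still written to the slave's binary log. If that is the source, the session doing the temporary-table work can switch off its own logging (requires the SUPER privilege):

        -- run by the client session on the slave that creates the norep_ temporary tables
        SET SESSION sql_log_bin = 0;
        CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY;
        -- ... calculation work against the temporary table ...
        DROP TEMPORARY TABLE norep_SplitValues;
        SET SESSION sql_log_bin = 1;

    Checking whether the server id of the offending events is the slave's own id (3 here) rather than the master's (1) is the quickest way to confirm or rule out this explanation.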

    Read the article

  • SQL Server add primary key

    - by Paul
    I have a table that needs to be given a new primary key, as my predecessor used a varchar(8) column as the primary key, and we are having problems with it now. I know how to add the primary key, but I am not sure of the correct way to propagate this new primary key to the other tables that hold the foreign key. Here is what I have: users table: old_user_id varchar(8) ... ... new_user_id int(11) orders table: order_id int(11) ... ... old_user_fk varchar(8) new_user_fk int(11) I need to get the same results whether I join the tables on users.old_user_id=orders.old_user_fk or users.new_user_id=orders.new_user_fk. Any help is appreciated.
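    A minimal sketch of the usual migration, assuming SQL Server syntax, that new_user_id is already populated, unique and NOT NULL, and a made-up name for the existing primary key constraint:

        -- 1. Move the primary key from the varchar column to the int column
        --    (any FK constraints still pointing at old_user_id must be dropped first)
        ALTER TABLE users DROP CONSTRAINT PK_users;          -- existing PK constraint name assumed
        ALTER TABLE users ADD CONSTRAINT PK_users_new PRIMARY KEY (new_user_id);

        -- 2. Backfill the new foreign key column in orders from the old relationship
        UPDATE o
        SET    o.new_user_fk = u.new_user_id
        FROM   orders o
        JOIN   users u ON u.old_user_id = o.old_user_fk;

        -- 3. Enforce the new relationship
        ALTER TABLE orders ADD CONSTRAINT FK_orders_users
            FOREIGN KEY (new_user_fk) REFERENCES users (new_user_id);

        -- 4. Only after verifying that both joins return identical rows:
        -- ALTER TABLE orders DROP COLUMN old_user_fk;
        -- ALTER TABLE users  DROP COLUMN old_user_id;

    The int(11) notation in the question looks MySQL-ish, so adjust the DDL syntax to whichever engine this really is; the overall order (re-key the parent, backfill the child, add the FK, then drop the old columns) stays the same.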

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is as frequently as I reasonably can, but to be pragmatic, as a benchmark let's say we are hoping for every 15 min) and feed it into a data warehouse. How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this drops significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP side is active 24x5.5. Best solution? From what I can piece together, the most practical solution is as follows: create a TRIGGER to write all DML activity to a rotating CSV log file, perform whatever transformations are required, then use the native DW data-pump tool to efficiently pump the transformed CSV into the DW. Why this approach? TRIGGERS allow selected tables to be targeted rather than being system-wide, the output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy; SLONY uses a similar approach and its overhead is acceptable; CSV is easy and fast to transform; and it is easy to pump CSV into the DW. Alternatives considered: Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. It would certainly make the admin easier as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log). Querying the data directly via an ETL tool such as Talend and pumping it into the DW; the problem is the OLTP schema would need to be tweaked to support this, and that has many negative side effects. Using a tweaked/hacked SLONY; SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner. Using the WAL. Has anyone done this before? Want to share your thoughts?
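    A rough sketch of the TRIGGER route, assuming a hypothetical OLTP table orders(id, amount); writing straight to a CSV file from a trigger isn't practical, so a staging table stands in for the rotating log and a scheduled job exports it with COPY every 15 minutes:

        -- changelog table the trigger writes into
        CREATE TABLE orders_changelog (
            changed_at timestamptz NOT NULL DEFAULT now(),
            operation  text        NOT NULL,
            id         integer,
            amount     numeric
        );

        CREATE OR REPLACE FUNCTION orders_log_dml() RETURNS trigger AS $$
        BEGIN
            IF TG_OP = 'DELETE' THEN
                INSERT INTO orders_changelog (operation, id, amount)
                VALUES (TG_OP, OLD.id, OLD.amount);
                RETURN OLD;
            END IF;
            INSERT INTO orders_changelog (operation, id, amount)
            VALUES (TG_OP, NEW.id, NEW.amount);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER orders_etl_log
        AFTER INSERT OR UPDATE OR DELETE ON orders
        FOR EACH ROW EXECUTE PROCEDURE orders_log_dml();

        -- a cron job could then export and clear the staging table:
        -- COPY orders_changelog TO '/tmp/orders_changes.csv' WITH CSV;
        -- TRUNCATE orders_changelog;

    One trigger function per audited table keeps the columns explicit and the export trivially CSV-shaped, at the cost of a little boilerplate per table.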

    Read the article

  • Auto_increment values in InnoDB?

    - by Timmy
    I've been using InnoDB for a project and relying on auto_increment. This is not a problem for most of the tables, but for tables with deletions it might be an issue: AUTO_INCREMENT Handling in InnoDB, particularly this part: for an AUTO_INCREMENT column named ai_col: After a server startup, for the first insert into a table t, InnoDB executes the equivalent of this statement: SELECT MAX(ai_col) FROM t FOR UPDATE; InnoDB increments by one the value retrieved by the statement and assigns it to the column and to the auto-increment counter for the table. This is a problem because, while the key stays unique within the table itself, other tables hold foreign keys that still reference deleted rows, and after a restart those key values can be handed out again, so they are no longer unique across the system. The MySQL server does not (and should not) restart often, but when it does, this breaks. Are there any easy ways around this?
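    A minimal sketch of one workaround: stop relying on the InnoDB counter and hand out IDs from a one-row sequence table that survives restarts (the table t and its payload column are assumed for illustration):

        CREATE TABLE t_sequence (
            next_val BIGINT NOT NULL
        ) ENGINE=InnoDB;
        INSERT INTO t_sequence VALUES (0);

        -- grab the next id; LAST_INSERT_ID(expr) also stores the value for this session
        UPDATE t_sequence SET next_val = LAST_INSERT_ID(next_val + 1);
        INSERT INTO t (ai_col, payload) VALUES (LAST_INSERT_ID(), 'example row');

    Because the counter row persists on disk, a deleted maximum value is never reissued after a restart, so the stale foreign keys keep pointing at ids that will not be reused.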

    Read the article

  • SQL query to return a data that both creteria exist in one table

    - by Ali
    Dear all, I have two tables of data with the following fields: table1=(ITTAG,ITCODE,ITDESC,SUPcode) table2=(ACCODE,ACNAME,ROUTE,SALMAN) table2 is my customer master table, which contains my customer data such as customer code, customer name and so on... Every route has a supervisor (table1=supcode), and I need to get the supervisor name in my result even though both the supervisor name and code live in the one table. table1 contains all the names, distinguished by ITTAG; for example, a supervisor name's ITTAG='K' and a salesman name's ITTAG='S'. ITTAG ITCODE ITDESC SUPCODE ------ ------ ------ ------- S JT JOHN TOMAS TF K WK VIKI KOO NULL Now this is the result I want: ACCODE ACNAME ROUTE SALEMANNAME SUPERVISORNAME ------- ------ ------ ------------ --------------- IMC1010 ABC HOTEL 01 JOHN TOMAS VIKI KOO I hope this information is sufficient to get the query. Thanks, Ali
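    A sketch of the query, assuming table2.SALMAN holds the salesman's ITCODE and the salesman row's SUPCODE holds the supervisor's ITCODE (the sample rows above don't quite line up with that, so adjust the join columns to match the real data):

        SELECT  c.ACCODE,
                c.ACNAME,
                c.ROUTE,
                s.ITDESC   AS SALEMANNAME,
                sup.ITDESC AS SUPERVISORNAME
        FROM    table2 c
        LEFT JOIN table1 s   ON s.ITCODE   = c.SALMAN  AND s.ITTAG   = 'S'
        LEFT JOIN table1 sup ON sup.ITCODE = s.SUPCODE AND sup.ITTAG = 'K';

    The trick is simply joining table1 twice under different aliases, once for the salesman row and once for the supervisor row.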

    Read the article

  • Entity Framework One-To-One Mapping Issues

    - by Baddie
    Using VS 2010 beta 2, ASP.NET MVC. I tried to create an Entity Framework model and pulled the data from my database. There were some issues with the relationships, so I started to tweak things, but I kept getting the following error for simple one-to-one relationships: Error 1 Error 113: Multiplicity is not valid in Role 'UserProfile' in relationship 'FK_UserProfiles_Users'. Because the Dependent Role properties are not the key properties, the upper bound of the multiplicity of the Dependent Role must be *. myEntities.edmx 2024 My Users table has some other many-to-many relationships to other tables, but when I try to make a one-to-one relationship with another table, that error pops up. Users Table UserID Username Email etc.. UserProfiles Table UserProfileID UserID (FK for Users Table) Location Birthday
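    Error 113 usually means the dependent end's key is not lined up: for a true 1:1 (or 1:0..1), Entity Framework wants the dependent table's primary key to also be the foreign key, i.e. a shared primary key instead of the separate UserProfileID surrogate. A sketch of that shape, using the column names above:

        CREATE TABLE Users (
            UserID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
            Username NVARCHAR(50)  NOT NULL,
            Email    NVARCHAR(255) NOT NULL
            -- etc.
        );

        CREATE TABLE UserProfiles (
            UserID   INT NOT NULL PRIMARY KEY,      -- PK and FK: at most one profile per user
            Location NVARCHAR(100) NULL,
            Birthday DATETIME NULL,
            CONSTRAINT FK_UserProfiles_Users
                FOREIGN KEY (UserID) REFERENCES Users (UserID)
        );

    With the shared key, the designer can map Users 1 to UserProfiles 0..1 without the multiplicity complaint; keeping UserProfileID as the key forces the relationship to be modelled as one-to-many, which is exactly what the error message is pushing towards.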

    Read the article

  • Can not delete row from MySQL

    - by Drew
    Howdy all, I've got a table, which won't delete a row. Specifically, when I try to delete any row with a GEO_SHAPE_ID over 150000000 it simply does not disappear from the DB. I have tried: SQLyog to erase it. DELETE FROM TABLE WHERE GEO_SHAPE_ID = 150000042 (0 rows affected). UNLOCK TABLES then 2. As far as I am aware, bigint is a valid candidate for auto_increment. Anyone know what could be up? You gotta help us, Doc. We’ve tried nothin’ and we’re all out of ideas! DJS. PS. Here is the table construct and some sample data just for giggles. CREATE TABLE `GEO_SHAPE` ( `GEO_SHAPE_ID` bigint(11) NOT NULL auto_increment, `RADIUS` float default '0', `LATITUDE` float default '0', `LONGITUDE` float default '0', `SHAPE_TYPE` enum('Custom','Region') default NULL, `PARENT_ID` int(11) default NULL, `SHAPE_POLYGON` polygon default NULL, `SHAPE_TITLE` varchar(45) default NULL, `SHAPE_ABBREVIATION` varchar(45) default NULL, PRIMARY KEY (`GEO_SHAPE_ID`) ) ENGINE=MyISAM AUTO_INCREMENT=150000056 DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC; SET FOREIGN_KEY_CHECKS = 0; LOCK TABLES `GEO_SHAPE` WRITE; INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (57, NULL, NULL, NULL, 'Region', 10, NULL, 'Washington', 'WA'); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (58, NULL, NULL, NULL, 'Region', 10, NULL, 'West Virginia', 'WV'); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (59, NULL, NULL, NULL, 'Region', 10, NULL, 'Wisconsin', 'WI'); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000042, 10, -33.8833, 151.217, 'Custom', NULL, NULL, 'Sydney%2C%20New%20South%20Wales%20%2810km%20r', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000043, 10, -33.8833, 151.167, 'Custom', NULL, NULL, 'Annandale%2C%20New%20South%20Wales%20%2810km%', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000048, 10, -27.5, 153.017, 'Custom', NULL, NULL, 'Brisbane%2C%20Queensland%20%2810km%20radius%2', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000045, 10, 43.1002, -75.2956, 'Custom', NULL, NULL, 'New%20York%20Mills%2C%20New%20York%20%2810km%', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000046, 10, 40.1117, -78.9258, 'Custom', NULL, NULL, 'Region1', NULL); UNLOCK TABLES; SET FOREIGN_KEY_CHECKS = 1;
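    A diagnostic sketch worth trying first: with MyISAM and DELAY_KEY_WRITE=1, an unclean shutdown can leave the index out of step with the data file, which produces exactly this kind of silent mismatch between what SELECT shows and what DELETE can find:

        CHECK TABLE GEO_SHAPE;           -- reports corruption if index and data are out of sync
        REPAIR TABLE GEO_SHAPE;          -- MyISAM-only; rebuilds the index file

        -- then confirm the row is actually matched by the DELETE
        SELECT GEO_SHAPE_ID FROM GEO_SHAPE WHERE GEO_SHAPE_ID = 150000042;
        DELETE FROM GEO_SHAPE WHERE GEO_SHAPE_ID = 150000042;
        SELECT ROW_COUNT();              -- 0 here means the WHERE matched nothing, not that the delete failed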

    Read the article

  • DataMapper and Codeigniter - how to fully manage a many-to-many relationship

    - by user252160
    I have the following relationship: CType has many Field, Field has many CType, and of course the tables used are fields, ctypes, and ctypes_fields. Normally, the ctypes_fields table should contain only ids referencing the other two tables. However, I'd like to put an "options" field there that will contain some content specific to that particular relationship only. I walked through all the DataMapper tutorials but couldn't find relevant info on how to extract these specific values (the ones in the relation table). Should I create a separate model for this relation table, or is there a cleverer way to do this?
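    For what it's worth, the usual answer is yes: give the relation table its own model so the extra column becomes an ordinary property on that model, related many-to-one to both CType and Field. The table itself would then look something like this (column names assumed, following CodeIgniter DataMapper's id/ctype_id/field_id conventions):

        CREATE TABLE ctypes_fields (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
            ctype_id INT UNSIGNED NOT NULL,
            field_id INT UNSIGNED NOT NULL,
            options  TEXT NULL,                          -- payload specific to this ctype/field pair
            PRIMARY KEY (id),
            UNIQUE KEY uq_ctype_field (ctype_id, field_id)
        );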

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >