Search Results

Search found 16329 results on 654 pages for 'b long'.


  • Must I expose the aggregate children as public properties to implement persistence ignorance?

    - by xuehua
    Hi all, I'm very glad that i found this website recently, I've learned a lot from here. I'm from China, and my English is not so good. But i will try to express myself what i want to say. Recently, I've started learning about Domain Driven Design, and I'm very interested about it. And I plan to develop a Forum website using DDD. After reading lots of threads from here, I understood that persistence ignorance is a good practice. Currently, I have two questions about what I'm thinking for a long time. Should the domain object interact with repository to get/save data? If the domain object doesn't use repository, then how does the Infrastructure layer (like unit of work) know which domain object is new/modified/removed? For the second question. There's an example code: Suppose i have a user class: public class User { public Guid Id { get; set; } public string UserName { get; set; } public string NickName { get; set; } /// <summary> /// A Roles collection which represents the current user's owned roles. /// But here i don't want to use the public property to expose it. /// Instead, i use the below methods to implement. /// </summary> //public IList<Role> Roles { get; set; } private List<Role> roles = new List<Role>(); public IList<Role> GetRoles() { return roles; } public void AddRole(Role role) { roles.Add(role); } public void RemoveRole(Role role) { roles.Remove(role); } } Based on the above User class, suppose i get an user from the IUserRepository, and add an Role for it. IUserRepository userRepository; User user = userRepository.Get(Guid.NewGuid()); user.AddRole(new Role() { Name = "Administrator" }); In this case, i don't know how does the repository or unit of work can know that user has a new role? I think, a real persistence ignorance ORM framework should support POCO, and any changes occurs on the POCO itself, the persistence framework should know automatically. Even if change the object status through the method(AddRole, RemoveRole) like the above example. I know a lot of ORM can automatically persistent the changes if i use the Roles property, but sometimes i don't like this way because of the performance reason. Could anyone give me some ideas for this? Thanks. This is my first question on this site. I hope my English can be understood. Any answers will be very appreciated.

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data. The Set-up Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully. The Challenge I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are: You have to achieve a certain number of points in each category to move up. If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level. The number of Categories is small (five currently) and unlikely to change often, but by no means absolutely fixed. The number of points needed to level-up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches: Possible Solutions Add a column to the users table for each category and reset them all to zero each time a user levels-up. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a Many-to-Many version of #1.) Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the User table. Pros & Cons (1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care about that. (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges. (3) Is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like: SELECT categoryId, SUM(points) from UserTasks WHERE userId={user.id} & countsTowardLevel={user.level} groupBy categoryId Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.

    Read the article

  • Reflector error or optimisation?

    - by David_001
    Long story short: I used reflector on the System.Security.Util.Tokenizer class, and there's loads of goto statements in there. Here's a brief example snippet: Label_0026: if (this._inSavedCharacter != -1) { num = this._inSavedCharacter; this._inSavedCharacter = -1; } else { switch (this._inTokenSource) { case TokenSource.UnicodeByteArray: if ((this._inIndex + 1) < this._inSize) { break; } stream.AddToken(-1); return; case TokenSource.UTF8ByteArray: if (this._inIndex < this._inSize) { goto Label_00CF; } stream.AddToken(-1); return; case TokenSource.ASCIIByteArray: if (this._inIndex < this._inSize) { goto Label_023C; } stream.AddToken(-1); return; case TokenSource.CharArray: if (this._inIndex < this._inSize) { goto Label_0272; } stream.AddToken(-1); return; case TokenSource.String: if (this._inIndex < this._inSize) { goto Label_02A8; } stream.AddToken(-1); return; case TokenSource.NestedStrings: if (this._inNestedSize == 0) { goto Label_030D; } if (this._inNestedIndex >= this._inNestedSize) { goto Label_0306; } num = this._inNestedString[this._inNestedIndex++]; goto Label_0402; default: num = this._inTokenReader.Read(); if (num == -1) { stream.AddToken(-1); return; } goto Label_0402; } num = (this._inBytes[this._inIndex + 1] << 8) + this._inBytes[this._inIndex]; this._inIndex += 2; } goto Label_0402; Label_00CF: num = this._inBytes[this._inIndex++]; if ((num & 0x80) != 0) { switch (((num & 240) >> 4)) { case 8: case 9: case 10: case 11: throw new XmlSyntaxException(this.LineNo); case 12: case 13: num &= 0x1f; num3 = 2; break; case 14: num &= 15; num3 = 3; break; case 15: throw new XmlSyntaxException(this.LineNo); } if (this._inIndex >= this._inSize) { throw new XmlSyntaxException(this.LineNo, Environment.GetResourceString("XMLSyntax_UnexpectedEndOfFile")); } byte num2 = this._inBytes[this._inIndex++]; if ((num2 & 0xc0) != 0x80) { throw new XmlSyntaxException(this.LineNo); } num = (num << 6) | (num2 & 0x3f); if (num3 != 2) { if (this._inIndex >= this._inSize) { throw new XmlSyntaxException(this.LineNo, Environment.GetResourceString("XMLSyntax_UnexpectedEndOfFile")); } num2 = this._inBytes[this._inIndex++]; if ((num2 & 0xc0) != 0x80) { throw new XmlSyntaxException(this.LineNo); } num = (num << 6) | (num2 & 0x3f); } } goto Label_0402; Label_023C: num = this._inBytes[this._inIndex++]; goto Label_0402; Label_0272: num = this._inChars[this._inIndex++]; goto Label_0402; Label_02A8: num = this._inString[this._inIndex++]; goto Label_0402; Label_0306: this._inNestedSize = 0; I essentially wanted to know how the class worked, but the number of goto's makes it impossible. Arguably something like a Tokenizer class needs to be heavily optimised, so my question is: is Reflector getting it wrong, or is goto an optimisation for this class?

    Read the article

  • How is it possible my array is broken?

    - by user1812765
    I have this piece of code: public lot merge (lot otherlot){ wafer[] mWaferarray = new wafer[16]; byte[] bytearray = new byte[16]; wafer resultwafer = new wafer(bytearray); wafer w1; wafer w2; int i; int[][] assignmentmatrix = HungarianAlgorithm.computeAssignments(convertinttofloat (solutionmatrix(otherlot))); for (i=0; i != assignmentmatrix.length ;i++){ w1 = otherlot.getWaferarray()[assignmentmatrix[i][0]]; w2 = getWaferarray()[assignmentmatrix[i][1]]; resultwafer.setWafer(w1.wafercompare(w2)); mWaferarray[i] = resultwafer; mWaferarray[i].print(); } System.out.println("HERE\n"); mWaferarray[5].toString(); resultlot = new lot(mWaferarray); resultlot.print();// Problem occurs here. return resultlot; } As you can see I create an array of wafers (selfdefined class). Then I fill this up with new wafers. When I print this array (mWaferarray[i].print()) it gives me the wanted results. But when I go out of the "for"-loop the array is broken and it is as if the last item I add to mWaferarray fills it up (the entire array, 16 long, is filled with this wafer). So if run this program this is what I get: 1011110010111100 0011011111111110 0111110111101101 1010111001101111 0110110111101111 1010110101111010 1010110111011110 1011111010111100 1111110011101110 0111111111011011 1111111111011010 1101111011111010 1010110101011110 0101111011011010 1011111011011000 0101111011011010 HERE 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 As you can see it is as if the array is filled with the last wafer. I have been looking at this for some time now, I hope you guy can help me out. Thx in advance PS: my print functions are written like this: void print(){ int j; for (j=0; j != waferarray.length ;j++){ waferarray[j].print(); } } EDIT: added code for lot this is the beginning of the lot class public class lot { wafer[] waferarray = new wafer[16]; lot resultlot; public lot (wafer wafer1,wafer wafer2,wafer wafer3,wafer wafer4, wafer wafer5,wafer wafer6,wafer wafer7,wafer wafer8, wafer wafer9,wafer wafer10,wafer wafer11,wafer wafer12, wafer wafer13,wafer wafer14,wafer wafer15,wafer wafer16){ waferarray[0] = wafer1; waferarray[1] = wafer2; waferarray[2] = wafer3; waferarray[3] = wafer4; waferarray[4] = wafer5; waferarray[5] = wafer6; waferarray[6] = wafer7; waferarray[7] = wafer8; waferarray[8] = wafer9; waferarray[9] = wafer10; waferarray[10] = wafer11; waferarray[11] = wafer12; waferarray[12] = wafer13; waferarray[13] = wafer14; waferarray[14] = wafer15; waferarray[15] = wafer16; } public lot (wafer[] thiswaferarray){ waferarray = thiswaferarray; }

    Read the article

  • jQuery - Stuck Animation

    - by v1n_vampire
    I'm kind of tired with Javascript long script for animation and decide to try jQuery, but it seems I'm stuck even at the simplest code. CSS: #menu {float: right; font: italic 16px/16px Verdana, Geneva, sans-serif;} ul#nav {list-style: none;} ul#nav li {float: left; padding-right: 10px;} ul#nav li a {color: white;} ul#subnav {float: right; list-style: none; padding: 0; display: none;} ul#subnav li {float: left; padding: 10px 5px; white-space: nowrap;} ul#subnav li a {color: white;} Script: $(document).ready(function() { $('.nav').hover(function() { $(this).find('#subnav').stop().animate({width:'toggle'},350); }); }); HTML: <div id="menu"> <ul id="nav"> <li class="nav"> <a href="#"><img src="images/icon-home.png" width="36" height="36"/></a> <ul id="subnav"> <li><a href="#">Home</a></li> </ul> </li> <li class="nav"> <a href="#"><img src="images/icon-signin.png" width="36" height="36"/></a> <ul id="subnav"> <li><a href="#">Sign In</a></li> </ul> </li> <li class="nav"> <a href="#"><img src="images/icon-register.png" width="36" height="36"/></a> <ul id="subnav"> <li><a href="#">Create Account</a></li> </ul> </li> <li class="nav" style="padding-right: 0;"> <a href="#"><img src="images/icon-mail.png" width="36" height="36"/></a> <ul id="subnav"> <li style="padding-right: 0;"><a href="#">Contact Us</a></li> </ul> </li> </ul> </div> Here's the sample page: http://v1n-vampire.com/dev/jq-animation-stuck If you continue to hover on the nav from left to right then back to left, eventually the animation will stuck. How to solve this? Thank you in advance.

    Read the article

  • VBScript Multiple folder check if then statement

    - by user2868186
    I had this working before just fine with the exception of getting an error if one of the folders was not there, so I tried to fix it. Searched for a while (as much as I can at work) for a solution and tried different methods, still no luck and my IT tickets are stacking up at work, lol, woohoo. Thanks for any help provided. Getting syntax error on line 60 character 60, thanks again. Option Explicit Dim objFSO, Folder1, Folder2, Folder3, zipFile Dim ShellApp, zip, oFile, CurDate, MacAdd, objWMIService Dim MyTarget, MyHex, MyBinary, i, strComputer, objItem, FormatMAC Dim oShell, oCTF, CurDir, scriptPath, oRegEx, colItems Dim FoldPath1, FoldPath2, FoldPath3, foldPathArray Const FOF_SIMPLEPROGRESS = 256 'Grabs MAC from current machine strComputer = "." Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") Set colItems = objWMIService.ExecQuery _ ("Select * From Win32_NetworkAdapterConfiguration Where IPEnabled = True") For Each objItem in colItems MacAdd = objItem.MACAddress Next 'Finds the pattern of a MAC address then changes it for 'file naming purposes. You can change the FormatMAC line of the code 'in parenthesis where the periods are, to whatever you like 'as long as its within the standard file naming convention Set oRegEx = CreateObject("VBScript.RegExp") oRegEx.Pattern = "([\dA-F]{2}).?([\dA-F]{2}).?([\dA-F]" _ & "{2}).?([\dA-F]{2}).?([\dA-F]{2}).?([\dA-F]{2})" FormatMAC = oRegEx.Replace(MacAdd, "$1.$2.$3.$4.$5.$6") 'Gets current date in a format for file naming 'Periods can be replaced with anything that is standard to 'file naming convention CurDate = Month(Date) & "." & Day(Date) & "." & Year(Date) 'Gets path of the directory where the script is being ran from Set objFSO = CreateObject("Scripting.FileSystemObject") scriptPath = Wscript.ScriptFullName Set oFile = objFSO.GetFile(scriptPath) CurDir = objFSO.GetParentFolderName(oFile) 'where and what the zip file will be called/saved MyTarget = CurDir & "\" & "IRAP_LOGS_" & CurDate & "_" & FormatMAC & ".zip" 'Actual creation of the zip file MyHex = Array(80, 75, 5, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,0, 0) For i = 0 To UBound(MyHex) MyBinary = MyBinary & Chr(MyHex(i)) Next Set oShell = CreateObject("WScript.Shell") Set oCTF = objFSO.CreateTextFile(MyTarget, True) oCTF.Write MyBinary oCTF.Close Set oCTF = Nothing wScript.Sleep(3000) folder1 = True folder2 = True folder3 = True 'Adds folders to the zip file created earlier 'change these folders to whatever is needing to be copied into the zip folder 'Folder1 If not objFSO.FolderExists("C:\Windows\Temp\SMSTSLog") and If not objFSO.FolderExists("X:\Windows\Temp\SMSTSLog") then Folder1 = false End If If objFSO.FolderExists("C:\Windows\Temp\SMSTSLog") Then Folder1 = "C:\Windows\Temp\SMSTSLog" Set FoldPath1 = objFSO.getFolder(Folder1) Else Folder1 = "X:\windows\Temp\SMSTSLog" Set FoldPath1 = objFSO.getFolder(Folder1) End If 'Folder2 If not objFSO.FolderExists("C:\Windows\System32\CCM\Logs") and If not objFSO.FolderExists("X:\Windows\System32\CCM\Logs") then Folder2 = false End If If objFSO.FolderEXists("C:\Windows\System32\CCM\Logs") Then Folder2 = "C:\Windows\System32\CCM\Logs" Set FoldPath2 = objFSO.getFolder(Folder2) Else Folder2 = "X:\Windows\System32\CCM\Logs" Set FoldPath2 = objFSO.getFolder(Folder2) End If 'Folder3 If not objFSO.FolderExists("C:\Windows\SysWOW64\CCM\Logs") and If not objFSO.FolderExists("X:\Windows\SysWOW64\CCM\Logs") then Folder3 = false End If If objFSO.FolderExists("C:\Windows\SysWOW64\CCM\Logs") Then Folder3 = 
"C:\Windows\SysWOW64\CCM\Logs" set FolderPath3 =objFSO.getFolder(Folder3) Else Folder3 = "X:\Windows\SysWOW64\CCM\Logs" Set FoldPath3 = objFSO.getFolder(Folder3) End If set objFSO = CreateObject("Scripting.FileSystemObject") objFSO.OpenTextFile(MyTarget, 2, True).Write "PK" & Chr(5) & Chr(6) _ & String(18, Chr(0)) Set ShellApp = CreateObject("Shell.Application") Set zip = ShellApp.NameSpace(MyTarget) 'checks if files are there before trying to copy 'otherwise it will error out If folder1 = True And FoldPath1.files.Count >= 1 Then zip.CopyHere Folder1 End If WScript.Sleep 3000 If folder2 = true And FoldPath2.files.Count >= 1 Then zip.CopyHere Folder2 End If WScript.Sleep 3000 If folder3 = true And FoldPath3.files.Count >= 1 Then zip.CopyHere Folder3 End If WScript.Sleep 5000 set ShellApp = Nothing set ZipFile = Nothing Set Folder1 = Nothing Set Folder2 = Nothing Set Folder3 = Nothing createobject("wscript.shell").popup "Zip File Created Successfully", 3

    Read the article

  • Best way to program a call to php

    - by hairdresser-101
    I've recently posted here http://stackoverflow.com/questions/2627645/accessing-session-when-using-file-get-contents-in-php about a problem I was having and the general consensus is that I'm not doing it right... while I generally think "as long as it works..." I thought I'd get some feedback on how I could do it better... I was to send the exact same email in the exact same format from multiple different areas. When a job is entered (automatically as a part of the POST) Manually when reviewing jobs to re-assign to another installer The original script is a php page which is called using AJAX to send the work order request - this worked by simply calling a standard php page, returning the success or error message and then displaying within the calling page. Now I have tried to use the same page within the automated job entry so it accepts the job via a form, logs it and mails it. My problem is (as you can see from the original post) the function file_get_contents() is not good for this cause in the automated script... My problem is that from an AJAX call I need to do things like include the database connection initialiser, start the session and do whatever else needs to be done in a standalone page... Some or all of these are not required if it is an include so it makes the file only good for one purpose... How do I make the file good for both purposes? I guess I'm looking for recommendations for the best file layout and structure to cater for both scenarios... The current file looks like: <?php session_start(); $order_id = $_GET['order_id']; include('include/database.php'); function getLineItems($order_id) { $query = mysql_query("SELECT ...lineItems..."); //Print rows with data while($row = mysql_fetch_object($query)) { $lineItems .= '...Build Line Item String...'; } return $lineItems; } function send_email($order_id) { //Get data for current job to display $query = mysql_query("SELECT ...Job Details..."); $row = mysql_fetch_object($query); $subject = 'Work Order Request'; $email_message = '...Build Email... ...Include Job Details... '.getLineItems($order_id).' ...Finish Email...'; $headers = '...Create Email Headers...'; if (mail($row->primary_email, $subject, $email_message, $headers)) { $query = mysql_query("...log successful send..."); if (mysql_error()!="") { $message .= '...display mysqlerror()..'; } $message .= '...create success message...'; } else { $query = mysql_query("...log failed send..."); if (mysql_error()!="") { $message .= '...display mysqlerror()..'; } $message .= '...create failed message...'; } return $message; } // END send_email() function //Check supplier info $query = mysql_query("...get suppliers info attached to order_id..."); if (mysql_num_rows($query) > 0) { while($row = mysql_fetch_object($query)) { if ($row->primary_email=="") { $message .= '...no email message...'; } else if ($row->notification_email=="") { $message .= '...no notifications message...'; } else { $message .= send_email($order_id); } } } else { $message .= '...no supplier matched message...'; } print $message; ?>

    Read the article

  • Shrinking image by 57% and centering inside css structure

    - by Johua
    Hy, i'm really stuck. I'll go step by step and hope to make it short. This is the html structure: <li class="FAVwithimage"> <a href=""> <img src="pics/Joshua.png"> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Before i paste the css classes, some info about the exact goal to accomplish: Resize the picture (img) by 57%. If it cannot be done with css, then jquery/javascript solution. For example: Original pic is 240x240px, i need to resize it by 57%. That means that a pic of 400x400 would be bigger after resizing. After resizing, the picture needs to be centered vertical&horizontal inside a: 68x90 boundaries. So you have an LI element, wich has an A element, and inside A we have IMG, IMG is resized by 57% and centered where the maximum width can be of course 68px and maximum height 90px. No for that to work i was adding a SPAN element arround the IMG. This is what i was thinking: <li class="FAVwithimage"> <a href=""> <span class="picHolder"><img src="pics/Joshua.png"></span> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Then i would give the span element: display:block and w=68px, h=90px. But unforunatelly that didn't work. I know it's a long post but i'v did my best to describe it very simple. Beneath are the css classes and a picture to see what i need. li.FAVwithimage { height: 90px!important; } li.FAVwithimage a, li.FAVwithimage:hover a { height: 81px!important; } That's it what's relevant. I have not included the classes for: name,comment,arrow And now the classes that are incomplete and refer to IMG. li.FAVwithimage a span.picHolder{ /*put the picHolder to the beginning of the LI element*/ position: absolute; left: 0; top: 0; width: 68px; height: 90px; diplay:block; border:1px solid #F00; } Border is used just temporary to show the actuall picHolder. It is now on the beginning of LI, width and height is set. li.FAVwithimage span.picHolder img { max-width:68px!important; max-height:90px!important; } This is the class wich should shrink the pic by 57% and center inside picHolder Here I have a drawing describing what i need:

    Read the article

  • Strange behaviour of CUDA kernel

    - by username_4567
    I'm writing code for calculating prefix sum. Here is my kernel __global__ void prescan(int *indata,int *outdata,int n,long int *sums) { extern __shared__ int temp[]; int tid=threadIdx.x; int offset=1,start_id,end_id; int *global_sum=&temp[n+2]; if(tid==0) { temp[n]=blockDim.x*blockIdx.x; temp[n+1]=blockDim.x*(blockIdx.x+1)-1; start_id=temp[n]; end_id=temp[n+1]; //cuPrintf("Value of start %d and end %d\n",start_id,end_id); } __syncthreads(); start_id=temp[n]; end_id=temp[n+1]; temp[tid]=indata[start_id+tid]; temp[tid+1]=indata[start_id+tid+1]; for(int d=n>>1;d>0;d>>=1) { __syncthreads(); if(tid<d) { int ai=offset*(2*tid+1)-1; int bi=offset*(2*tid+2)-1; temp[bi]+=temp[ai]; } offset*=2; } if(tid==0) { sums[blockIdx.x]=temp[n-1]; temp[n-1]=0; cuPrintf("sums %d\n",sums[blockIdx.x]); } for(int d=1;d<n;d*=2) { offset>>=1; __syncthreads(); if(tid<d) { int ai=offset*(2*tid+1)-1; int bi=offset*(2*tid+2)-1; int t=temp[ai]; temp[ai]=temp[bi]; temp[bi]+=t; } } __syncthreads(); if(tid==0) { outdata[start_id]=0; } __threadfence_block(); __syncthreads(); outdata[start_id+tid]=temp[tid]; outdata[start_id+tid+1]=temp[tid+1]; __syncthreads(); if(tid==0) { temp[0]=0; outdata[start_id]=0; } __threadfence_block(); __syncthreads(); if(blockIdx.x==0 && threadIdx.x==0) { for(int i=1;i<gridDim.x;i++) { sums[i]=sums[i]+sums[i-1]; } } __syncthreads(); __threadfence(); if(blockIdx.x==0 && threadIdx.x==0) { for(int i=0;i<gridDim.x;i++) { cuPrintf("****sums[%d]=%d ",i,sums[i]); } } __syncthreads(); __threadfence(); if(blockIdx.x!=gridDim.x-1) { int tid=(blockIdx.x+1)*blockDim.x+threadIdx.x; if(threadIdx.x==0) cuPrintf("Adding %d \n",sums[blockIdx.x]); outdata[tid]+=sums[blockIdx.x]; } __syncthreads(); } In above kernel, sums array will accumulate prefix sum per block and and then first thread will calculate prefix sum of this sum array. Now if I print this sum array from device side it'll show correct results while in cuPrintf("Adding %d \n",sums[blockIdx.x]); this line it prints that it is taking old value. What could be the reason?

    Read the article

  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed length records using Format Builder.   If it’s 10 pm and you’re feeling beat you might want to leave this until tomorrow.  But if it’s 10 pm and you need to get a Format Builder Complex template done, read on… The goal is to process individual orders from a file using the 11g File Adapter and Format Builder Sample Data =========== 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 301Hexagon Widget         1150.98 ORD6735502/01/2011 The records are fixed length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3 digit number, followed by a single Summary Record which starts with the constant ORD. How can this file be processed so that the first poll returns the first order? 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 And the second poll returns the second order? 301Hexagon Widget           1150.98 ORD6735502/01/2011 Note: if you need more than one order per poll, that’s also possible, see the “Multiple Messages” field in the “File Adapter Step 6 of 9” snapshot further down.   To follow along with this example you will need - Studio Edition Version 11.1.1.4.0    with the   - SOA Extension for JDeveloper 11.1.1.4.0 installed Both can be downloaded from here:  http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.     Start with a SOA Composite containing a File Adapter The Format Builder is part of the File Adapter so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps - Start JDeveloper - From the Main Menu choose File->New - In the New Gallery window that opens Expand the “General” category and Select the Applications node.   Then choose SOA Application from the Items section on the right.  Finally press the OK button. - In Step 1 of the “Create SOA Application wizard” that appears enter an Application Name and an Directory of your     choice,   then press the Next button. - In Step 2 of the “Create SOA Application wizard”, press the Next button leaving all entries as defaulted. - In Step 3 of the “Create SOA Application wizard”, Enter a composite name of your choice and Press the Finish   Button These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create. Drag and drop the File Adapter icon from the Component Pallette onto either the LEFT side of the diagram under “Exposed Services” or the right side under “External References”.  (See the Green Circle in the image below).  Placing the adapter on the left side would indicate the file being processed is inbound to the composite, if the adapter is placed on the right side then the data is outbound to a file.     Note that the same Format Builder definition can be used in both directions.  
For example we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process and then use the same Format Builder definition with a File adapter on the right side of the composite to write the data back out in the same fixed data format When the File Adapter is dropped on the Composite the File Adapter Wizard Appears. Skip Past the first page, Step 1 of 9 by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next   When the Native Format Builder appears, skip the welcome page by pressing next. Also press the Next button to accept the settings on Step 3 of 9 On Step 4, select Read File and press the Next button as shown below.   On Step 5 enter a directory that will contain a file with the input data, then  Press the Next button as shown below. In step 6, enter *.txt or another file format to select input files from the input directory mentioned in step 5. ALSO check the “Files contain Multiple Messages” checkbox and set the “Publish Messages in Batches of” field to 1.  The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter.  In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.   Skip Step 7 by pressing the Next button In Step 8 press the Gear Icon on the right side to load the Native Format Builder.       Native Format Builder  appears Before diving into the format, here is an overview of the process. Approach - Bottom up Assuming an Order is a grouping of item records and a summary record…. - Define a separate  Complex Type for each Record Type found in the group.    (One for itemRecord and one for summaryRecord) - Define a Complex Type to contain the Group of Record types defined above   (LogicalOrderRecord) - Define a top level element to represent an order.  (order)   The order element will be of type LogicalOrderRecord   Defining the Format In Step 1 select   “Create new”  and  “Complex Type” and “Next”   In Step two browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button     In Step 3 Complex types must be define for each type of input record. Select the Root-Element and Click on the Add Complex Type icon This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the  <new_complex_type> Format Builder introspects the data and provides a grid to define additional fields. Change the “Complex Type Name” to  “itemRecord” Then click on the ruler to indicate the position of fixed columns.  Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below fields are define in columns 0-3, 4-28, 29-eol When the field definitions are correct, press the “Generate Fields” button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost  When all the fields are correctly defined press OK to save the complex type.        Next, the process is repeated to define a Complex Type for the SummaryRecord. 
Select the Root-Element in the schema tree and press the new complex type icon Then highlight and drag the Summary Record from the sample data onto the <new_complex_type>   Change the complex type name to “summaryRecord” Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively.   The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the new complex type icon Select the “<new_complex_type>” entry and click the pencil icon   On the Complex Type Details page change the name and type of each input field. Change line 1 to be named item and set the Type  to “itemRecord” Change line 2 to be named summary and set the Type to “summaryRecord” We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the “Max Occurs” entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord.  Since each item record has “.” in column 32 we can use this fact to differentiate an item record from a summary record. Change the “Look Ahead” field to value 32 and enter a period in the “Look For” field Press the OK button to save entry.     Finally, its time to create a top level element to represent an order. Select the “Root-Element” in the schema tree and press the New element icon Click on the <new_element> and press the pencil icon.   Set the Element Name to “order” and change the Data Type to “logicalOrderRecord” Press the OK button to save the element definition.   The final definition should match the screenshot below. Press the Next Button to view the definition source.     Press the Test Button to test the definition   Press the Green Triangle Icon to run the test.   And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords.  At end of file, the “summary” portion of the logicalOrderRecord remained unprocessed but mandatory.   This root cause of this error is the loss of our “lookAhead” definition used to identify itemRecords. This appears to be a bug in the  Native Format Builder 11.1.1.4.0 Luckily, a simple workaround exists. Press the Cancel button and return to the “Step 4 of 4” Window. Manually add    nxsd:lookAhead="32" nxsd:lookFor="."   attributes after the maxOccurs attribute of the item element. as shown in the highlighted text below.   When the lookAhead and lookFor attributes have been added Press the Test button and on the Test page press the Green Triangle. The test is now successful, the first order in the file is returned by the File Adapter.     Below is a complete listing of the Result XML from the right column of the screen above   Try running it The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data. Download Sample Input Data This is the best approach rather than cutting and pasting the input data at the top of the article.  Since the data is fixed length it’s very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line. 
The download file is correctly formatted. The final schema definition can be downloaded at the following link Download Completed Schema Definition   - Save the inputData.txt file to a known location like the xsd folder in your project. - Save the inputData_6.xsd file to the xsd folder in your project. - At step 1 in the Native Format Builder wizard  (as shown above) check the “Edit existing” radio button,    then browse and select the inputData_6.xsd file - At step 2 of the Format Builder configuration Wizard (as shown above) supply the path and filename for    the inputData.txt file. - You can then proceed to the test page and run a test. - Remember the wizard bug will drop the lookAhead and lookFor attributes,  you will need to manually add   nxsd:lookAhead="32" nxsd:lookFor="."    after the maxOccurs attribute of the item element in the   LogicalOrderRecord Complex Type.  (as shown above)   Good Luck with your Format Project

    Read the article

  • Checkbox unchecked when I scroll ListView in Android

    - by Mathew
    I am new to android development. I created a listview with textbox and checkbox. When I check the checkbox and scroll it down to check some other items in the list view, the older ones are unchecked. How to avoid this problem in listview? Please guide me with my code. Here is the code: main.xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <TextView android:id="@+id/TextView01" android:layout_height="wrap_content" android:text="List of items" android:textStyle="normal|bold" android:gravity="center_vertical|center_horizontal" android:layout_width="fill_parent"></TextView> <ListView android:id="@+id/ListView01" android:layout_height="250px" android:layout_width="fill_parent"> </ListView> <Button android:text="Save" android:id="@+id/btnSave" android:layout_width="wrap_content" android:layout_height="wrap_content"> </Button> </LinearLayout> This is the xml page I used to create dynamic list row: listview.xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_height="wrap_content" android:gravity="left|center" android:layout_width="wrap_content" android:paddingBottom="5px" android:paddingTop="5px" android:paddingLeft="5px"> <TextView android:id="@+id/TextView01" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center" android:textColor="#FFFF00" android:text="hi"></TextView> <TextView android:text="hello" android:id="@+id/TextView02" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="10px" android:textColor="#0099CC"></TextView> <EditText android:id="@+id/txtbox" android:layout_width="120px" android:layout_height="wrap_content" android:textSize="12sp" android:layout_x="211px" android:layout_y="13px"> </EditText> <CheckBox android:id="@+id/chkbox1" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </LinearLayout> This is my activity class. 
CustomListViewActivity.java: package com.listivew; import android.app.Activity; import android.os.Bundle; import android.content.Context; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.BaseAdapter; import android.widget.Button; import android.widget.CheckBox; import android.widget.EditText; import android.widget.ListView; import android.widget.TextView; import android.widget.Toast; public class CustomListViewActivity extends Activity { ListView lstView; static Context mContext; Button btnSave; private static class EfficientAdapter extends BaseAdapter { private LayoutInflater mInflater; public EfficientAdapter(Context context) { mInflater = LayoutInflater.from(context); } public int getCount() { return country.length; } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } public View getView(int position, View convertView, ViewGroup parent) { final ViewHolder holder; if (convertView == null) { convertView = mInflater.inflate(R.layout.listview, parent, false); holder = new ViewHolder(); holder.text = (TextView) convertView .findViewById(R.id.TextView01); holder.text2 = (TextView) convertView .findViewById(R.id.TextView02); holder.txt = (EditText) convertView.findViewById(R.id.txtbox); holder.cbox = (CheckBox) convertView.findViewById(R.id.chkbox1); convertView.setTag(holder); } else { holder = (ViewHolder) convertView.getTag(); } holder.text.setText(curr[position]); holder.text2.setText(country[position]); holder.txt.setText(""); holder.cbox.setChecked(false); return convertView; } public class ViewHolder { TextView text; TextView text2; EditText txt; CheckBox cbox; } } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); lstView = (ListView) findViewById(R.id.ListView01); lstView.setAdapter(new EfficientAdapter(this)); btnSave = (Button)findViewById(R.id.btnSave); mContext = this; btnSave.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // I want to print the text which is in the listview one by one. //Later i will insert it in the database // Toast.makeText(getBaseContext(), "EditText Value, checkbox value and other values", Toast.LENGTH_SHORT).show(); for (int i = 0; i < lstView.getCount(); i++) { View listOrderView; listOrderView = lstView.getChildAt(i); try{ EditText txtAmt = (EditText)listOrderView.findViewById(R.id.txtbox); CheckBox cbValue = (CheckBox)listOrderView.findViewById(R.id.chkbox1); if(cbValue.isChecked()== true){ String amt = txtAmt.getText().toString(); Toast.makeText(getBaseContext(), "Amount is :"+amt, Toast.LENGTH_SHORT).show(); } }catch (Exception e) { // TODO: handle exception } } } }); } private static final String[] country = { "item1", "item2", "item3", "item4", "item5", "item6","item7", "item8", "item9", "item10", "item11", "item12" }; private static final String[] curr = { "1", "2", "3", "4", "5", "6","7", "8", "9", "10", "11", "12" }; } Please help me to slove this problem. I have referred in many places. But I could not get proper answer to solve this problem. Please provide me the code to avoid unchecking the checkbox while scrolling up and down. Thank you.
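    ListView recycles row views, so any state that lives only in the row's widgets is overwritten when getView() rebinds a recycled convertView. The usual remedy is to keep the checked state (and the EditText contents) in the adapter's own data, keyed by position, and reapply it on every bind. The sketch below is a hedged rewrite of the post's getView(); the checkedState and amountState arrays are hypothetical additions, and it assumes an extra import of android.widget.CompoundButton.

```java
// Sketch of a recycling-safe getView(): state is kept in the adapter, not the row widgets.
// Requires: import android.widget.CompoundButton;
// checkedState / amountState are hypothetical adapter fields added for this sketch.
private final boolean[] checkedState = new boolean[country.length];
private final String[] amountState = new String[country.length];

public View getView(final int position, View convertView, ViewGroup parent) {
    final ViewHolder holder;
    if (convertView == null) {
        convertView = mInflater.inflate(R.layout.listview, parent, false);
        holder = new ViewHolder();
        holder.text = (TextView) convertView.findViewById(R.id.TextView01);
        holder.text2 = (TextView) convertView.findViewById(R.id.TextView02);
        holder.txt = (EditText) convertView.findViewById(R.id.txtbox);
        holder.cbox = (CheckBox) convertView.findViewById(R.id.chkbox1);
        convertView.setTag(holder);
    } else {
        holder = (ViewHolder) convertView.getTag();
    }

    holder.text.setText(curr[position]);
    holder.text2.setText(country[position]);

    // Detach any listener left over from the recycled row before restoring state,
    // otherwise setChecked() below would fire it and corrupt another row's state.
    holder.cbox.setOnCheckedChangeListener(null);
    holder.cbox.setChecked(checkedState[position]);
    holder.txt.setText(amountState[position] == null ? "" : amountState[position]);

    holder.cbox.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
        public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
            checkedState[position] = isChecked;  // remember the check in the model, not the view
        }
    });
    return convertView;
}
```

    Persisting what the user types into txtbox would similarly need a TextWatcher that writes back into amountState[position], but the checkbox handling above is enough to stop the checks from disappearing on scroll.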

    Read the article

  • jQuery Globalization Plugin from Microsoft

    - by ScottGu
    Last month I blogged about how Microsoft is starting to make code contributions to jQuery, and about some of the first code contributions we were working on: jQuery Templates and Data Linking support. Today, we released a prototype of a new jQuery Globalization Plugin that enables you to add globalization support to your JavaScript applications. This plugin includes globalization information for over 350 cultures ranging from Scottish Gaelic, Frisian, Hungarian, Japanese, to Canadian English.  We will be releasing this plugin to the community as open-source. You can download our prototype for the jQuery Globalization plugin from our Github repository: http://github.com/nje/jquery-glob You can also download a set of samples that demonstrate some simple use-cases with it here. Understanding Globalization The jQuery Globalization plugin enables you to easily parse and format numbers, currencies, and dates for different cultures in JavaScript. For example, you can use the Globalization plugin to display the proper currency symbol for a culture: You also can use the Globalization plugin to format dates so that the day and month appear in the right order and the day and month names are correctly translated: Notice above how the Arabic year is displayed as 1431. This is because the year has been converted to use the Arabic calendar. Some cultural differences, such as different currency or different month names, are obvious. Other cultural differences are surprising and subtle. For example, in some cultures, the grouping of numbers is done unevenly. In the "te-IN" culture (Telugu in India), groups have 3 digits and then 2 digits. The number 1000000 (one million) is written as "10,00,000". Some cultures do not group numbers at all. All of these subtle cultural differences are handled by the jQuery Globalization plugin automatically. Getting dates right can be especially tricky. Different cultures have different calendars such as the Gregorian and UmAlQura calendars. A single culture can even have multiple calendars. For example, the Japanese culture uses both the Gregorian calendar and a Japanese calendar that has eras named after Japanese emperors. The Globalization Plugin includes methods for converting dates between all of these different calendars. Using Language Tags The jQuery Globalization plugin uses the language tags defined in the RFC 4646 and RFC 5646 standards to identity cultures (see http://tools.ietf.org/html/rfc5646). A language tag is composed out of one or more subtags separated by hyphens. For example: Language Tag Language Name (in English) en-AU English (Australia) en-BZ English (Belize) en-CA English (Canada) Id Indonesian zh-CHS Chinese (Simplified) Legacy Zu isiZulu Notice that a single language, such as English, can have several language tags. Speakers of English in Canada format numbers, currencies, and dates using different conventions than speakers of English in Australia or the United States. You can find the language tag for a particular culture by using the Language Subtag Lookup tool located here:  http://rishida.net/utils/subtags/ The jQuery Globalization plugin download includes a folder named globinfo that contains the information for each of the 350 cultures. Actually, this folder contains more than 700 files because the folder includes both minified and un-minified versions of each file. 
For example, the globinfo folder includes JavaScript files named jQuery.glob.en-AU.js for English Australia, jQuery.glob.id.js for Indonesia, and jQuery.glob.zh-CHS for Chinese (Simplified) Legacy. Example: Setting a Particular Culture Imagine that you have been asked to create a German website and want to format all of the dates, currencies, and numbers using German formatting conventions correctly in JavaScript on the client. The HTML for the page might look like this: Notice the span tags above. They mark the areas of the page that we want to format with the Globalization plugin. We want to format the product price, the date the product is available, and the units of the product in stock. To use the jQuery Globalization plugin, we’ll add three JavaScript files to the page: the jQuery library, the jQuery Globalization plugin, and the culture information for a particular language: In this case, I’ve statically added the jQuery.glob.de-DE.js JavaScript file that contains the culture information for German. The language tag “de-DE” is used for German as spoken in Germany. Now that I have all of the necessary scripts, I can use the Globalization plugin to format the product price, date available, and units in stock values using the following client-side JavaScript: The jQuery Globalization plugin extends the jQuery library with new methods - including new methods named preferCulture() and format(). The preferCulture() method enables you to set the default culture used by the jQuery Globalization plugin methods. Notice that the preferCulture() method accepts a language tag. The method will find the closest culture that matches the language tag. The $.format() method is used to actually format the currencies, dates, and numbers. The second parameter passed to the $.format() method is a format specifier. For example, passing “c” causes the value to be formatted as a currency. The ReadMe file at github details the meaning of all of the various format specifiers: http://github.com/nje/jquery-glob When we open the page in a browser, everything is formatted correctly according to German language conventions. A euro symbol is used for the currency symbol. The date is formatted using German day and month names. Finally, a period instead of a comma is used a number separator: You can see a running example of the above approach with the 3_GermanSite.htm file in this samples download. Example: Enabling a User to Dynamically Select a Culture In the previous example we explicitly said that we wanted to globalize in German (by referencing the jQuery.glob.de-DE.js file). Let’s now look at the first of a few examples that demonstrate how to dynamically set the globalization culture to use. Imagine that you want to display a dropdown list of all of the 350 cultures in a page. When someone selects a culture from the dropdown list, you want all of the dates in the page to be formatted using the selected culture. Here’s the HTML for the page: Notice that all of the dates are contained in a <span> tag with a data-date attribute (data-* attributes are a new feature of HTML 5 that conveniently also still work with older browsers). We’ll format the date represented by the data-date attribute when a user selects a culture from the dropdown list. In order to display dates for any possible culture, we’ll include the jQuery.glob.all.js file like this: The jQuery Globalization plugin includes a JavaScript file named jQuery.glob.all.js. 
This file contains globalization information for all of the more than 350 cultures supported by the Globalization plugin.  At 367KB minified, this file is not small. Because of the size of this file, unless you really need to use all of these cultures at the same time, we recommend that you add the individual JavaScript files for particular cultures that you intend to support instead of the combined jQuery.glob.all.js to a page. In the next sample I’ll show how to dynamically load just the language files you need. Next, we’ll populate the dropdown list with all of the available cultures. We can use the $.cultures property to get all of the loaded cultures: Finally, we’ll write jQuery code that grabs every span element with a data-date attribute and format the date: The jQuery Globalization plugin’s parseDate() method is used to convert a string representation of a date into a JavaScript date. The plugin’s format() method is used to format the date. The “D” format specifier causes the date to be formatted using the long date format. And now the content will be globalized correctly regardless of which of the 350 languages a user visiting the page selects.  You can see a running example of the above approach with the 4_SelectCulture.htm file in this samples download. Example: Loading Globalization Files Dynamically As mentioned in the previous section, you should avoid adding the jQuery.glob.all.js file to a page whenever possible because the file is so large. A better alternative is to load the globalization information that you need dynamically. For example, imagine that you have created a dropdown list that displays a list of languages: The following jQuery code executes whenever a user selects a new language from the dropdown list. The code checks whether the globalization file associated with the selected language has already been loaded. If the globalization file has not been loaded then the globalization file is loaded dynamically by taking advantage of the jQuery $.getScript() method. The globalizePage() method is called after the requested globalization file has been loaded, and contains the client-side code to perform the globalization. The advantage of this approach is that it enables you to avoid loading the entire jQuery.glob.all.js file. Instead you only need to load the files that you need and you don’t need to load the files more than once. The 5_Dynamic.htm file in this samples download demonstrates how to implement this approach. Example: Setting the User Preferred Language Automatically Many websites detect a user’s preferred language from their browser settings and automatically use it when globalizing content. A user can set a preferred language for their browser. Then, whenever the user requests a page, this language preference is included in the request in the Accept-Language header. When using Microsoft Internet Explorer, you can set your preferred language by following these steps: Select the menu option Tools, Internet Options. Select the General tab. Click the Languages button in the Appearance section. Click the Add button to add a new language to the list of languages. Move your preferred language to the top of the list. Notice that you can list multiple languages in the Language Preference dialog. All of these languages are sent in the order that you listed them in the Accept-Language header: Accept-Language: fr-FR,id-ID;q=0.7,en-US;q=0.3 Strangely, you cannot retrieve the value of the Accept-Language header from client JavaScript. 
Microsoft Internet Explorer and Mozilla Firefox support a bevy of language related properties exposed by the window.navigator object, such as windows.navigator.browserLanguage and window.navigator.language, but these properties represent either the language set for the operating system or the language edition of the browser. These properties don’t enable you to retrieve the language that the user set as his or her preferred language. The only reliable way to get a user’s preferred language (the value of the Accept-Language header) is to write server code. For example, the following ASP.NET page takes advantage of the server Request.UserLanguages property to assign the user’s preferred language to a client JavaScript variable named acceptLanguage (which then allows you to access the value using client-side JavaScript): In order for this code to work, the culture information associated with the value of acceptLanguage must be included in the page. For example, if someone’s preferred culture is fr-FR (French in France) then you need to include either the jQuery.glob.fr-FR.js or the jQuery.glob.all.js JavaScript file in the page or the culture information won’t be available.  The “6_AcceptLanguages.aspx” sample in this samples download demonstrates how to implement this approach. If the culture information for the user’s preferred language is not included in the page then the $.preferCulture() method will fall back to using the neutral culture (for example, using jQuery.glob.fr.js instead of jQuery.glob.fr-FR.js). If the neutral culture information is not available then the $.preferCulture() method falls back to the default culture (English). Example: Using the Globalization Plugin with the jQuery UI DatePicker One of the goals of the Globalization plugin is to make it easier to build jQuery widgets that can be used with different cultures. We wanted to make sure that the jQuery Globalization plugin could work with existing jQuery UI plugins such as the DatePicker plugin. To that end, we created a patched version of the DatePicker plugin that can take advantage of the Globalization plugin when rendering a calendar. For example, the following figure illustrates what happens when you add the jQuery Globalization and the patched jQuery UI DatePicker plugin to a page and select Indonesian as the preferred culture: Notice that the headers for the days of the week are displayed using Indonesian day name abbreviations. Furthermore, the month names are displayed in Indonesian. You can download the patched version of the jQuery UI DatePicker from our github website. Or you can use the version included in this samples download and used by the 7_DatePicker.htm sample file. Summary I’m excited about our continuing participation in the jQuery community. This Globalization plugin is the third jQuery plugin that we’ve released. We’ve really appreciated all of the great feedback and design suggestions on the jQuery templating and data-linking prototypes that we released earlier this year.  We also want to thank the jQuery and jQuery UI teams for working with us to create these plugins. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. You can follow me at: twitter.com/scottgu

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running, however I skipped over SSL configuration in the previous article because I wanted to focus in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents Setting up a one way, self signed SSL certificate in WebLogic Setting up an official SSL certificate in Apache 2.x Configuring Apache to proxy traffic to the IRM server There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly to the WebLogic Server running the IRM service. However in a production environment and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service and this article will go over the configuration of SSL in WebLogic and also in Apache. Like in the past articles, we are going to use two host names in the configuration below,irm.company.com will refer to the public Apache server irm.company.internal will refer to the internal WebLogic IRM server Setting up a one way, self signed SSL certificate in WebLogic First lets look at creating just a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment, however the downside is that no browsers are going to trust this certificate you create and you'll need to manually install the certificate onto any machine's communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use and I explain that in the next section. But for now lets go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates, open a console and running the following commands. Note you should choose your own secure passwords whenever you see password below. [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo" [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA We now have two Java Key Stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running and login into the WebLogic Console at http://irm.company.intranet:7001/console and do the following; In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1. 
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or you can switch to the CONTROL tab, select IRM_server1 and then select the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses it's own demo identity and trust. We are now going to switch to the self signed one's we've just created. So select the Change button and switch to Custom Identity and Custom Trust and hit save. Now we have to complete the resulting fields, the setting's i've used in my evaluation server are below. IdentityCustom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks Custom Identity Keystore Type: JKS Custom Identity Keystore Passphrase: password Confirm Custom Identity Keystore Passphrase: password TrustCustom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks Custom Trust Keystore Type: JKS Custom Trust Keystore Passphrase: password Confirm Custom Trust Keystore Passphrase: password Now click on the SSL tab for the IRM_server1 and enter in the alias and passphrase, in my demo here the details are; IdentityPrivate Key Alias: trustself Private Key Passphrase: password Confirm Private Key Passphrase: password And hit save. Now lets test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server, a quick reminder on how to do this is... [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin [oracle@irm /] ./startManagedWeblogic IRM_server1 Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server, this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and more annoyingly from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being informed this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8, they may vary for your version of IE.) Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain that about the certificate, click on Continue to this website (not recommended). From the IE Tools menu select Internet Options and from the resulting dialog select Security and then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which mates the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate and the Install Certificate... 
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse and choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate and answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this and then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL again, this time you should not receive any errors. Setting up an official SSL certificate in Apache 2.x At this point we now have an IRM server that you can communicate with over SSL. However this certificate isn't trusted by any browser because it's path of trust doesn't end in a recognized certificate authority (CA). Also you are communicating directly to the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What i'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. First step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service in Oracle Enterprise Linux 5 update 3 and Apache 2.2.3 which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are; [oracle@irm /] cd /usr/bin [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048 Generating RSA private key, 2048 bit long modulus ............................+++ .........+++ e is 65537 (0x10001) Enter pass phrase for irm-apache-server.key: Verifying - Enter pass phrase for irm-apache-server.key: [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr Enter pass phrase for irm-apache-server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [GB]:US State or Province Name (full name) [Berkshire]:CA Locality Name (eg, city) [Newbury]:San Francisco Organization Name (eg, company) [My Company Ltd]:Oracle Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:irm.company.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:testing An optional company name []: You must make sure to remember the pass phrase you used in the initial key generation; you will need it when later configuring Apache. In the /usr/bin directory there are now two new files. The irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located in /etc/httpd/conf.d/ssl.conf and I've added the following lines. <VirtualHost irm.company.com> # Setup SSL for irm.company.com ServerName irm.company.com SSLEngine On SSLCertificateFile /oracle/secure/irm.company.com.crt SSLCertificateKeyFile /oracle/secure/irm.company.com.key SSLCertificateChainFile /oracle/secure/gd_bundle.crt </VirtualHost> After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS. Configuring Apache to proxy traffic to the IRM server The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache, which you can download from Oracle. Download the zip file and extract it onto the server. The file extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32 or 64 bit. Mine is a 32bit server so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then, from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug-in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom. 
LoadModule weblogic_module modules/mod_wl.so <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16100    WLLogFile /tmp/wl-proxy.log </IfModule> <Location /irm_rights>    SetHandler weblogic-handler </Location> <Location /irm_desktop>    SetHandler weblogic-handler </Location> <Location /irm_sealing>    SetHandler weblogic-handler </Location> <Location /irm_services>    SetHandler weblogic-handler </Location> Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic Managed server. Note above I have included all four of the Locations you might wish to proxy. http://irm.company.internal/irm_rights is the URL to the management website, /irm_desktop is the URL used for the IRM Desktop to communicate, /irm_sealing is for web services based document sealing and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above. Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, there are some more files. Copy the following into the /usr/lib/httpd/modules/ folder. libwlssl.so libnnz11.so libclntsh.so.11.1 Now the documentation states that you should only need to do this, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point this to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error: [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0 So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; if so, simply add this path to it. LD_LIBRARY_PATH=/usr/lib/httpd/modules/ export LD_LIBRARY_PATH Now the WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy over the MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example this is /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The file orapki was included in the bin folder of the Apache 1.1 plugin zip you extracted. orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only Finally change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP. <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16101    SecureProxy ON    WLSSLWallet /oracle/secure/my-wallet    WLLogFile /tmp/wl-proxy.log </IfModule> Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights. 
At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • CodePlex Daily Summary for Friday, March 12, 2010

    CodePlex Daily Summary for Friday, March 12, 2010New Projects.NET DEPENDENCY INJECTION: Abel Perez Enterprise FrameworkAutodocs - WCF REST Automatic API Documentation Generator: Autodocs is an automatic API documentation generator for .NET applications that use Windows Communication Foundation (WCF) to establish REST API's.BlockBlock: Block Block is a free game. You know Lumines and you will like BlockBlock.C4F XNA ASCII Post-Processing: This is the source code for the Coding4Fun article "XNA Effects – ASCII Art in 3D"ChequePrinter: this is ChequePrinterCompiladores MSIL usando Phoenix (PLP 2008.1 - CIn/UFPE): Este projeto foi feito com o intuito de explorar a plataforma Microsoft Phoenix para a construção de compiladores para MSIL de duas linguagens de E...CRM External View: CRM External View enables more robust control over exposing Microsoft CRM data (in a form of views) for external parties. The solution uses web ser...CS Project2: This is for the projectDotNetNuke IM Module of Facebook Like Messenger: Help you integrate 123 Web Messenger into DotNetNuke, and add a powerful 1-to-1 IM Software named "Facebook Messenger Style Web Chat Bar" at the bo...DotNetNuke® RadPanelBar: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® Skin Blocks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Armand Datema of Schwingsoft. This skin uses a bit of jQu...Drilltrough and filtering on SSAS-cubes in SSRS: We will describe a technique to create Reporting services (SSRS) reports that use Analysis services (SSAS) cubes as data sources, have a very intu...Ecosystem Diagnosis & Treatment: The Ecosystem DIagnosis & Treatment community provides tools, analyses and applications of the medical model to natural resource problems. EDT sof...ExIf 35: A utility for use by film photographers for keeping track of critical facts about images taken on a roll of film, just as digital cameras do automa...FabricadeTI: Desenvolvimento do framework FabricadeTI.Find and Replace word in the sentences: This program used Java Development Kid 6.0 and i were using HighLighter class. It was completed code with source code and then everybody can use in...Flash Nut: Flash Nut is a flash card program. You can build and review decks of flash cards. The project is a vs2008 wpf application.Free DotNetNuke Chat Module (Popup Mode): With this free DotNetNuke Chat Module (Popup Mode), master will assist to integrate DotNetNuke with 123 Flash Chat seamlessly, and add a popup mode...Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: With this FREE application, you could integrate DNN website Database with 123 Web Messenger seamlessly and embed a web-based Friends List into anyw...Free DotNetNuke Live Help Module: With DotNetNuke Live Help Module, integrate 123 Live Help into DotNetNuke website and add Live Chat Button anywhere you like. Let visitors to chat ...G52GRP Videowall: NottinghamHappy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: The Happy Turtle project creates plugins for the Build Version Increment Add-In for Visual Studio (BVI). The focus is to automatically version asse...Hasher: Hasher es capaz de generar el hash MD5 y SHA de textos de hasta 100.000 caracteres y ficheros. 
También te permitirá comprobar dos hash para verifi...Infragistics Silverlight Extended Controls: This project is a group of controls that extend or add functionality to the Infragistics Silverlight control suite. This control requires Infragis...Insert Video Jnr: This is a baby version of my Video plugin, it is intended for Hosted Wordpress blogs only and shouldn't be used with other blog providers.jccc .NET smart framework: jccc .NET smart framework allows the creation of fast connections to MSSQL or MYSQL databases, and the data manipulation by using of c# class's tha...LytScript: 函数式脚本语言Microsoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): DDD NLayered App .NET 4.0 Example By Microsoft - Spain Domain Driven Design NLayered App .NET 4.0 Example Implementation Example of our local Arc...mimiKit: Lightweight ASP.NET MVC / Javascript Framework for creating mobile applications PHPWord: With PHPWord you can easily create a Word document with PHP. PHPWord creates docx Files that can include all major word functions like TextElements...Protocol Transition with BizTalk: An example solution the shows how todo Protocol Transition with BizTalk. This also shows you how to create a WCF extension to allow this to happen.Raid Runner: Raid Runner makes it easier to run and manage raid in World of Warcraft. It is a Silverlight application developed in c#SQL Server Authentication Troubleshooter: SQL Server Authentication Troubleshooter is a tool to help investigate a root cause of ‘Login Failed’ error in SQL Server. There could be number of...SuperviseObjects: SuperviseObjects consists of a collection which is derived from ObservableCollection<T>. This collection fires ItemPropertyChanging and ItemPropert...Viuto: Viuto.NET project aims to create a fully track and trace application. It is developed in: - Java & C: Firmware - C#: Parser - Asp.net: Tracki...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks is a collection that you cannot do without if you are serious about continous integration. Ever wish you could specify an...New ReleasesASP.NET: ASP.NET MVC 2 RTM: This release contains the source code for ASP.NET MVC 2 RTM as well as the ASP.NET MVC Futures project. The futures project contains features that ...C#Mail: Higuchi.Mail.dll (2010.3.11 ver): Higuchi.Mail.dll at 2010-3-11 version.C#Mail: Higuchi.MailServer.dll (2010.3.11 ver): Higuchi.MailServer.dll at 2010.3.11 version.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1 - Full Version: This is the full, complete example of the XNA ASCII FPS.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1.0 - Base Project: This is the base project to be used by those who plan to follow along the Coding4Fun article.CRM External View: 1.0: Release 1.0DevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3c: Alpha 3c upgrades custom/virtual uris (devpacks), temp uris, and zip packages. This is believed to be the first fully functional/performant release.DotNetNuke® RadPanelBar: DNNRadPanelBar 1.0.0: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. 
Licensing permits anyone to use the components (inclu...Drilltrough and filtering on SSAS-cubes in SSRS: Release 1: Release 1ExIf 35: ExIf 35: Daily build of ExIf 35Family Tree Analyzer: Version 1.0.3.0: Version 1.0.3.0 Added options to check for updates on load and on help menu Disable use of US census for now until dealt with years being differen...Family Tree Analyzer: Version 1.0.4.0: Version 1.0.4.0 Added support for display of Ahnenfatel numbers Added filter to hide individuals from Lost Cousins report that have been flagged a...Flash Nut: Flash Nut 1.0 Setup: Flash Nut SetupFluent Validation for .NET: 1.2 RC: This is the release candidate for FluentValidation 1.2. If no bugs are found within the next couple of weeks, then this will become the 1.2 Final b...Free DotNetNuke Chat Module (Popup Mode): Download DNN Chat Module (Popup Mode)+Source Code: Feel free to download DotNetNuke Chat Module (Popup Mode), integrating DotNetNuke with 123 Flash Chat Software, and add a free popup mode flash cha...Free DotNetNuke Live Help Module: Download DNN Live Support Module and Source Code: In Readme file, there are detailed Installation and Integration Manual for you. This module is compatible with DotNetNuke v5.x.Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.44927: This is the first release of the SVN based version incrementor. How To InstallMake sure that Build Version Increment v2.2.10065.1524 or newer is i...Hasher: 1.0: Versión inicial de la aplicación: Obtención de hash MD5 y SHA. Codificación en tiempo real de textos de hasta 100.000 caracteres. Codificación ...Jamolina: PhotosynthDemo: PhotosynthDemoMapWindow GIS: MapWindow 6.0 msi (March 11): This fixes an PixelToProj problem for the Extended Buffer case, as well as adding fixes to the WKBFeatureReader to fix an X,Y reversal and some ext...Math.NET Numerics: 2010.3.11.291 Build: Latest alpha buildMicrosoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): V0.5 - N-Layer DDD Sample App: Required Software (Microsoft Base Software needed for Development environment) Unity Application Block 1.2 - October 2008 http://www.microsoft.com/...MiniTwitter: 1.09.2: MiniTwitter 1.09.2 更新内容 修正 タイムラインを削除すると落ちるバグを修正 稀にタイムラインのスクロールが出来ないバグを修正Nestoria.NET: Nestoria.NET 0.8: Provides access to the Nestoria API. Documentation contains a basic getting started guide. Please visit Darren Edge's blog for ongoing developmen...Pod Thrower: Version 1.0: Here is version 1.0. It has all the features I was looking to do in it. Please let me know if you use this and if you would like any changes.SharePoint Ad Rotator: SPAdRotator 2.0 Beta: This new release of the Ad Rotator contains many new features. One major new feature is that jQuery has been added to do image rotation without hav...SharePoint Objects: Democode Ton Stegeman: These download contains sample code for some SharePoint 2007 blog posts: TST.Themes_Build20100311.zip contains a feature receiver that registers Sh...SharePoint Taxonomy Extensions: SharePoint Taxonomy Extensions 1.2: Make Taxonomy Extensions useable in every list type. Not only in document libraries.SharePoint Video Player Web Part & SharePoint Video Library: Version 3.0.0: Absolutely killer feature - installing multiple players on a page without any loss of performance.SilverLight Interface for Mapserver: SLMapViewer v. 1.0: SLMapviewer sample application version 1.0. 
This new release includes the following enhancements: Silverlight 3.0 native Added a new init parame...Spark View Engine: Spark v1.1: Changes since RC1Built against ASP.NET MVC 2 RTMSPSS .NET interop library: 2.0: This new version supports SPSS 15, and includes spssio32.dll and other native .dll dependencies so that it works out of the box without SPSS being ...stefvanhooijdonk.com: SharePoint2010.ProfilePicturesLoader: So, with the help of Reflector, I wrote a small tool that would import all our profile pictures and update the user profiles. http://wp.me/pMnlQ-6G SuperviseObjects: SuperviseObjects 1.0: First releaseTortoiseSVN Addin for Visual Studio: TortoiseSVN Addin 1.0.5: Feature: Visual Studio/svn action synchronization on Item in Solution explorer like add, move, delete and rename. Note: Move action does not rememb...VCC: Latest build, v2.1.30311.0: Automatic drop of latest buildVivoSocial: VivoSocial 7.0.4: Business Management ■This release fixes a Could not load type error on the main view of the module. Groups ■Group requests were failing in some i...WikiPlex – a Regex Wiki Engine: WikiPlex 1.3: Info: Official Version: 1.3.0.215 | Full Release Notes Documentation - This new documentation includes Full Markup Guide with Examples Articles ...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks: Initial beta release of Zealand IT MSBuild Tasks. Contains the following tasks: RunAs - Same as Exec task, but provides parameters for impersonat...ZoomBarPlus: V1 (Beta): This is the initial release. It should be considered a beta test version as it has not been tested for very long on my device.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NET Ajax LibraryASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrN2 CMSBlogEngine.NETFasterflect - A Fast and Simple Reflection APIjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryFarseer Physics EngineCaliburn: An Application Framework for WPF and SilverlightSharePoint Team-Mailer

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
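To make the comparison with MVC concrete, here is a minimal, hypothetical Web API controller in the style the feature list above describes — convention-based Get methods, clean routes, and content negotiation handled by the framework (the controller name and data are made up purely for illustration):

    using System.Collections.Generic;
    using System.Net;
    using System.Web.Http;

    public class AlbumsController : ApiController
    {
        // GET /api/albums - the response is serialized as JSON or XML
        // depending on the client's Accept header (content negotiation).
        public IEnumerable<string> GetAlbums()
        {
            return new[] { "Album One", "Album Two" };
        }

        // GET /api/albums/5 - id is bound from the route.
        public string GetAlbum(int id)
        {
            if (id <= 0)
                throw new HttpResponseException(HttpStatusCode.NotFound);
            return "Album " + id;
        }
    }

With the default api/{controller}/{id} route registered, /api/albums returns the list and /api/albums/5 returns a single item - no views or HTML rendering are involved, which is exactly the division of labor discussed below.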
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well, so there's some confusion over where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade offs, but at the moment that's not the case. Some Issues To Think About Web API is similar to MVC but not the Same Although Web API looks a lot like MVC it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than in MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that. 
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this, the overhead ended up being fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications. Summary Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed! Resources ASP.NET Web API AspConf Ask the Experts Session (first 5 minutes) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • How to prevent Android bluetooth RFCOMM connection from dying immediately after .connect()?

    - by Gilead
    I'm trying to connect to a Zeemote (http://zeemote.com/) gaming controller from Moto Droid running 2.0.1 firmware. The test application below does connect to the device (LED flashes) but connection is dropped immediately after that. I can connect to the device perfectly fine using bluez tools (log attached as well). I'm quite at a loss here, I work on it for so long that I ran out of ideas so any help would be very much appreciated. Thanks, Max =========================================== Code: public class ZeeTest extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); try { test(); } catch (IOException e) { e.printStackTrace(); } } public void test() throws IOException { BluetoothDevice zee = BluetoothAdapter.getDefaultAdapter(). getRemoteDevice("00:1C:4D:02:A6:55"); Log.d("ZeeTest", "++++ Creating socket"); BluetoothSocket sock = zee.createRfcommSocketToServiceRecord( UUID.fromString("8e1f0cf7-508f-4875-b62c-fbb67fd34812")); Log.d("ZeeTest", "++++ Connecting"); sock.connect(); Log.d("ZeeTest", "++++ Connected"); final InputStream in = sock.getInputStream(); new Thread() { @Override public void run() { byte[] buffer = new byte[32]; int bytes = 0; int x = 0; Log.d("ZeeTest", "++++ Listening..."); while (x < 200) { x++; try { bytes = in.read(buffer); Log.d("ZeeTest", "++++ Read "+ bytes +" bytes"); } catch (IOException e) { // java.io.IOException: Software caused connection abort if (x % 50 == 0) { Log.d("ZeeTest", "Tried "+ x +" times ("+ bytes +")"); } try { Thread.sleep(100); } catch (InterruptedException ie) {} } } Log.d("ZeeTest", "++++ Done: thread exit"); } }.start(); Log.d("ZeeTest", "++++ Done: test()"); } } =========================================== Log: I/ActivityManager( 1169): Start proc zee.test for activity zee.test/.ZeeTest: pid=4294 uid=10084 gids={3002, 3001, 3003} I/dalvikvm( 4294): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) D/dalvikvm( 4287): LinearAlloc 0x0 used 640700 of 5242880 (12%) I/dalvikvm( 4294): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=20) D/ZeeTest ( 4294): ++++ Creating socket D/ZeeTest ( 4294): ++++ Connecting E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1240/hci0/dev_00_1C_4D_02_A6_55 I/usbd ( 1068): process_usb_uevent_message(): buffer = add@/devices/virtual/bluetooth/hci0/hci0:1 I/usbd ( 1068): main(): call select(...) E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Adapter:DeviceFound from /org/bluez/1240/hci0 V/BluetoothEventRedirector( 1242): Received android.bluetooth.device.action.FOUND V/BluetoothEventRedirector( 1242): Received android.bleutooth.device.action.UUID D/ZeeTest ( 4294): ++++ Connected D/ZeeTest ( 4294): ++++ Done: test() D/ZeeTest ( 4294): ++++ Listening... I/ActivityManager( 1169): Displayed activity zee.test/.ZeeTest: 2296 ms (total 2296 ms) E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1240/hci0/dev_00_1C_4D_02_A6_55 I/usbd ( 1068): process_usb_uevent_message(): buffer = remove@/devices/virtual/bluetooth/hci0/hci0:1 I/usbd ( 1068): main(): call select(...) 
V/BluetoothEventRedirector( 1242): Received android.bleutooth.device.action.UUID D/ZeeTest ( 4294): Tried 50 times (0) D/ZeeTest ( 4294): Tried 100 times (0) D/ZeeTest ( 4294): Tried 150 times (0) D/ZeeTest ( 4294): Tried 200 times (0) D/ZeeTest ( 4294): ++++ Done: thread exit =========================================== Terminal log: $ sdptool browse Inquiring ... Browsing 00:1C:4D:02:A6:55 ... $ sdptool records 00:1C:4D:02:A6:55 Service Name: Zeemote Service RecHandle: 0x10015 Service Class ID List: UUID 128: 8e1f0cf7-508f-4875-b62c-fbb67fd34812 Protocol Descriptor List: "L2CAP" (0x0100) "RFCOMM" (0x0003) Channel: 1 Language Base Attr List: code_ISO639: 0x656e encoding: 0x6a base_offset: 0x100 $ rfcomm connect /dev/tty10 00:1C:4D:02:A6:55 Connected /dev/rfcomm0 to 00:1C:4D:02:A6:55 on channel 1 Press CTRL-C for hangup # rfcomm show /dev/tty10 rfcomm0: 00:1F:3A:E4:C8:40 - 00:1C:4D:02:A6:55 channel 1 connected [reuse-dlc release-on-hup tty-attached] # cat /dev/tty10 (nothing here) # hcidump HCI sniffer - Bluetooth packet analyzer ver 1.42 device: hci0 snap_len: 1028 filter: 0xffffffff < HCI Command: Create Connection (0x01|0x0005) plen 13 > HCI Event: Command Status (0x0f) plen 4 > HCI Event: Connect Complete (0x03) plen 11 < HCI Command: Read Remote Supported Features (0x01|0x001b) plen 2 > HCI Event: Read Remote Supported Features (0x0b) plen 11 < ACL data: handle 11 flags 0x02 dlen 10 L2CAP(s): Info req: type 2 > HCI Event: Command Status (0x0f) plen 4 > HCI Event: Page Scan Repetition Mode Change (0x20) plen 7 > HCI Event: Max Slots Change (0x1b) plen 3 < HCI Command: Remote Name Request (0x01|0x0019) plen 10 > HCI Event: Command Status (0x0f) plen 4 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Info rsp: type 2 result 0 Extended feature mask 0x0000 < ACL data: handle 11 flags 0x02 dlen 12 L2CAP(s): Connect req: psm 3 scid 0x0040 > HCI Event: Number of Completed Packets (0x13) plen 5 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Connect rsp: dcid 0x04fb scid 0x0040 result 1 status 2 Connection pending - Authorization pending > HCI Event: Remote Name Req Complete (0x07) plen 255 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Connect rsp: dcid 0x04fb scid 0x0040 result 0 status 0 Connection successful < ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Config req: dcid 0x04fb flags 0x00 clen 4 MTU 1013 (events are properly received using bluez)

    Read the article

  • NHibernate, and odd "Session is Closed!" errors

    - by Sekhat
    Note: Now that I've typed this out, I have to apologize for the super long question, however, I think all the code and information presented here is in some way relevant. Okay, I'm getting odd "Session Is Closed" errors, at random points in my ASP.NET webforms application. Today, however, it's finally happening in the same place over and over again. I am near certain that nothing is disposing or closing the session in my code, as the bits of code that use are well contained away from all other code as you'll see below. I'm also using ninject as my IOC, which may / may not be important. Okay, so, First my SessionFactoryProvider and SessionProvider classes: SessionFactoryProvider public class SessionFactoryProvider : IDisposable { ISessionFactory sessionFactory; public ISessionFactory GetSessionFactory() { if (sessionFactory == null) sessionFactory = Fluently.Configure() .Database( MsSqlConfiguration.MsSql2005.ConnectionString(p => p.FromConnectionStringWithKey("QoiSqlConnection"))) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<JobMapping>()) .BuildSessionFactory(); return sessionFactory; } public void Dispose() { if (sessionFactory != null) sessionFactory.Dispose(); } } SessionProvider public class SessionProvider : IDisposable { ISessionFactory sessionFactory; ISession session; public SessionProvider(SessionFactoryProvider sessionFactoryProvider) { this.sessionFactory = sessionFactoryProvider.GetSessionFactory(); } public ISession GetCurrentSession() { if (session == null) session = sessionFactory.OpenSession(); return session; } public void Dispose() { if (session != null) { session.Dispose(); } } } These two classes are wired up with Ninject as so: NHibernateModule public class NHibernateModule : StandardModule { public override void Load() { Bind<SessionFactoryProvider>().ToSelf().Using<SingletonBehavior>(); Bind<SessionProvider>().ToSelf().Using<OnePerRequestBehavior>(); } } and as far as I can tell work as expected. Now my BaseDao<T> class: BaseDao public class BaseDao<T> : IDao<T> where T : EntityBase { private SessionProvider sessionManager; protected ISession session { get { return sessionManager.GetCurrentSession(); } } public BaseDao(SessionProvider sessionManager) { this.sessionManager = sessionManager; } public T GetBy(int id) { return session.Get<T>(id); } public void Save(T item) { using (var transaction = session.BeginTransaction()) { session.SaveOrUpdate(item); transaction.Commit(); } } public void Delete(T item) { using (var transaction = session.BeginTransaction()) { session.Delete(item); transaction.Commit(); } } public IList<T> GetAll() { return session.CreateCriteria<T>().List<T>(); } public IQueryable<T> Query() { return session.Linq<T>(); } } Which is bound in Ninject like so: DaoModule public class DaoModule : StandardModule { public override void Load() { Bind(typeof(IDao<>)).To(typeof(BaseDao<>)) .Using<OnePerRequestBehavior>(); } } Now the web request that is causing this is when I'm saving an object, it didn't occur till I made some model changes today, however the changes to my model has not changed the data access code in anyway. Though it changed a few NHibernate mappings (I can post these too if anyone is interested) From as far as I can tell, BaseDao<SomeClass>.Get is called then BaseDao<SomeOtherClass>.Get is called then BaseDao<TypeImTryingToSave>.Save is called. it's the third call at the line in Save() using (var transaction = session.BeginTransaction()) that fails with "Session is Closed!" or rather the exception: Session is closed! 
Object name: 'ISession'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ObjectDisposedException: Session is closed! Object name: 'ISession'. And indeed following through on the Debugger shows the third time the session is requested from the SessionProvider it is indeed closed and not connected. I have verified that Dispose on my SessionFactoryProvider and on my SessionProvider are called at the end of the request and not before the Save call is made on my Dao. So now I'm a little stuck. A few things pop to mind. Am I doing anything obviously wrong? Does NHibernate ever close sessions without me asking to? Any workarounds or ideas on what I might do? Thanks in advance

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.Getting started with Application InitializationAs of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. Using IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it.IIS Configuration for Application InitializationInitialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.Start with the Application Pool:Here you need to set both the Start Automatically which is always set, and the StartMode which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself while Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect.Now on the Site/Application level you can specify whether the site should pre load: Set the Preload Enabled flag to true.At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying blank screen while the user is fiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>This allows you to specify a page to execute in a dry run. IIS basically fakes request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page in this case ping.ashx which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back".  I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
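For reference, the ping.ashx endpoint mentioned above only needs to be a trivial managed handler that runs through the ASP.NET pipeline quickly and returns a small response. A minimal sketch of such a handler (the article only says it returns a simple OK string; the class name here is an assumption):

    using System.Web;

    // Generic handler used as the warm-up target: it executes in the managed
    // pipeline and returns a tiny response, which is all Application Initialization needs.
    public class PingHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("OK");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }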
Ideally though you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

What about AppDomain Restarts?

In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:

Files are updated in the BIN folder
Web Deploy to your site
web.config is changed
Hard application crash

These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.

In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

protected void Application_End()
{
    var client = new WebClient();
    var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
    client.DownloadString(url);
    Trace.WriteLine("Application Shut Down Ping: " + url);
}

which fires any ASP.NET URL to the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

Manual Configuration in ApplicationHost.config

The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.

When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to manually add it.

<globalModules>
  <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
</globalModules>

Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered.

Next you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config.

<system.applicationHost>
  <applicationPools>
    <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
      <processModel identityType="LocalSystem" setProfileEnvironment="true" />
    </add>
  </applicationPools>
  <sites>
    <site name="Default Web Site" id="1">
      <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
        <virtualDirectory path="/" physicalPath="C:\Clients\…" />
      </application>
    </site>
  </sites>
</system.applicationHost>

On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.

And that's all you should need. You can still set the web.config settings described above as well.

ASP.NET as a Service?

In the particular application I'm working on currently, we have a queue manager that runs as a standalone service that polls a database queue, picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
In order for this service to work all it needs is a long running reference that keeps it alive for the lifetime of the application.

In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

public class Global : System.Web.HttpApplication
{
    private static ApplicationScheduler scheduler;
    private static ServiceLauncher launcher;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Pings the service and ensures it stays alive
        scheduler = new ApplicationScheduler()
        {
            CheckFrequency = 600000
        };
        scheduler.Start();

        launcher = new ServiceLauncher();
        launcher.Start();

        // register so shutdown is controlled
        HostingEnvironment.RegisterObject(launcher);
    }
}

By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation is very similar.

In this application ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.

Registering a Background Object with ASP.NET

Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:

public void Stop(bool immediate = false)
{
    LogManager.Current.LogInfo("QueueManager Controller Stopped.");
    Controller.StopProcessing();
    Controller.Dispose();
    Thread.Sleep(1500);  // give background threads some time
    HostingEnvironment.UnregisterObject(this);
}

Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.

RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing to allow clean shutdowns when the AppDomain shuts down.

Testing it out

I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows Service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting, because with the new OWIN based hosting no code changes at all were required - merely a couple of configuration file settings and an assembly directive were needed to point at the SignalR startup class. Sweet!

It also seems like SignalR is noticeably faster running inside of IIS compared to self-host.
Startup feels faster because of the preload.

Starting and Stopping the 'Service'

Because the application is running inside a Web server, it's easy to have a Web interface for starting and stopping the services running inside of it. For our queue manager the SignalR service and front-end monitoring app have a play and stop button for toggling the queue.

If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service.

To start and stop from the command line you can use the IIS appCmd tool. To stop:

> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

and to start:

> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.

What's not to like?

There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires it, be offloaded to its own Web server.

Easier to work with

But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut it down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

Summary

Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.

Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service, it's easy to do and you can leverage the same tool chain you're already using for other Web projects.
If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"; string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations throw the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?) Many thanks for your advice, Aaron

    Read the article

  • How to remove a package entirely?

    - by maria
    Hi I'm quite new to Linux, but before using it I was hearing that Windows programs, after uninstallation, leaves a lot of remains on the hard disc, and Linux removes all. I'm using Ubuntu 10.04. To uninstall packages I'm using sudo apt-get autoremove application_name or sudo aptitude purge application_name. Recently I have installed texlive-full and for some reasons I had quickly to uninstall it. After I've entered to terminal updatedb, then locate *texlive* and the output was very long: maria@marysia-ubuntu:~$ locate *texlive* /etc/texmf/fmt.d/10texlive-base.cnf /etc/texmf/fmt.d/10texlive-formats-extra.cnf /etc/texmf/fmt.d/10texlive-lang-cyrillic.cnf /etc/texmf/fmt.d/10texlive-lang-czechslovak.cnf /etc/texmf/fmt.d/10texlive-lang-polish.cnf /etc/texmf/fmt.d/10texlive-latex-base.cnf /etc/texmf/fmt.d/10texlive-math-extra.cnf /etc/texmf/fmt.d/10texlive-metapost.cnf /etc/texmf/fmt.d/10texlive-omega.cnf /etc/texmf/fmt.d/10texlive-xetex.cnf /etc/texmf/hyphen.d/09texlive-base.cnf /etc/texmf/hyphen.d/10texlive-lang-arabic.cnf /etc/texmf/hyphen.d/10texlive-lang-croatian.cnf /etc/texmf/hyphen.d/10texlive-lang-cyrillic.cnf /etc/texmf/hyphen.d/10texlive-lang-czechslovak.cnf /etc/texmf/hyphen.d/10texlive-lang-danish.cnf /etc/texmf/hyphen.d/10texlive-lang-dutch.cnf /etc/texmf/hyphen.d/10texlive-lang-finnish.cnf /etc/texmf/hyphen.d/10texlive-lang-french.cnf /etc/texmf/hyphen.d/10texlive-lang-german.cnf /etc/texmf/hyphen.d/10texlive-lang-greek.cnf /etc/texmf/hyphen.d/10texlive-lang-hungarian.cnf /etc/texmf/hyphen.d/10texlive-lang-indic.cnf /etc/texmf/hyphen.d/10texlive-lang-italian.cnf /etc/texmf/hyphen.d/10texlive-lang-latin.cnf /etc/texmf/hyphen.d/10texlive-lang-latvian.cnf /etc/texmf/hyphen.d/10texlive-lang-lithuanian.cnf /etc/texmf/hyphen.d/10texlive-lang-mongolian.cnf /etc/texmf/hyphen.d/10texlive-lang-norwegian.cnf /etc/texmf/hyphen.d/10texlive-lang-other.cnf /etc/texmf/hyphen.d/10texlive-lang-polish.cnf /etc/texmf/hyphen.d/10texlive-lang-portuguese.cnf /etc/texmf/hyphen.d/10texlive-lang-spanish.cnf /etc/texmf/hyphen.d/10texlive-lang-swedish.cnf /etc/texmf/hyphen.d/10texlive-lang-ukenglish.cnf /etc/texmf/updmap.d/10texlive-base.cfg /etc/texmf/updmap.d/10texlive-fonts-extra.cfg /etc/texmf/updmap.d/10texlive-fonts-recommended.cfg /etc/texmf/updmap.d/10texlive-games.cfg /etc/texmf/updmap.d/10texlive-lang-african.cfg /etc/texmf/updmap.d/10texlive-lang-arabic.cfg /etc/texmf/updmap.d/10texlive-lang-cyrillic.cfg /etc/texmf/updmap.d/10texlive-lang-czechslovak.cfg /etc/texmf/updmap.d/10texlive-lang-french.cfg /etc/texmf/updmap.d/10texlive-lang-greek.cfg /etc/texmf/updmap.d/10texlive-lang-hebrew.cfg /etc/texmf/updmap.d/10texlive-lang-indic.cfg /etc/texmf/updmap.d/10texlive-lang-lithuanian.cfg /etc/texmf/updmap.d/10texlive-lang-mongolian.cfg /etc/texmf/updmap.d/10texlive-lang-polish.cfg /etc/texmf/updmap.d/10texlive-lang-vietnamese.cfg /etc/texmf/updmap.d/10texlive-latex-base.cfg /etc/texmf/updmap.d/10texlive-latex-extra.cfg /etc/texmf/updmap.d/10texlive-math-extra.cfg /etc/texmf/updmap.d/10texlive-omega.cfg /etc/texmf/updmap.d/10texlive-pictures.cfg /etc/texmf/updmap.d/10texlive-science.cfg /var/cache/apt/archives/texlive-base_2009-7_all.deb /var/cache/apt/archives/texlive-bibtex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-binaries_2009-5ubuntu0.2_i386.deb /var/cache/apt/archives/texlive-common_2009-7_all.deb /var/cache/apt/archives/texlive-doc-base_2009-2_all.deb /var/cache/apt/archives/texlive-doc-bg_2009-2_all.deb /var/cache/apt/archives/texlive-doc-cs+sk_2009-2_all.deb 
/var/cache/apt/archives/texlive-doc-de_2009-2_all.deb /var/cache/apt/archives/texlive-doc-en_2009-2_all.deb /var/cache/apt/archives/texlive-doc-es_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-it_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ja_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ko_2009-2_all.deb /var/cache/apt/archives/texlive-doc-mn_2009-2_all.deb /var/cache/apt/archives/texlive-doc-nl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pt_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ru_2009-2_all.deb /var/cache/apt/archives/texlive-doc-si_2009-2_all.deb /var/cache/apt/archives/texlive-doc-th_2009-2_all.deb /var/cache/apt/archives/texlive-doc-tr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-uk_2009-2_all.deb /var/cache/apt/archives/texlive-doc-vi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-zh_2009-2_all.deb /var/cache/apt/archives/texlive-extra-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-font-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-fonts-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-formats-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-full_2009-7_all.deb /var/cache/apt/archives/texlive-games_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-humanities-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-humanities_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-lang-african_2009-3_all.deb /var/cache/apt/archives/texlive-lang-arabic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-armenian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-croatian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-cyrillic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-czechslovak_2009-3_all.deb /var/cache/apt/archives/texlive-lang-danish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-dutch_2009-3_all.deb /var/cache/apt/archives/texlive-lang-finnish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-french_2009-3_all.deb /var/cache/apt/archives/texlive-lang-german_2009-3_all.deb /var/cache/apt/archives/texlive-lang-greek_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hebrew_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hungarian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-indic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-italian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latin_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latvian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-lithuanian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-mongolian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-norwegian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-other_2009-3_all.deb /var/cache/apt/archives/texlive-lang-polish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-portuguese_2009-3_all.deb /var/cache/apt/archives/texlive-lang-spanish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-swedish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-tibetan_2009-3_all.deb 
/var/cache/apt/archives/texlive-lang-ukenglish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-vietnamese_2009-3_all.deb /var/cache/apt/archives/texlive-latex-base-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-base_2009-7_all.deb /var/cache/apt/archives/texlive-latex-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-latex3_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-luatex_2009-7_all.deb /var/cache/apt/archives/texlive-math-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-metapost-doc_2009-7_all.deb /var/cache/apt/archives/texlive-metapost_2009-7_all.deb /var/cache/apt/archives/texlive-music_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-omega_2009-7_all.deb /var/cache/apt/archives/texlive-pictures-doc_2009-7_all.deb /var/cache/apt/archives/texlive-pictures_2009-7_all.deb /var/cache/apt/archives/texlive-plain-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-xetex_2009-7_all.deb /var/cache/apt/archives/texlive_2009-7_all.deb /var/lib/dpkg/info/texlive-base.list /var/lib/dpkg/info/texlive-base.postrm /var/lib/dpkg/info/texlive-bibtex-extra.list /var/lib/dpkg/info/texlive-bibtex-extra.postrm /var/lib/dpkg/info/texlive-doc-base.list /var/lib/dpkg/info/texlive-doc-base.postrm /var/lib/dpkg/info/texlive-doc-bg.list /var/lib/dpkg/info/texlive-doc-bg.postrm /var/lib/dpkg/info/texlive-doc-cs+sk.list /var/lib/dpkg/info/texlive-doc-cs+sk.postrm /var/lib/dpkg/info/texlive-doc-de.list /var/lib/dpkg/info/texlive-doc-de.postrm /var/lib/dpkg/info/texlive-doc-en.list /var/lib/dpkg/info/texlive-doc-en.postrm /var/lib/dpkg/info/texlive-doc-es.list /var/lib/dpkg/info/texlive-doc-es.postrm /var/lib/dpkg/info/texlive-doc-fi.list /var/lib/dpkg/info/texlive-doc-fi.postrm /var/lib/dpkg/info/texlive-doc-fr.list /var/lib/dpkg/info/texlive-doc-fr.postrm /var/lib/dpkg/info/texlive-doc-it.list /var/lib/dpkg/info/texlive-doc-it.postrm /var/lib/dpkg/info/texlive-doc-ja.list /var/lib/dpkg/info/texlive-doc-ja.postrm /var/lib/dpkg/info/texlive-doc-ko.list /var/lib/dpkg/info/texlive-doc-ko.postrm /var/lib/dpkg/info/texlive-doc-mn.list /var/lib/dpkg/info/texlive-doc-mn.postrm /var/lib/dpkg/info/texlive-doc-nl.list /var/lib/dpkg/info/texlive-doc-nl.postrm /var/lib/dpkg/info/texlive-doc-pl.list /var/lib/dpkg/info/texlive-doc-pl.postrm /var/lib/dpkg/info/texlive-doc-pt.list /var/lib/dpkg/info/texlive-doc-pt.postrm /var/lib/dpkg/info/texlive-doc-ru.list /var/lib/dpkg/info/texlive-doc-ru.postrm /var/lib/dpkg/info/texlive-doc-si.list /var/lib/dpkg/info/texlive-doc-si.postrm /var/lib/dpkg/info/texlive-doc-th.list /var/lib/dpkg/info/texlive-doc-th.postrm /var/lib/dpkg/info/texlive-doc-tr.list /var/lib/dpkg/info/texlive-doc-tr.postrm /var/lib/dpkg/info/texlive-doc-uk.list /var/lib/dpkg/info/texlive-doc-uk.postrm /var/lib/dpkg/info/texlive-doc-vi.list /var/lib/dpkg/info/texlive-doc-vi.postrm /var/lib/dpkg/info/texlive-doc-zh.list /var/lib/dpkg/info/texlive-doc-zh.postrm 
/var/lib/dpkg/info/texlive-extra-utils.list /var/lib/dpkg/info/texlive-extra-utils.postrm /var/lib/dpkg/info/texlive-font-utils.list /var/lib/dpkg/info/texlive-font-utils.postrm /var/lib/dpkg/info/texlive-fonts-extra-doc.list /var/lib/dpkg/info/texlive-fonts-extra-doc.postrm /var/lib/dpkg/info/texlive-fonts-extra.list /var/lib/dpkg/info/texlive-fonts-extra.postrm /var/lib/dpkg/info/texlive-fonts-recommended-doc.list /var/lib/dpkg/info/texlive-fonts-recommended-doc.postrm /var/lib/dpkg/info/texlive-fonts-recommended.list /var/lib/dpkg/info/texlive-fonts-recommended.postrm /var/lib/dpkg/info/texlive-formats-extra.list /var/lib/dpkg/info/texlive-formats-extra.postrm /var/lib/dpkg/info/texlive-games.list /var/lib/dpkg/info/texlive-games.postrm /var/lib/dpkg/info/texlive-generic-extra.list /var/lib/dpkg/info/texlive-generic-extra.postrm /var/lib/dpkg/info/texlive-generic-recommended.list /var/lib/dpkg/info/texlive-generic-recommended.postrm /var/lib/dpkg/info/texlive-humanities-doc.list /var/lib/dpkg/info/texlive-humanities-doc.postrm /var/lib/dpkg/info/texlive-humanities.list /var/lib/dpkg/info/texlive-humanities.postrm /var/lib/dpkg/info/texlive-lang-african.list /var/lib/dpkg/info/texlive-lang-african.postrm /var/lib/dpkg/info/texlive-lang-arabic.list /var/lib/dpkg/info/texlive-lang-arabic.postrm /var/lib/dpkg/info/texlive-lang-armenian.list /var/lib/dpkg/info/texlive-lang-armenian.postrm /var/lib/dpkg/info/texlive-lang-croatian.list /var/lib/dpkg/info/texlive-lang-croatian.postrm /var/lib/dpkg/info/texlive-lang-cyrillic.list /var/lib/dpkg/info/texlive-lang-cyrillic.postrm /var/lib/dpkg/info/texlive-lang-czechslovak.list /var/lib/dpkg/info/texlive-lang-czechslovak.postrm /var/lib/dpkg/info/texlive-lang-danish.list /var/lib/dpkg/info/texlive-lang-danish.postrm /var/lib/dpkg/info/texlive-lang-dutch.list /var/lib/dpkg/info/texlive-lang-dutch.postrm /var/lib/dpkg/info/texlive-lang-finnish.list /var/lib/dpkg/info/texlive-lang-finnish.postrm /var/lib/dpkg/info/texlive-lang-french.list /var/lib/dpkg/info/texlive-lang-french.postrm /var/lib/dpkg/info/texlive-lang-german.list /var/lib/dpkg/info/texlive-lang-german.postrm /var/lib/dpkg/info/texlive-lang-greek.list /var/lib/dpkg/info/texlive-lang-greek.postrm /var/lib/dpkg/info/texlive-lang-hebrew.list /var/lib/dpkg/info/texlive-lang-hebrew.postrm /var/lib/dpkg/info/texlive-lang-hungarian.list /var/lib/dpkg/info/texlive-lang-hungarian.postrm /var/lib/dpkg/info/texlive-lang-indic.list /var/lib/dpkg/info/texlive-lang-indic.postrm /var/lib/dpkg/info/texlive-lang-italian.list /var/lib/dpkg/info/texlive-lang-italian.postrm /var/lib/dpkg/info/texlive-lang-latin.list /var/lib/dpkg/info/texlive-lang-latin.postrm /var/lib/dpkg/info/texlive-lang-latvian.list /var/lib/dpkg/info/texlive-lang-latvian.postrm /var/lib/dpkg/info/texlive-lang-lithuanian.list /var/lib/dpkg/info/texlive-lang-lithuanian.postrm /var/lib/dpkg/info/texlive-lang-mongolian.list /var/lib/dpkg/info/texlive-lang-mongolian.postrm /var/lib/dpkg/info/texlive-lang-norwegian.list /var/lib/dpkg/info/texlive-lang-norwegian.postrm /var/lib/dpkg/info/texlive-lang-other.list /var/lib/dpkg/info/texlive-lang-other.postrm /var/lib/dpkg/info/texlive-lang-polish.list /var/lib/dpkg/info/texlive-lang-polish.postrm /var/lib/dpkg/info/texlive-lang-portuguese.list /var/lib/dpkg/info/texlive-lang-portuguese.postrm /var/lib/dpkg/info/texlive-lang-spanish.list /var/lib/dpkg/info/texlive-lang-spanish.postrm /var/lib/dpkg/info/texlive-lang-swedish.list /var/lib/dpkg/info/texlive-lang-swedish.postrm 
/var/lib/dpkg/info/texlive-lang-tibetan.list /var/lib/dpkg/info/texlive-lang-tibetan.postrm /var/lib/dpkg/info/texlive-lang-ukenglish.list /var/lib/dpkg/info/texlive-lang-ukenglish.postrm /var/lib/dpkg/info/texlive-lang-vietnamese.list /var/lib/dpkg/info/texlive-lang-vietnamese.postrm /var/lib/dpkg/info/texlive-latex-base-doc.list /var/lib/dpkg/info/texlive-latex-base-doc.postrm /var/lib/dpkg/info/texlive-latex-base.list /var/lib/dpkg/info/texlive-latex-base.postrm /var/lib/dpkg/info/texlive-latex-extra-doc.list /var/lib/dpkg/info/texlive-latex-extra-doc.postrm /var/lib/dpkg/info/texlive-latex-extra.list /var/lib/dpkg/info/texlive-latex-extra.postrm /var/lib/dpkg/info/texlive-latex-recommended-doc.list /var/lib/dpkg/info/texlive-latex-recommended-doc.postrm /var/lib/dpkg/info/texlive-latex-recommended.list /var/lib/dpkg/info/texlive-latex-recommended.postrm /var/lib/dpkg/info/texlive-latex3.list /var/lib/dpkg/info/texlive-latex3.postrm /var/lib/dpkg/info/texlive-luatex.list /var/lib/dpkg/info/texlive-luatex.postrm /var/lib/dpkg/info/texlive-math-extra.list /var/lib/dpkg/info/texlive-math-extra.postrm /var/lib/dpkg/info/texlive-metapost-doc.list /var/lib/dpkg/info/texlive-metapost-doc.postrm /var/lib/dpkg/info/texlive-metapost.list /var/lib/dpkg/info/texlive-metapost.postrm /var/lib/dpkg/info/texlive-music.list /var/lib/dpkg/info/texlive-music.postrm /var/lib/dpkg/info/texlive-omega.list /var/lib/dpkg/info/texlive-omega.postrm /var/lib/dpkg/info/texlive-pictures-doc.list /var/lib/dpkg/info/texlive-pictures-doc.postrm /var/lib/dpkg/info/texlive-pictures.list /var/lib/dpkg/info/texlive-pictures.postrm /var/lib/dpkg/info/texlive-plain-extra.list /var/lib/dpkg/info/texlive-plain-extra.postrm /var/lib/dpkg/info/texlive-pstricks-doc.list /var/lib/dpkg/info/texlive-pstricks-doc.postrm /var/lib/dpkg/info/texlive-pstricks.list /var/lib/dpkg/info/texlive-pstricks.postrm /var/lib/dpkg/info/texlive-publishers-doc.list /var/lib/dpkg/info/texlive-publishers-doc.postrm /var/lib/dpkg/info/texlive-publishers.list /var/lib/dpkg/info/texlive-publishers.postrm /var/lib/dpkg/info/texlive-science-doc.list /var/lib/dpkg/info/texlive-science-doc.postrm /var/lib/dpkg/info/texlive-science.list /var/lib/dpkg/info/texlive-science.postrm /var/lib/dpkg/info/texlive-xetex.list /var/lib/dpkg/info/texlive-xetex.postrm maria@marysia-ubuntu:~$ I've used sudo apt-get autoclean without any change. I've installed deborphan and it showed nothing (maybe I've used it in wrong way: just entered command deborphan). Am I doing something wrong or I was told something which is not true? I would like to know two things: how to remove packages (if I'm doing it in wrong way) and how to clean hard disc from remains of all packages I've uninstalled till now (even if I don't remember what it was exactly). I have Ubuntu Tweak installed but I don't know how to use it and I think I prefere terminal commnands. Thanks

    Read the article

  • Windows Phone 7 development: reading RSS feeds

    - by DigiMortal
One limitation on Windows Phone 7 is related to the System.Net namespace classes. There is no convenient way to read data from the web. There is no WebClient class. There is no GetResponse() method – we have to do it all asynchronously because the compact framework has only a limited set of classes we can use in our applications to communicate with the internet. In this posting I will show you how to read RSS feeds on Windows Phone 7. NB! This is my draft code and it may contain some design flaws and some questionable solutions. This code is intended as a test drive for the Windows Phone 7 CTP developer tools and I don't suppose you are going to use this code in a production environment.

Current state of my RSS-reader

Currently my RSS-reader for Windows Phone 7 is very simple and primitive and uses almost all the defaults that come out of the box with the Windows Phone 7 CTP developer tools. My first goal, before going on with a nicer user interface design, was making RSS reading work, because instead of the convenient classes from the .NET Framework we have to use the very limited classes from .NET Framework CE. This is why I took the reading of RSS feeds as my first task. There are currently more things to solve regarding the user interface. As I am pretty new to all this Silverlight stuff I am not very sure if I can modify the default controls easily or whether I should write my own controls that look better and work faster. The image on the right shows how my RSS-reader looks right now. The upper side of the screen is filled with a list that shows headlines from this blog. The bottom part of the screen is used to show the description of the selected posting. You can click on the image to see it in original size. In my next posting I will show you some improvements to my RSS-reader user interface that make it look nicer. But currently it is nice enough to make sure that RSS feeds are read correctly.

FeedItem class

As this is the most straightforward part of the following code I will show you the RSS feed item class first. It is worth a quick stop because it is a simple one.

public class FeedItem
{
    public string Title { get; set; }
    public string Description { get; set; }
    public DateTime PublishDate { get; set; }
    public List<string> Categories { get; set; }
    public string Link { get; set; }

    public FeedItem()
    {
        Categories = new List<string>();
    }
}

RssClient

RssClient takes a feed URL and, when asked, loads all items from the feed and gives them back to the caller through the ItemsReceived event. Why does it work this way? Because we can make responses only using asynchronous methods. I will show you in the next section how to use this class. Although the code here is not very good, it works as expected. I will refactor this code later because it needs some more effort and investigation. But let's hope I find an excellent solution.
:) public class RssClient {     private readonly string _rssUrl;       public delegate void ItemsReceivedDelegate(RssClient client, IList<FeedItem> items);     public event ItemsReceivedDelegate ItemsReceived;       public RssClient(string rssUrl)     {         _rssUrl = rssUrl;     }       public void LoadItems()     {         var request = (HttpWebRequest)WebRequest.Create(_rssUrl);         var result = (IAsyncResult)request.BeginGetResponse(ResponseCallback, request);     }       void ResponseCallback(IAsyncResult result)     {         var request = (HttpWebRequest)result.AsyncState;         var response = request.EndGetResponse(result);           var stream = response.GetResponseStream();         var reader = XmlReader.Create(stream);         var items = new List<FeedItem>(50);           FeedItem item = null;         var pointerMoved = false;           while (!reader.EOF)         {             if (pointerMoved)             {                 pointerMoved = false;             }             else             {                 if (!reader.Read())                     break;             }               var nodeName = reader.Name;             var nodeType = reader.NodeType;               if (nodeName == "item")             {                 if (nodeType == XmlNodeType.Element)                     item = new FeedItem();                 else if (nodeType == XmlNodeType.EndElement)                     if (item != null)                     {                         items.Add(item);                         item = null;                     }                   continue;             }               if (nodeType != XmlNodeType.Element)                 continue;               if (item == null)                 continue;               reader.MoveToContent();             var nodeValue = reader.ReadElementContentAsString();             // we just moved internal pointer             pointerMoved = true;               if (nodeName == "title")                 item.Title = nodeValue;             else if (nodeName == "description")                 item.Description =  Regex.Replace(nodeValue,@"<(.|\n)*?>",string.Empty);             else if (nodeName == "feedburner:origLink")                 item.Link = nodeValue;             else if (nodeName == "pubDate")             {                 if (!string.IsNullOrEmpty(nodeValue))                     item.PublishDate = DateTime.Parse(nodeValue);             }             else if (nodeName == "category")                 item.Categories.Add(nodeValue);         }           if (ItemsReceived != null)             ItemsReceived(this, items);     } } This method is pretty long but it works. Now let’s try to use it in Windows Phone 7 application. Using RssClient And this is the fragment of code behing the main page of my application start screen. You can see how RssClient is initialized and how items are bound to list that shows them. 
public MainPage()
{
    InitializeComponent();

    SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape;
    listBox1.Width = Width;

    var rssClient = new RssClient("http://feedproxy.google.com/gunnarpeipman");
    rssClient.ItemsReceived += new RssClient.ItemsReceivedDelegate(rssClient_ItemsReceived);
    rssClient.LoadItems();
}

void rssClient_ItemsReceived(RssClient client, IList<FeedItem> items)
{
    Dispatcher.BeginInvoke(delegate()
    {
        listBox1.ItemsSource = items;
    });
}

Conclusion

As you can see it was not a very hard task to read an RSS feed and populate a list with feed entries. Although we are not able to use the more powerful classes that are part of the full version of the .NET Framework, we can still live with the limited set of classes that .NET Framework CE provides.

    Read the article

  • Recursion in Ecore-File?!

    - by Dominik
    Hey guys, just tried to convert towards a Ecore-Model from a given UML-Model. After this I am trying to create a Generator Model. Everytime I try to do this I get the Error Message, that there is a "Unhandled event loop exception" with this log: org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.NullPointerException) at org.eclipse.swt.SWT.error(SWT.java:3884) at org.eclipse.swt.SWT.error(SWT.java:3799) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:137) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:3885) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3506) at org.eclipse.jface.window.Window.runEventLoop(Window.java:825) at org.eclipse.jface.window.Window.open(Window.java:801) at org.eclipse.gmf.internal.bridge.ui.dashboard.DashboardMediator$RunWizardAction.run(DashboardMediator.java:316) at org.eclipse.gmf.internal.bridge.ui.dashboard.HyperlinkFigure$1.mousePressed(HyperlinkFigure.java:63) at org.eclipse.draw2d.Figure.handleMousePressed(Figure.java:873) at org.eclipse.draw2d.SWTEventDispatcher.dispatchMousePressed(SWTEventDispatcher.java:214) at org.eclipse.draw2d.LightweightSystem$EventHandler.mouseDown(LightweightSystem.java:513) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:179) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3910) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3503) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Caused by: java.lang.NullPointerException at org.eclipse.emf.converter.util.ConverterUtil.computeRequiredPackages(ConverterUtil.java:374) at org.eclipse.emf.converter.ui.contribution.base.ModelConverterPackagePage.validate(ModelConverterPackagePage.java:965) at org.eclipse.emf.importer.ui.contribution.base.ModelImporterPackagePage.validate(ModelImporterPackagePage.java:101) at org.eclipse.emf.converter.ui.contribution.base.ModelConverterPackagePage$1.run(ModelConverterPackagePage.java:155) at 
org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:134) ... 34 more After this there occurs another exception with this text: "Unable to create editor ID org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditorID:An unexpected exception was thrown." The session data says: eclipse.buildId=unknown java.version=1.6.0_13 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=de_DE Framework arguments: -product org.eclipse.epp.package.modeling.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.modeling.product -consoleLog With this long log: java.lang.NullPointerException at org.eclipse.emf.ecore.util.EcoreUtil.getURI(EcoreUtil.java:2887) at org.eclipse.emf.codegen.ecore.genmodel.impl.GenModelImpl.diagnose(GenModelImpl.java:2930) at org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditor.validate(GenModelEditor.java:1773) at org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditor.initialize(GenModelEditor.java:596) at org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditor.createPages(GenModelEditor.java:1080) at org.eclipse.ui.part.MultiPageEditorPart.createPartControl(MultiPageEditorPart.java:357) at org.eclipse.ui.internal.EditorReference.createPartHelper(EditorReference.java:662) at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:462) at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595) at org.eclipse.ui.internal.EditorReference.getEditor(EditorReference.java:286) at org.eclipse.ui.internal.WorkbenchPage.busyOpenEditorBatched(WorkbenchPage.java:2857) at org.eclipse.ui.internal.WorkbenchPage.busyOpenEditor(WorkbenchPage.java:2762) at org.eclipse.ui.internal.WorkbenchPage.access$11(WorkbenchPage.java:2754) at org.eclipse.ui.internal.WorkbenchPage$10.run(WorkbenchPage.java:2705) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2701) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2685) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2668) at org.eclipse.emf.converter.ui.contribution.base.ModelConverterWizard.openEditor(ModelConverterWizard.java:318) at org.eclipse.emf.importer.ui.contribution.base.ModelImporterWizard.performFinish(ModelImporterWizard.java:167) at org.eclipse.jface.wizard.WizardDialog.finishPressed(WizardDialog.java:752) at org.eclipse.gmf.internal.bridge.ui.dashboard.DashboardMediator$RunWizardAction$1.finishPressed(DashboardMediator.java:311) at org.eclipse.jface.wizard.WizardDialog.buttonPressed(WizardDialog.java:373) at org.eclipse.jface.dialogs.Dialog$2.widgetSelected(Dialog.java:624) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:228) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3910) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3503) at org.eclipse.jface.window.Window.runEventLoop(Window.java:825) at org.eclipse.jface.window.Window.open(Window.java:801) at org.eclipse.gmf.internal.bridge.ui.dashboard.DashboardMediator$RunWizardAction.run(DashboardMediator.java:316) at org.eclipse.gmf.internal.bridge.ui.dashboard.HyperlinkFigure$1.mousePressed(HyperlinkFigure.java:63) at 
org.eclipse.draw2d.Figure.handleMousePressed(Figure.java:873) at org.eclipse.draw2d.SWTEventDispatcher.dispatchMousePressed(SWTEventDispatcher.java:214) at org.eclipse.draw2d.LightweightSystem$EventHandler.mouseDown(LightweightSystem.java:513) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:179) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3910) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3503) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Has anyone an idea what is going wrong? I looked a while at my model but were not able to find something wrong. I just thought there might be a recursion due to the "Unhandled event loop exception" but is this even possible? Thanks in advance, Dominik

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

Core Engine Features

Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features.

Clean web.config Files Are Back!

If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, and so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this:

<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
</configuration>

Need I say more?

Configuration Transformation Files to Manage Configurations and Application Packaging

ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file.
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
         connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
  </connectionStrings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
    <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
      <error statusCode="500" redirect="InternalError.htm"/>
    </customErrors>
  </system.web>
</configuration>

You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact.

Visual Studio 2010 supports this functionality directly in the project system, so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu, which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje.

Improved Routing

Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then integrated it with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page request without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
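The article doesn't show the registration side, but as a rough sketch, wiring up such a route at application startup might look like this (the route name, URL template and target page are illustrative assumptions):

using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Maps a clean URL like /users/ricks to the physical UserDetail.aspx page.
        // "users" is the route name that GetRouteUrl()/RedirectToRoute() refer to later.
        RouteTable.Routes.MapPageRoute(
            "users",              // route name
            "users/{userId}",     // URL template
            "~/UserDetail.aspx"); // physical page that handles the request
    }
}

Inside the target page the userId segment is then available via this.RouteData.Values["userId"] as described above.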
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL:

<%= this.GetRouteUrl("users", new { userId="ricks" }) %>

You can also use the new expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code:

<a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

Finally, the Response object also includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding the URL.

Response.RedirectToRoute("users", new { userId = "ricks" });

All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog, which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

Session State Improvements

Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data.

In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing, as long as it occurs before the AcquireRequestState pipeline event.

Output Cache Provider

Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug in custom storage strategies for cached pages.
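To give a feel for this extensibility point, here's a bare-bones sketch of what a custom provider could look like, using a simple in-memory dictionary as the store (the class name and storage choice are assumptions for illustration only; a real provider would typically target a distributed or durable store):

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

public class SimpleMemoryOutputCacheProvider : OutputCacheProvider
{
    // Naive in-process store keyed by cache key; values carry their UTC expiration.
    private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> Cache =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Get(string key)
    {
        Tuple<object, DateTime> entry;
        if (Cache.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
            return entry.Item1;
        return null;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Add must return an existing, unexpired entry if one is already cached.
        var existing = Get(key);
        if (existing != null)
            return existing;
        Cache[key] = Tuple.Create(entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        Cache[key] = Tuple.Create(entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Tuple<object, DateTime> removed;
        Cache.TryRemove(key, out removed);
    }
}

A provider along these lines is then registered under the <caching>/<outputCache> element in web.config and selected via its defaultProvider attribute.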
Output Cache Provider

Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime-related parameters. While this works well enough for many intended scenarios, it can also quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug in custom storage strategies for cached pages. One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general into a generic Windows AppFabric framework in the future, so various mechanisms like OutputCache, the non-Page-specific ASP.NET cache and possibly even session state can eventually use the same caching engine for storage of persisted data, both in memory and in out-of-process scenarios.

For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.

Response.RedirectPermanent

ASP.NET 4.0 includes features to issue a permanent redirect that returns an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route URLs.

Preloading of Applications

ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, ASP.NET 4.0 lets you force the application to initialize itself first, before it even accepts requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and is configured through IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.

The configuration settings need to be made in applicationhost.config:

<sites>
  <site name="WebApplication2" id="1">
    <application path="/"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="PreWarmup" />
  </site>
</sites>

<serviceAutoStartProviders>
  <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
</serviceAutoStartProviders>

Hooking up a warmup provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like:

public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // initialization for app
    }
}

While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline, so until it completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at an optimal performance level without lag.
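Coming back to the output cache provider model for a moment, here is a minimal sketch of what a custom provider can look like. The class name and the in-memory dictionary storage are assumptions for illustration (a real provider would typically write to disk, a database or a distributed cache); the four overrides are the abstract members that System.Web.Caching.OutputCacheProvider requires.

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

// Hypothetical provider: stores cached page output in a concurrent dictionary.
public class InMemoryOutputCacheProvider : OutputCacheProvider
{
    private class Entry { public object Value; public DateTime UtcExpiry; }

    private static readonly ConcurrentDictionary<string, Entry> cache =
        new ConcurrentDictionary<string, Entry>();

    public override object Get(string key)
    {
        Entry entry;
        if (cache.TryGetValue(key, out entry) && entry.UtcExpiry > DateTime.UtcNow)
            return entry.Value;

        cache.TryRemove(key, out entry);   // expired or missing
        return null;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Add returns the existing entry if one is already cached under this key.
        object existing = Get(key);
        if (existing != null)
            return existing;

        Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        cache[key] = new Entry { Value = entry, UtcExpiry = utcExpiry };
    }

    public override void Remove(string key)
    {
        Entry removed;
        cache.TryRemove(key, out removed);
    }
}

The provider is then registered under the <caching>/<outputCache> section in web.config and selected via the defaultProvider attribute, or per request by overriding GetOutputCacheProviderName() in Global.asax.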
Runtime Performance Improvements

According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0.

Summary

The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET


  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
Recently there have been several reports on TFS databases growing too fast and growing too big. Notably, this has been observed when one has started to use more features of the Testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this, some tools have been released to remove unneeded data in the database, along with some fixes to correct bugs which have been found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are:

Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective.
Brian Harry's blog post here describes the problem too.
This forum thread describes the problem with some solution hints.
Ravi Shanker's blog post here describes best practices on solving this (TBP).
Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

The problem can be divided into the following areas:

Publishing of test results from builds
Publishing of manual test results and their attachments in particular
Publishing of deployment binaries for use during a test run
Bugs in SQL Server preventing total cleanup of data

(All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), contributes a lot to the size of the attached data.

In order to handle this systematically, I have set up a 3-stage process:

Find out if you have a database space issue
Set up your TFS server to minimize potential database issues
If you have the "problem", clean up the database and otherwise keep it clean

Analyze the data

Are your database(s) growing? Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I find queries often faster to use because I can tweak them the way I want them. But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them.

If you have multiple Project Collections, find out which might have problems. (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
use master
select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
FROM sys.master_files
where type=0
  and substring(db_name(database_id),1,4)='Tfs_'
  and DB_NAME(database_id)<>'Tfs_Configuration'
order by size desc

Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work.

Find out which tables are possibly too large

Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: From Grant's blog we learnt that tbl_Content is in the Version Control category, so the only big issue we have here is the tbl_AttachmentContent table.

Find out which team projects have possibly too large attachments

In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this; replace the collection database name with whatever applies in your case:

use Tfs_DefaultCollection
select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by p.projectname
order by sum(a.compressedlength) desc

In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the "Byggtjeneste – Projects" are the main team project to take care of, with the ones on lines 2-4 as the next ones.

Check which attachment types take up the most space

It can be nice to know which attachment types take up the space, so run the following query:

use Tfs_DefaultCollection
select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by a.attachmenttype
order by sum(a.compressedlength) desc

We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

Check which file types, by their extension, take up the most space

Run the following query:

use Tfs_DefaultCollection
select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension, sum(compressedlength)/1024 as SizeInKB
from tbl_Attachment
group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
order by sum(compressedlength) desc

This gives a result like this: Now you should have collected enough information to tell you what to do – if you have to do something at all – and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.
Binaries will still be uploaded if:

Code coverage is enabled in the test settings.
You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

The hotfix should be installed to:

The build servers (the build agents)
The machine hosting the Test Controller
Local development computers (Visual Studio)
Local test computers (MTM)

It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs.

If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

Work-around: If you suspect hanging ghost files, they can, with some mental effort, be deduced from the ghost counters using the following SQL query:

use master
SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
       index_type_desc, ghost_record_count, version_ghost_record_count, record_count, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

The problem is a stalled ghost cleanup process. The remedy is to stop all components that depend on the SQL Server, like the TFS Server and SPS services – that is, all applications that connect to the SQL Server – then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link here to that CU. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data. The SQL Server can get into this hanging state (without the QFE) in certain cases due to this.

And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:

"When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

This rule minimizes the risk of the ghost hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly.

"Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"

This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrinkdb command.

Cleaning out the attachments

The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI, including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
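Since the team project is a required parameter, a small driver that loops over your projects and shells out to the TAC is all the "script" you really need. Here is a minimal C# sketch of that idea; the collection URL, project names and file paths are assumptions made up for this example, and the arguments used are the ones shown with the tcmpt command later in this article.

using System;
using System.Diagnostics;
using System.IO;

// Hypothetical driver that runs the Test Attachment Cleaner once per team project.
class TacDriver
{
    static void Main()
    {
        string collection = "http://tfsserver:8080/tfs/DefaultCollection"; // assumption
        string settingsFile = @"C:\Tac\nightly-cleanup.xml";               // assumption
        string[] teamProjects = { "ProjectA", "ProjectB", "ProjectC" };    // assumption

        foreach (string project in teamProjects)
        {
            string logFile = Path.Combine(Path.GetTempPath(), project + ".tcmpt.log");

            string args = string.Format(
                "attachmentcleanup /collection:{0} /teamproject:\"{1}\" " +
                "/settingsfile:\"{2}\" /outputfile:\"{3}\" /mode:delete",
                collection, project, settingsFile, logFile);

            // tcmpt.exe must be on the PATH or referenced by its full install path.
            using (Process process = Process.Start("tcmpt", args))
            {
                process.WaitForExit();
                Console.WriteLine("{0}: exit code {1}", project, process.ExitCode);
            }
        }
    }
}

Scheduled through Windows Task Scheduler to run nightly, this gives you the small, regular deletion batches recommended above.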
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up.

Grant has shown how his settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-time clean-up operation and in a general maintenance schedule. There are two scenarios we need to consider:

Cleaning up an existing overgrown database
Maintaining a server to avoid an overgrown database using a scheduled TAC

1. Cleaning up a database which has grown too big due to these attachments

This job is a "Once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff; that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the last one. A settings file to remove only large attachments could look like this:

<!-- Scenario : Remove large files -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
  </Attachment>
</DeletionCriteria>

Or like this, if you want to remove only dll's and pdb's above that size: add an Extensions section. Without that section, all extensions will be deleted.

<!-- Scenario : Remove large files of type dll's and pdb's -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="dll" />
      <Include value="pdb" />
    </Extensions>
  </Attachment>
</DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.

2. Scheduled maintenance using the TAC

Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

<!-- Scenario : Remove old items except if they have active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved"/>
  </LinkedBugs>
</DeletionCriteria>

In my experience there are projects which are left with active or resolved work items, although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could care about, and then have another scheduled TAC task to take out, more specifically, attachments that you are not likely to use.
This task could be much more specific, and based on your analysis clean out what you know is troublesome data.

<!-- Scenario : Remove specific files early -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun>
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="iTrace"/>
      <Include value="dll"/>
      <Include value="pdb"/>
      <Include value="wmv"/>
    </Extensions>
  </Attachment>
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

The readme document for the TAC says that it recognizes "internal" extensions, but it will in fact accept any extension. To run the tool, use the following command:

tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database

You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In this case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as a bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL Maintenance Plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.


  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 – Table per Hierarchy (TPH) Part 2 – Table per Type (TPT)In today’s blog post I am going to discuss Table per Concrete Type (TPC) which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines to choose an inheritance strategy mainly based on what we've learned in this series. TPC and Entity Framework in the Past Table per Concrete type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far and I've seen in some resources that it was even discouraged. The reason for that is just because Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means if you are following EF's Database-First or Model-First approaches then configuring TPC requires manually writing XML in the EDMX file which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with fluent API just like other strategies and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches. Table per Concrete Type (TPC)In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure: As you can see, the SQL schema is not aware of the inheritance; effectively, we’ve mapped two unrelated tables to a more expressive class structure. If the base class was concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns. TPC Implementation in Code First Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on EntityMappingConfiguration class called MapInheritedProperties that exactly does this for us. 
Here is the complete object model as well as the fluent API to create a TPC mapping: public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } }          public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } }          public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } }      public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; }              protected override void OnModelCreating(ModelBuilder modelBuilder)     {         modelBuilder.Entity<BankAccount>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("BankAccounts");         });         modelBuilder.Entity<CreditCard>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("CreditCards");         });                 } } The Importance of EntityMappingConfiguration ClassAs a side note, it worth mentioning that EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is an snapshot of this class: namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping {     public class EntityMappingConfiguration<TEntityType> where TEntityType : class     {         public ValueConditionConfiguration Requires(string discriminator);         public void ToTable(string tableName);         public void MapInheritedProperties();     } } As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT and now we are using its MapInheritedProperties along with ToTable method to create our TPC mapping. TPC Configuration is Not Done Yet!We are not quite done with our TPC configuration and there is more into this story even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of BankAccount and CreditCard types and tries to add them to the database: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount();     CreditCard creditCard = new CreditCard() { CardType = 1 };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Running this code throws an InvalidOperationException with this message: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges. The reason we got this exception is because DbContext.SaveChanges() internally invokes SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method on its turn by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges method merely iterates over all entries in ObjectStateManager and invokes AcceptChanges on each of them. 
Since the entities are in the Added state, the AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: both entities have been assigned the same value for their primary key by the database (i.e. BillingDetailId = 1 on both), and ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at the TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

How to Solve The Identity Problem in TPC

As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. Using GUID keys, or int identity keys with different starting seeds, will solve the problem, but yet another solution would be to completely switch off identity on the primary key property. As a result, we need to take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

Switching Off Identity in Code First

We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. The DatabaseGenerated attribute is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

public abstract class BillingDetail
{
    [DatabaseGenerated(DatabaseGenerationOption.None)]
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

As always, we can achieve the same result by using the fluent API, if you prefer that:

modelBuilder.Entity<BillingDetail>()
            .Property(p => p.BillingDetailId)
            .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

Working With The Object Model

Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

using (var context = new InheritanceMappingContext())
{
    BankAccount bankAccount = new BankAccount()
    {
        BillingDetailId = 1
    };
    CreditCard creditCard = new CreditCard()
    {
        BillingDetailId = 2,
        CardType = 1
    };

    context.BillingDetails.Add(bankAccount);
    context.BillingDetails.Add(creditCard);
    context.SaveChanges();
}

Polymorphic Associations with TPC are Problematic

The main problem with this approach is that it doesn't support polymorphic associations very well.
After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

Schema Evolution with TPC is Complex

A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

Generated SQL

Let's examine the SQL output for polymorphic queries in TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that are executed in the database:

var query = from b in context.BillingDetails select b;

Just like the SQL query generated by TPT mapping, the CASE statements that you see in the beginning of the query are merely there to ensure that columns that are irrelevant for a particular row have NULL values in the returning flattened table (e.g. BankName for a row that represents a CreditCard type).

TPC's SQL Queries are Union Based

As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (which is selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result (look at the lines highlighted in yellow). EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well since here we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it has better performance than the SQL queries generated by TPT, where a join is required between the base and subclass tables.

Choosing Strategy Guidelines

Before we get into this discussion, I want to emphasize that there is no single "best strategy for all scenarios". As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario:

If you don't require polymorphic associations or queries, lean toward TPC; in other words, if you never or rarely query for BillingDetails and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.
If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH.
Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.
If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

Summary

In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, which is inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into the mapping of inheritance hierarchies as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

References
ADO.NET team blog
Java Persistence with Hibernate book

