Search Results

Search found 10719 results on 429 pages for 'temp tables'.


  • Is there a way to delay compilation of a stored procedure's execution plan?

    - by Ian Henry
    (At first glance this may look like a duplicate of http://stackoverflow.com/questions/421275 or http://stackoverflow.com/questions/414336, but my actual question is a bit different) Alright, this one's had me stumped for a few hours. My example here is ridiculously abstracted, so I doubt it will be possible to recreate locally, but it provides context for my question (Also, I'm running SQL Server 2005). I have a stored procedure with basically two steps, constructing a temp table, populating it with very few rows, and then querying a very large table joining against that temp table. It has multiple parameters, but the most relevant is a datetime "@MinDate." Essentially: create table #smallTable (ID int) insert into #smallTable select (a very small number of rows from some other table) select * from aGiantTable inner join #smallTable on #smallTable.ID = aGiantTable.ID inner join anotherTable on anotherTable.GiantID = aGiantTable.ID where aGiantTable.SomeDateField > @MinDate If I just execute this as a normal query, by declaring @MinDate as a local variable and running that, it produces an optimal execution plan that executes very quickly (first joins on #smallTable and then only considers a very small subset of rows from aGiantTable while doing other operations). It seems to realize that #smallTable is tiny, so it would be efficient to start with it. This is good. However, if I make that a stored procedure with @MinDate as a parameter, it produces a completely inefficient execution plan. (I am recompiling it each time, so it's not a bad cached plan...at least, I sure hope it's not) But here's where it gets weird. If I change the proc to the following: declare @LocalMinDate datetime set @LocalMinDate = @MinDate --where @MinDate is still a parameter create table #smallTable (ID int) insert into #smallTable select (a very small number of rows from some other table) select * from aGiantTable inner join #smallTable on #smallTable.ID = aGiantTable.ID inner join anotherTable on anotherTable.GiantID = aGiantTable.ID where aGiantTable.SomeDateField > @LocalMinDate Then it gives me the efficient plan! So my theory is this: when executing as a plain query (not as a stored procedure), it waits to construct the execution plan for the expensive query until the last minute, so the query optimizer knows that #smallTable is small and uses that information to give the efficient plan. But when executing as a stored procedure, it creates the entire execution plan at once, thus it can't use this bit of information to optimize the plan. But why does using the locally declared variables change this? Why does that delay the creation of the execution plan? Is that actually what's happening? If so, is there a way to force delayed compilation (if that indeed is what's going on here) even when not using local variables in this way? More generally, does anyone have sources on when the execution plan is created for each step of a stored procedure? Googling hasn't provided any helpful information, but I don't think I'm looking for the right thing. Or is my theory just completely unfounded? Edit: Since posting, I've learned of parameter sniffing, and I assume this is what's causing the execution plan to compile prematurely (unless stored procedures indeed compile all at once), so my question remains -- can you force the delay? Or disable the sniffing entirely? 
The question is academic, since I can force a more efficient plan by replacing the select * from aGiantTable with select * from (select * from aGiantTable where ID in (select ID from #smallTable)) as aGiantTable Or just sucking it up and masking the parameters, but still, this inconsistency has me pretty curious.
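
    One workaround not mentioned in the post, offered as a hedged sketch for SQL Server 2005 (table and parameter names are taken from the question; someOtherTable stands in for the elided source): a statement-level OPTION (RECOMPILE) hint makes the optimizer compile that one statement at execution time, after #smallTable is populated, so both the temp table's true row count and the actual @MinDate value are visible to it. OPTIMIZE FOR is the other commonly cited hint, but it pins a fixed value rather than delaying compilation.

        create table #smallTable (ID int)
        insert into #smallTable
        select ID from someOtherTable -- hypothetical stand-in for "a very small number of rows"

        select *
        from aGiantTable
        inner join #smallTable on #smallTable.ID = aGiantTable.ID
        inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
        where aGiantTable.SomeDateField > @MinDate
        option (recompile) -- this statement's plan is built fresh, with actual values, at run time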


  • How can I display the products a user clicked in a list in another view?

    - by Avar
    I am using the MVC3 view model pattern with Entity Framework in my web application. My Index view is a list of products with an image, price, description, etc. Products with the information mentioned above sit in div boxes with a button that says "Buy". I will be working with two views: the Index view, which displays all the products, and another view, which displays the products whose Buy button was clicked. What I am trying to achieve is that when a user clicks the Buy button, the product gets stored for the other view, the Cart view, and is displayed there. My problem is how to begin coding that part. The Index view with products is done, and now the Buy button function is left to do, but I have no idea how to start. This is my IndexController: private readonly HomeRepository repository = new HomeRepository(); public ActionResult Index() { var Productlist = repository.GetAllProducts(); var model = new HomeIndexViewModel() { Productlist = new List<ProductsViewModel>() }; foreach (var Product in Productlist) { FillProductToModel(model, Product); } return View(model); } private void FillProductToModel(HomeIndexViewModel model, ProductImages productimage) { var productViewModel = new ProductsViewModel { Description = productimage.Products.Description, ProductId = productimage.Products.Id, price = productimage.Products.Price, Name = productimage.Products.Name, Image = productimage.ImageUrl, }; model.Productlist.Add(productViewModel); } In my ActionResult Index I use my repository to get the products and then bind the data from the products to my view model, so I can use the view model inside my view. That's how I display all the products in my view. This is my Index view: @model Avan.ViewModels.HomeIndexViewModel @foreach (var item in Model.Productlist) { <div id="productholder@(item.ProductId)" class="productholder"> <img src="@Html.DisplayFor(modelItem => item.Image)" alt="" /> <div class="productinfo"> <h2>@Html.DisplayFor(modelItem => item.Name)</h2> <p>@Html.DisplayFor(modelItem => item.Description)</p> @Html.Hidden("ProductId", item.ProductId, new { @id = "ProductId" }) </div> <div class="productprice"> <h2>@Html.DisplayFor(modelItem => item.price)</h2> <input type="button" value="Läs mer" class="button" id="button@(item.ProductId)"> @Html.ActionLink("x", "Cart", new { id = item.ProductId }) // <- temp, it's going to be a button </div> </div> } Since I can get the product ID per product, I can use the ID in my controller to get the data from the database. But I still have no idea how to do that: when somebody clicks the Buy button, where do I store the ID, and how do I use it to achieve what I want? Right now I have been trying the following in my IndexController: public ActionResult cart(int id) { var SelectedProducts = repository.GetProductByID(id); return View(); } What I did here is get the product by its ID, so when someone presses the temporary "x" ActionLink I receive the product. All I know is that something like this is needed to achieve what I'm trying to do, but after that I have no idea what to do or what kind of structure to do it in. Any kind of help is appreciated a lot! Short scenario: looking at the Index, I see 5 products and choose to buy 3, so I click on three Buy buttons. Then I click on "Cart" in the nav menu; a new view pops up and I see the three products that I clicked to buy.
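
    A minimal sketch of one common approach, not the poster's code: it assumes GetProductByID returns a product object the Cart view can render, and uses ASP.NET session state (plus System.Linq) to remember the clicked IDs between requests.

        // Buy action: remember the clicked product's ID in the session.
        public ActionResult Buy(int id)
        {
            var ids = Session["CartIds"] as List<int> ?? new List<int>();
            ids.Add(id);
            Session["CartIds"] = ids;
            return RedirectToAction("Index");
        }

        // Cart action: turn the remembered IDs back into products for the view.
        public ActionResult Cart()
        {
            var ids = Session["CartIds"] as List<int> ?? new List<int>();
            var products = ids.Select(id => repository.GetProductByID(id)).ToList();
            return View(products);
        }

    The Buy button would then post to the Buy action (via a small form or Ajax) instead of linking straight to Cart.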


  • RSA encryption/decryption in a client-server application

    - by user308806
    Hi guys, I'm probably missing something very straightforward here, but please forgive me, I'm very naive! I have a client-server application where the client identifies itself with an RSA-encrypted username and password. Unfortunately I'm getting a "bad padding exception: data must start with zero" when I try to decrypt with the public key on the client side. I'm fairly sure the key is correct, as I have tested encrypting with the public key and then decrypting with the private key on the client side with no problems at all. It just seems that when I transfer it over the connection it gets messed up somehow. I'm using PrintWriter and BufferedReader on the sockets, if that's of importance. EncodeBASE64 and DecodeBASE64 convert byte[] to Base64 and back, respectively. Any ideas? Client side: Socket connectionToServer = new Socket("127.0.0.1", 7050); InputStream in = connectionToServer.getInputStream(); DataInputStream dis = new DataInputStream(in); int length = dis.readInt(); byte[] data = new byte[length]; // dis.readFully(data); dis.read(data); System.out.println("The received Data*****************************************"); System.out.println("The length of bits "+ length); System.out.println(data); System.out.println("***********************************************************"); Decryption d = new Decryption(); byte [] ttt = d.decrypt(data); System.out.print(data); String ss = new String(ttt); System.out.println("***********************"); System.out.println(ss); System.out.println("************************"); Server side: in = connectionFromClient.getInputStream(); OutputStream out = connectionFromClient.getOutputStream(); DataOutputStream dataOut = new DataOutputStream(out); LicenseList licenses = new LicenseList(); String ValidIDs = licenses.getAllIDs(); System.out.println(ValidIDs); Encryption enc = new Encryption(); byte[] encrypted = enc.encrypt(ValidIDs); byte[] dd = enc.encrypt(ValidIDs); String tobesent = new String(dd); //byte[] rsult = enc.decrypt(dd); //String tt = String(rsult); System.out.println("The sent data**********************************************"); System.out.println(dd); String temp = new String(dd); System.out.println(temp); System.out.println("*************************************************************"); //BufferedWriter bf = new BufferedWriter(OutputStreamWriter(out)); //dataOut.write(ValidIDs.getBytes().length); dataOut.writeInt(ValidIDs.getBytes().length); dataOut.flush(); dataOut.write(encrypted); dataOut.flush(); System.out.println("********Testing**************"); System.out.println("Here are the ids:::"); System.out.println(licenses.getAllIDs()); System.out.println("**********************"); //bw.write("it is working well\n");
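
    Two things stand out in the posted code, so here is a hedged sketch rather than a definitive fix. First, the server writes the plaintext length (ValidIDs.getBytes().length) but then sends the ciphertext, whose length is the RSA block size, so the client allocates the wrong buffer. Second, dis.read(data) may legally return before the buffer is full, unlike the commented-out readFully. A length-prefixed exchange over the raw streams (never PrintWriter/BufferedReader, which are text-oriented and will corrupt ciphertext bytes) would look like:

        // server: prefix with the length of the bytes actually sent
        DataOutputStream dataOut = new DataOutputStream(connectionFromClient.getOutputStream());
        dataOut.writeInt(encrypted.length);   // ciphertext length, not plaintext length
        dataOut.write(encrypted);
        dataOut.flush();

        // client: allocate exactly that many bytes and block until all arrive
        DataInputStream dis = new DataInputStream(connectionToServer.getInputStream());
        byte[] data = new byte[dis.readInt()];
        dis.readFully(data);                  // read() may stop short; readFully won't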


  • PROBLEM: PHP strip_tags & multi-dimensional array form parameter

    - by Tunji Gbadamosi
    I'm having problems stripping the tags from the textual inputs retrieved from my form so as to do something with them in checkout.php. The input is stored in a multi-dimensional array. Here's my form: echo '<form name="choose" action="checkout.php" method="post" onsubmit="return validate_second_form(this);">'; echo '<input type="hidden" name="hidden_value" value="'.$no_guests.'" />'; if($no_guests >= 1){ echo '<div class="volunteer">'; echo '<fieldset>'; echo '<legend>Volunteer:</legend>'; echo '<label>Table:</label>'; echo '<select name="volunteer_table">'; foreach($tables as $t){ echo '<option>'.$t.'</option>'; } echo '</select><br><br>'; echo '<label>Seat number:</label>'; echo '<select name="volunteer_seat">'; foreach($seats as $seat){ echo '<option>'.$seat.'</option>'; } echo '</select><br><br>'; //echo '<br>'; echo '</fieldset>'; echo '</div>'; for($i=0;$i<$no_guests;$i++){ $guest = "guest_".$i; echo '<div class="'.$guest.'">'; echo '<fieldset>'; echo '<legend>Guest '.$i.':</legend>'; echo '<label>First Name:</label>'; echo '<input type="text" name="guest['.$i.']['.$first_name.']" id="fn'.$i.'">'; echo '<label>Surname:</label>'; echo '<input type="text" name="guest['.$i.']['.$surname.']" id="surname'.$i.'"><br><br>'; echo '<label>Date of Birth:</label> <br>'; echo '<label>Day:</label>'; echo '<select name="guest['.$i.'][dob_day]">'; for($j=1;$j<32;$j++){ echo"<option value='$j'>$j</option>"; } echo '</select>'; echo '<label>Month:</label>'; echo '<select name="guest['.$i.'][dob_month]">'; for($j=0;$j<sizeof($month);$j++){ $value = ($j + 1); echo"<option value='$value'>$month[$j]</option>"; } echo '</select>'; echo '<label>Year:</label>'; echo '<select name="guest['.$i.'][dob_year]">'; for($j=1900;$j<$year_limit;$j++){ echo"<option value='$j'>$j</option>"; } echo '</select> <br><br>'; echo '<label>Sex:</label>'; echo '<select name="guest['.$i.']['.$sex.']">'; echo '<option>Female</option>'; echo '<option>Male</option>'; echo '</select><br><br>'; echo '<label>Table:</label>'; echo '<select name="guest['.$i.']['.$table.']">'; foreach($tables as $t){ echo '<option>'.$t.'</option>'; } echo '</select><br><br>'; echo '<label>Seat number:</label>'; echo '<select name="guest['.$i.']['.$seat_no.']">'; foreach($seats as $seat){ echo '<option>'.$seat.'</option>'; } echo '</select><br><br>'; //echo '<br>'; echo '</fieldset>'; echo '</div>'; } } else{ echo '<div id="volunteer">'; echo '<fieldset>'; echo '<legend>Volunteer:</legend>'; echo '<label>Table:</label>'; echo '<select name="volunteer['.$table.']">'; foreach($tables as $t){ echo '<option>'.$t.'</option>'; } echo '</select><br><br>'; echo '<label>Seat number:</label>'; echo '<select name="volunteer['.$seat_no.']">'; foreach($seats as $seat){ echo '<option>'.$seat.'</option>'; } echo '</select><br><br>'; //echo '<br>'; echo '</fieldset>'; echo '</div>'; } echo '<input type="submit" value="Submit form">'; echo '</form>'; here's checkout.php: if(isset($_POST['guest'])){ foreach($_POST['guest'] as $guest){ $guest['first_name'] = strip_tags($guest['first_name']); $guest['surname'] = strip_tags($guest['surname']); } //$_SESSION['guest'] = $guests; }
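
    One detail worth noting, as a minimal sketch: foreach iterates over a copy of each element, so the strip_tags results in the checkout.php loop above are assigned to a throwaway local and never written back. Iterating by reference (or by key) keeps the change:

        foreach ($_POST['guest'] as &$guest) {
            $guest['first_name'] = strip_tags($guest['first_name']);
            $guest['surname']    = strip_tags($guest['surname']);
        }
        unset($guest); // break the lingering reference after the loop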


  • Overloading '-' for array subtraction

    - by Chris Wilson
    I am attempting to subtract two int arrays, stored as class members, using an overloaded - operator, but I'm getting some peculiar output when I run tests. The overload definition is Number& Number :: operator-(const Number& NumberObject) { for (int count = 0; count < NumberSize; count ++) { Value[count] -= NumberObject.Value[count]; } return *this; } Whenever I run tests on this, NumberObject.Value[count] always seems to be returning a zero value. Can anyone see where I'm going wrong with this? The line in main() where this subtraction is being carried out is cout << "The difference is: " << ArrayOfNumbers[0] - ArrayOfNumbers[1] << endl; ArrayOfNumbers contains two Number objects. The class declaration is #include <iostream> using namespace std; class Number { private: int Value[50]; int NumberSize; public: Number(); // Default constructor Number(const Number&); // Copy constructor Number(int, int); // Non-default constructor void SetMemberValues(int, int); // Manually set member values int GetNumberSize() const; // Return NumberSize member int GetValue() const; // Return Value[] member Number& operator-=(const Number&); }; inline Number operator-(Number Lhs, const Number& Rhs); ostream& operator<<(ostream&, const Number&); The full class definition is as follows: #include <iostream> #include "../headers/number.h" using namespace std; // Default constructor Number :: Number() {} // Copy constructor Number :: Number(const Number& NumberObject) { int Temp[NumberSize]; NumberSize = NumberObject.GetNumberSize(); for (int count = 0; count < NumberObject.GetNumberSize(); count ++) { Temp[count] = Value[count] - NumberObject.GetValue(); } } // Manually set member values void Number :: SetMemberValues(int NewNumberValue, int NewNumberSize) { NumberSize = NewNumberSize; for (int count = NewNumberSize - 1; count >= 0; count --) { Value[count] = NewNumberValue % 10; NewNumberValue = NewNumberValue / 10; } } // Non-default constructor Number :: Number(int NumberValue, int NewNumberSize) { NumberSize = NewNumberSize; for (int count = NewNumberSize - 1; count >= 0; count --) { Value[count] = NumberValue % 10; NumberValue = NumberValue / 10; } } // Return the NumberSize member int Number :: GetNumberSize() const { return NumberSize; } // Return the Value[] member int Number :: GetValue() const { int ResultSoFar; for (int count2 = 0; count2 < NumberSize; count2 ++) { ResultSoFar = ResultSoFar * 10 + Value[count2]; } return ResultSoFar; } Number& operator-=(const Number& Rhs) { for (int count = 0; count < NumberSize; count ++) { Value[count] -= Rhs.Value[count]; } return *this; } inline Number operator-(Number Lhs, const Number& Rhs) { Lhs -= Rhs; return Lhs; } // Overloaded output operator ostream& operator<<(ostream& OutputStream, const Number& NumberObject) { OutputStream << NumberObject.GetValue(); return OutputStream; }
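
    A few observable causes for the odd output, with a hedged sketch of a conventional subtraction rather than a drop-in patch: the first operator- shown subtracts in place and returns *this by reference, so printing a - b also mutates ArrayOfNumbers[0]; the copy constructor fills a local Temp but never writes the new object's Value; GetValue() reads ResultSoFar before initializing it; and the standalone operator-= definition is missing its Number:: qualifier. A non-mutating member version would operate on a copy:

        // Sketch only: subtraction leaves the left operand untouched and
        // returns the result by value.
        Number Number::operator-(const Number& rhs) const
        {
            Number result(*this);                 // requires a copy constructor that really copies Value
            for (int i = 0; i < NumberSize; ++i)
                result.Value[i] -= rhs.Value[i];  // element-wise subtraction
            return result;
        }

    The header's existing pair (member operator-= plus the inline free operator- that takes Lhs by value) already follows this pattern once the operator-= definition is qualified as Number& Number::operator-=(const Number& Rhs).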


  • How to kill an Android application using Android code?

    - by Natarajan M
    I am developing a small Android application in Eclipse. In that project I kill a running process in Android, and I get a Permission Denial error. How can I solve this problem in Android? This is my code: package com.example.nuts; import java.util.Iterator; import java.util.List; import android.app.Activity; import android.app.ActivityManager; import android.app.ActivityManager.RunningAppProcessInfo; import android.content.Context; import android.content.pm.PackageManager; import android.os.Bundle; import android.telephony.SmsManager; import android.widget.Toast; import android.*; public class killprocess extends Activity { SmsManager smsManager = SmsManager.getDefault(); Recivesms rms=new Recivesms(); String Number=""; int pid=0; String appname=""; protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); try { Number=Recivesms.senderNum; pid=Integer.parseInt(Recivesms.struid); appname=getAppName(pid); Toast.makeText(getBaseContext(),"App Name is "+appname, Toast.LENGTH_LONG).show(); ActivityManager am = (ActivityManager) getSystemService(Activity.ACTIVITY_SERVICE); List<RunningAppProcessInfo> processes = am.getRunningAppProcesses(); if (processes != null){ for (int i=0; i<processes.size(); i++){ RunningAppProcessInfo temp = processes.get(i); String pName = temp.processName; if (pName.equals(appname)) { Toast.makeText(getBaseContext(),"App Name is matched "+appname+" "+pName, Toast.LENGTH_LONG).show(); int pid1 = android.os.Process.getUidForName(pName); //android.os.Process.killProcess(pid1); am.killBackgroundProcesses(pName); Toast.makeText(getBaseContext(), "Killed successfully....", Toast.LENGTH_LONG).show(); } } } smsManager.sendTextMessage(Number, null,"Your process Successfully killed..." , null,null); }catch(Exception e) { Toast.makeText(getBaseContext(),e.getMessage(), Toast.LENGTH_LONG).show(); } } private String getAppName(int Pid) { String processName = ""; ActivityManager am = (ActivityManager)this.getSystemService(ACTIVITY_SERVICE); List l = am.getRunningAppProcesses(); Iterator i = l.iterator(); PackageManager pm = this.getPackageManager(); while(i.hasNext()) { ActivityManager.RunningAppProcessInfo info = (ActivityManager.RunningAppProcessInfo)(i.next()); try { if(info.pid == Pid) { CharSequence c = pm.getApplicationLabel(pm.getApplicationInfo(info.processName, PackageManager.GET_META_DATA)); //Log.d("Process", "Id: "+ info.pid +" ProcessName: "+ info.processName +" Label: "+c.toString()); //processName = c.toString(); processName = info.processName; } } catch(Exception e) { //Log.d("Process", "Error>> :"+ e.toString()); } } return processName; } } After executing the code, I get the following error: Permission Denial: killBackgroundProcess() from pid=894, uid=10052 requires android.permission.KILL_BACKGROUND_PROCESSES I also put the following line in the manifest file: <uses-permission android:name="android.permission.KILL_BACKGROUND_PROCESS" /> Can anybody help with how to solve this problem? Thank you.
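
    The error message itself points at the fix: the manifest line above declares android.permission.KILL_BACKGROUND_PROCESS (singular), while ActivityManager.killBackgroundProcesses() requires the plural permission name, exactly as printed in the denial:

        <uses-permission android:name="android.permission.KILL_BACKGROUND_PROCESSES" />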


  • Generate random number histogram using Java

    - by Chewart
    Histogram -------------------------------------------------------- 1 ****(4) 2 ******(6) 3 ***********(11) 4 *****************(17) 5 **************************(26) 6 *************************(25) 7 *******(7) 8 ***(3) 9 (0) 10 *(1) -------------------------------------------------------- Basically, the above is what my program needs to do. I'm missing something somewhere; any help would be great :) import java.util.Random; public class Histogram { /*This is a program to generate random number histogram between 1 and 100 and generate a table */ public static void main(String args[]) { int [] randarray = new int [80]; Random random = new Random(); System.out.println("Histogram"); System.out.println("---------"); int i ; for ( i = 0; i<randarray.length;i++) { int temp = random.nextInt(100); //random numbers up to number value 100 randarray[i] = temp; } int [] histo = new int [10]; for ( i = 0; i<10; i++) { /* %03d\t, this generates the random numbers to three decimal places so the numbers are generated with a full number or number with 00's or one 0*/ if (randarray[i] <= 10) { histo[i] = histo[i] + 1; //System.out.println("*"); } else if ( randarray[i] <= 20){ histo[i] = histo[i] + 1; } else if (randarray[i] <= 30){ histo[i] = histo[i] + 1; } else if ( randarray[i] <= 40){ histo[i] = histo[i] + 1; } else if (randarray[i] <= 50){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=60){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=70){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=80){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=90){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=100){ histo[i] = histo[i] + 1; } switch (randarray[i]) { case 1: System.out.print("0-10 | "); break; case 2: System.out.print("11-20 | "); break; case 3: System.out.print("21-30 | "); break; case 4: System.out.print("31-40 | "); break; case 5: System.out.print("41-50 | "); break; case 6: System.out.print("51-60 | "); break; case 7: System.out.print("61-70 | "); break; case 8: System.out.print("71-80 | "); break; case 9: System.out.print("81-90 | "); break; case 10: System.out.print("91-100 | "); } for (int i = 0; i < 80; i++) { randomNumber = random.nextInt(100) index = (randomNumber - 1) / 2; histo[index]++; } } } }
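
    For comparison, a minimal sketch of the counting step (not the poster's code): every branch of the if/else chain above increments histo[i], the loop index, rather than the bucket the value falls into, and the final loop redeclares i and uses undeclared variables. Arithmetic bucketing avoids the chain entirely:

        int[] histo = new int[10];
        java.util.Random random = new java.util.Random();
        for (int i = 0; i < 80; i++) {
            int value = random.nextInt(100) + 1;  // 1..100 inclusive
            histo[(value - 1) / 10]++;            // 1-10 -> bucket 0, 11-20 -> bucket 1, ...
        }
        for (int b = 0; b < 10; b++) {
            StringBuilder stars = new StringBuilder();
            for (int s = 0; s < histo[b]; s++) stars.append('*');
            System.out.println((b + 1) + " " + stars + "(" + histo[b] + ")");
        }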


  • Can't figure out this file behavior in iOS 4.2

    - by Don Jones
    I'm seeing odd file behavior. Here's the thing: I'm using the phone camera to snap a picture, and internally generating a thumbnail. I'm saving those as temp files in the Documents directory. Here's the complete code: - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { UIImage *photo = [info objectForKey:@"UIImagePickerControllerOriginalImage"]; photo = [self scaleAndRotateImage:photo]; // save photo NSString *docsDir = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"]; NSLog(@"Writing to %@",docsDir); // save photo file NSLog(@"Saving photo as %@",[NSString stringWithFormat:@"%@/temp_photo.png",docsDir]); [UIImagePNGRepresentation(photo) writeToFile:[NSString stringWithFormat:@"%@/temp_photo.png",docsDir] atomically:YES]; // make and save thumbnail NSLog(@"Saving thumbnail as %@",[NSString stringWithFormat:@"%@/temp_thumb.png",docsDir]); UIImage *thumb = [self makeThumbnail:photo]; [UIImagePNGRepresentation(thumb) writeToFile:[NSString stringWithFormat:@"%@/temp_thumb.png",docsDir] atomically:YES]; NSFileManager *manager = [NSFileManager defaultManager]; NSError *error; NSArray *files = [manager contentsOfDirectoryAtPath:docsDir error:&error]; NSLog(@"\n\nThere are %d files in the documents directory %@",(files.count),docsDir); for (int i = 0; i < files.count; i++) { NSString *filename = [files objectAtIndex:i]; NSLog(@"Seeing filename %@",filename); } // done //[photo release]; //[thumb release]; [self dismissModalViewControllerAnimated:NO]; [delegate textInputFinished]; } As you can see, I've put quite a bit of logging in here in an attempt to figure out my problem (which is coming up). The log output to this point is: Writing to /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents Saving photo as /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/temp_photo.png Saving thumbnail as /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/temp_thumb.png There are 3 files in the documents directory /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents Seeing filename temp_photo.png Seeing filename temp_text.txt Seeing filename temp_thumb.png This is absolutely as expected. I clearly have three files on the device. Here's the very next code that runs - the code that receives the textInputFinished message: - (void)textInputFinished { NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *docsDir = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"]; NSError *error; // get next filename NSString *filename; int i = 0; do { i++; filename = [NSString stringWithFormat:@"%@/reminder_%d",docsDir,i]; NSLog(@"Trying filename %@",filename); } while ([fileManager fileExistsAtPath:[filename stringByAppendingString:@".txt"]]); NSLog(@"New base filename is %@",filename); NSArray *files = [fileManager contentsOfDirectoryAtPath:docsDir error:&error]; NSLog(@"There are %d files in the directory %@",(files.count),docsDir); This is testing to get a new, non-temp, not-in-use filename. It does that - but then it says there aren't any files in the documents folder! Here's the logged output: Trying filename /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/reminder_1 New base filename is /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/reminder_1 There are 0 files in the directory /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents What the heck?
Where did the three files go?
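
    One hedged observation rather than a confirmed answer: contentsOfDirectoryAtPath:error: returns nil on failure, and messaging nil in Objective-C yields 0, so files.count would print 0 even if the three files are still there; the NSError in the second snippet is also never initialized or checked. A small check makes any failure visible:

        NSError *error = nil;
        NSArray *files = [fileManager contentsOfDirectoryAtPath:docsDir error:&error];
        if (files == nil) {
            NSLog(@"Listing failed: %@", error);  // inspect this instead of trusting count == 0
        }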


  • Which workaround to use for the following SQL deadlock?

    - by Marko
    I found a SQL deadlock scenario in my application during concurrency. I believe that the two statements that cause the deadlock are (note - I'm using LINQ2SQL and DataContext.ExecuteCommand(), that's where this.studioId.ToString() comes into play): exec sp_executesql N'INSERT INTO HQ.dbo.SynchronizingRows ([StudioId], [UpdatedRowId]) SELECT @p0, [t0].[Id] FROM [dbo].[UpdatedRows] AS [t0] WHERE NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [dbo].[ReceivedUpdatedRows] AS [t1] WHERE ([t1].[StudioId] = @p0) AND ([t1].[UpdatedRowId] = [t0].[Id]) ))',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "'; and exec sp_executesql N'INSERT INTO HQ.dbo.ReceivedUpdatedRows ([UpdatedRowId], [StudioId], [ReceiveDateTime]) SELECT [t0].[UpdatedRowId], @p0, GETDATE() FROM [dbo].[SynchronizingRows] AS [t0] WHERE ([t0].[StudioId] = @p0)',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "'; The basic logic of my (client-server) application is this: Every time someone inserts or updates a row on the server side, I also insert a row into the table UpdatedRows, specifying the RowId of the modified row. When a client tries to synchronize data, it first copies all of the rows in the UpdatedRows table that don't have a reference row for the specific client in the table ReceivedUpdatedRows, to the table SynchronizingRows (the first statement taking part in the deadlock). Afterwards, during the synchronization I look for modified rows via lookup of the SynchronizingRows table. This step is required, because otherwise if someone inserts new rows or modifies rows on the server side during synchronization, I will miss them and won't get them during the next synchronization (the explanation scenario is too long to write here...). Once synchronization is complete, I insert rows into the ReceivedUpdatedRows table specifying that this client has received the UpdatedRows contained in the SynchronizingRows table (the second statement taking part in the deadlock). Finally, I delete all rows from the SynchronizingRows table that belong to the current client. The way I see it, the deadlock is occurring on tables SynchronizingRows (abbreviation SR) and ReceivedUpdatedRows (abbreviation RUR) during steps 2 and 3 (one client is in step 2 and is inserting into SR and selecting from RUR; while another client is in step 3, inserting into RUR and selecting from SR). I googled a bit about SQL deadlocks and came to the conclusion that I have three options. In order to make a decision, I need more input on each option/workaround: Workaround 1: The first advice given on the web about SQL deadlocks - restructure tables/queries so that deadlocks don't happen in the first place. The only problem with this is that with my IQ I don't see a way to do the synchronization logic any differently. If someone wishes to delve deeper into my current synchronization logic, how and why it is set up the way it is, I'll post a link for the explanation. Perhaps, with the help of someone smarter than me, it's possible to create a logic that is deadlock-free. Workaround 2: The second most common advice seems to be the use of the WITH(NOLOCK) hint. The problem with this is that NOLOCK might miss or duplicate some rows. Duplication is not a problem, but missing rows is catastrophic! Another option is the WITH(READPAST) hint. On the face of it, this seems to be a perfect solution. I really don't care about rows that other clients are inserting/modifying, because each row belongs only to a specific client, so I may very well skip locked rows.
    But the MSDN documentation makes me a bit worried - "When READPAST is specified, both row-level and page-level locks are skipped". As I said, row-level locks would not be a problem, but page-level locks may very well be, since a page might contain rows that belong to multiple clients (including the current one). While there are lots of blog posts specifically mentioning that NOLOCK might miss rows, there seem to be none about READPAST (never) missing rows. This makes me skeptical and nervous about implementing it, since there is no easy way to test it (implementing it would be a piece of cake: just pop WITH(READPAST) into both statements' SELECT clauses and the job is done). Can someone confirm whether the READPAST hint can miss rows? Workaround 3: The final option is to use ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT. This seems to be the only option guaranteed to work 100% - at least I can't find any information that contradicts it. But it is a little trickier to set up (I don't care much about the performance hit), because I'm using LINQ. Off the top of my head, I probably need to manually open a SQL connection and pass it to the LINQ2SQL DataContext, etc... I haven't looked into the specifics very deeply. Mostly I would prefer option 2, if someone could only reassure me that READPAST will never miss rows concerning the current client (as I said before, each client has, and only ever deals with, its own set of rows). Otherwise I'll likely have to implement option 3, since option 1 is probably impossible... I'll post the table definitions for the three tables as well, just in case: CREATE TABLE [dbo].[UpdatedRows]( [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED, [RowId] [uniqueidentifier] NOT NULL, [UpdateDateTime] [datetime] NOT NULL, ) ON [PRIMARY] GO CREATE NONCLUSTERED INDEX IX_RowId ON dbo.UpdatedRows ([RowId] ASC) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO CREATE TABLE [dbo].[ReceivedUpdatedRows]( [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY NONCLUSTERED, [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]), [StudioId] [uniqueidentifier] NOT NULL REFERENCES, [ReceiveDateTime] [datetime] NOT NULL, ) ON [PRIMARY] GO CREATE CLUSTERED INDEX IX_Studios ON dbo.ReceivedUpdatedRows ([StudioId] ASC) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO CREATE TABLE [dbo].[SynchronizingRows]( [StudioId] [uniqueidentifier] NOT NULL [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]) PRIMARY KEY CLUSTERED ([StudioId], [UpdatedRowId]) ) ON [PRIMARY] GO PS! Studio = Client. PS2! I just noticed that the index definitions have ALLOW_PAGE_LOCKS=ON. If I turned it off, would that make any difference to READPAST? Are there any downsides to turning it off?
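
    For reference, a minimal sketch of what enabling workaround 3 looks like (HQ is the database name visible in the queries above; both statements need the database to be free of other active connections while they run):

        ALTER DATABASE HQ SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE HQ SET READ_COMMITTED_SNAPSHOT ON;

    With READ_COMMITTED_SNAPSHOT on, the existing LINQ2SQL code keeps running at its default isolation level but reads row versions instead of taking shared locks, which removes the reader/writer half of the deadlock.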


  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables: Persons and Categories. Each person gets one category, seems easy enough. | Persons | | Categories | |--------------| |--------------| | Id (PK) | | Id (PK) | | Firstname | | CategoryName | | Lastname | | CreatedTime | | CategoryId | | UpdatedTime | | CreatedTime | | Deleted | | UpdatedTime | | Deleted | The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes: Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity. IEntityManager: a generic interface (type parameter as Entity) that defines methods like Load, Insert, Update, etc. NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc. Now, the Person and Category classes are straightforward, they just define the attributes of the tables of course (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my webpage, I will also need the name of this category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null: Namespace Database Public Class Person Inherits DatabaseFramework.Entity Public Overridable Property Firstname As String Public Overridable Property Lastname As String Public Overridable Property Category As Category Public Overridable ReadOnly Property CategoryName As String Get Return If(Me.Category Is Nothing, _ String.Empty, _ Me.Category.CategoryName) End Get End Property End Class End Namespace I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread: <id name="Id" column="Id" type="int" unsaved-value="0"> <generator class="identity" /> </id> <property name="CreatedTime" type="DateTime" not-null="true" /> <property name="UpdatedTime" type="DateTime" not-null="true" /> <property name="Deleted" type="Boolean" not-null="true" /> <property name="Firstname" type="String" /> <property name="Lastname" type="String" /> <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" /> (I can't get it to show the root node, this forum hides it, I don't know how to escape the html-like tags...) The final important detail is the Load method of the NHibernateEntityManager implementation. (This is in C# as it's in a different project, sorry about that). I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use that to fill an EntityCollection(Of TEntity) which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T). 
    public virtual EntityCollection<TEntity> Load() { using (ISession session = this.GetSession()) { var entities = session .CreateCriteria(typeof(TEntity)) .Add(Expression.Eq("Deleted", false)) .List<TEntity>(); return new EntityCollection<TEntity>(entities); } } Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values (including the Category property, and thus, the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this: Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception: 'Could not initialize proxy - the owning Session was closed.' The exception occurs on the DataBind method call here: public virtual void LoadGrid() { if (this.Grid == null) return; this.Grid.DataSource = this.Manager.Load(); this.Grid.DataBind(); } Well, of course the session is closed, I closed it via the using block. Isn't that the correct approach, should I keep the session open? And for how long? Can I close it after the DataBind method has been run? In each case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is now only getting the Category when it is required (e.g., when the GridView wants to read the CategoryName, which wants to read the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
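
    A hedged sketch of the usual fix, not a definitive one: fetch the association while the session is still open, so nothing lazy-loads during DataBind. Here "Category" is the association on the Person entity; a generic repository would need the fetch paths passed in, and the mapping-level alternative is lazy="false" on the many-to-one.

        using (ISession session = this.GetSession())
        {
            var entities = session
                .CreateCriteria(typeof(TEntity))
                .Add(Expression.Eq("Deleted", false))
                .SetFetchMode("Category", FetchMode.Eager) // join-fetch the association up front
                .List<TEntity>();
            return new EntityCollection<TEntity>(entities);
        }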


  • Simple 'database' in C++

    - by DevAno1
    Hello. My task is to create a pseudo-database in C++. There are 3 tables given that store a name (char*), age (int), and sex (bool). Write a program allowing the user to: add new data to the tables; show all records; and sort the tables by name (increasing/decreasing), age (increasing/decreasing), or sex. Using function templates is a must. Also, the size of the arrays must be variable, depending on the number of records. I have some code, but there are still problems with it. Here's what I have. First, the function tabSize() for returning the size of an array; currently it returns the size of a pointer, I guess: #include <iostream> using namespace std; template<typename TYPE> int tabSize(TYPE *T) { int size = 0; size = sizeof(T) / sizeof(T[0]); return size; } How do I make it return the size of the array, not of the pointer? Next, the most important one: add() for adding new elements. Inside, I first get the size of the array (but since it returns the size of a pointer and not the array, it's of no use right now :/). Then I think I must check whether the TYPE of the data is char. Or am I wrong? // add(newElement, table) template<typename TYPE> TYPE add(TYPE L, TYPE *T) { int s = tabSize(T); //here check if TYPE = char. If yes, get the length of the new name int len = 0; while (L[len] != '\0') { len++; } //current length of table int tabLen = 0; while (T[tabLen] != '\0') { tabLen++; } //if TYPE is char //if current length of table + length of new element exceeds table size create new table if(len + tabLen > s) { int newLen = len + tabLen; TYPE newTab = new [newLen]; for(int j=0; j < newLen; j++ ){ if(j == tabLen -1){ for(int k = 0; k < len; k++){ newTab[k] = } } else { newTab[j] = T[j]; } } } //else check if tabLen + 1 is greater than s. If yes enlarge table by 1. } Is my thinking correct here? The last function, show(), is correct I guess: template<typename TYPE> TYPE show(TYPE *L) { int len = 0; while (L[len] == '\0') { len++; } for(int i=0; i<len; i++) { cout << L[i] << endl; } } And my problem with sort() is as follows: how can I control whether sorting is increasing or decreasing? I'm using bubble sort here. template<typename TYPE> TYPE sort(TYPE *L, int sort) { int s = tabSize(L); int len = 0; while (L[len] == '\0') { len++; } //add control increasing/decreasing sort int i,j; for(i=0;i<len;i++) { for(j=0;j<i;j++) { if(L[i]>L[j]) { int temp=L[i]; L[i]=L[j]; L[j]=temp; } } } } And the main function to run it: int main() { int sort=0; //0 increasing, 1 decreasing char * name[100]; int age[10]; bool sex[10]; char c[] = "Tom"; name[0] = "John"; name[1] = "Mike"; cout << add(c, name) << endl; system("pause"); return 0; }
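
    A minimal sketch answering the first question: a TYPE* parameter has already decayed to a pointer, so sizeof can only measure the pointer. Taking the array by reference lets the compiler deduce the element count as a template parameter:

        template <typename TYPE, int N>
        int tabSize(TYPE (&)[N])  // binds only to real arrays, never to pointers
        {
            return N;
        }

    Note this only works for arrays whose size is known at compile time; once the data grows dynamically (the new[] path in add()), the element count has to be tracked in a separate variable.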


  • Alphabetically Ordering an array of words

    - by Genesis
    I'm studying C on my own in preparation for my upcoming semester of school, and was wondering what I'm doing wrong with my code so far. If things look weird, it is because this is part of a much bigger grab bag of sorting functions I'm creating to get a sense of how to sort numbers, letters, arrays, and the like! I'm basically having some trouble with the manipulation of strings in C at the moment. Also, I'm quite limited in my knowledge of C! My main consists of this: #include <stdio.h> #include <stdio.h> #include <stdlib.h> int numbers[10]; int size; int main(void){ setvbuf(stdout,NULL,_IONBF,0); //This is magical code that allows me to input. int wordNumber; int lengthOfWord = 50; printf("How many words do you want to enter: "); scanf("%i", &wordNumber); printf("%i\n",wordNumber); char words[wordNumber][lengthOfWord]; printf("Enter %i words:",wordNumber); int i; for(i=0;i<wordNumber+1;i++){ //+1 is because my words[0] is blank. fgets(&words[i], 50, stdin); } for(i=1;i<wordNumber+1;i++){ // Same as the above comment! printf("%s", words[i]); //prints my words out! } alphabetize(words,wordNumber); //I want to sort these arrays with this function. } My sorting "method" I am trying to construct is below. This function is seriously flawed, but I thought I'd keep it all to show you where my mind was headed when writing this. void alphabetize(char a[][],int size){ // This wont fly. size = size+1; int wordNumber; int lengthOfWord; char sortedWords[wordNumber][lengthOfWord]; //In effort for the for loop int i; int j; for(i=1;i<size;i++){ //My effort to copy over this array for manipulation for(j=1;j<size;j++){ sortedWords[i][j] = a[i][j]; } } //This should be kinda what I want when ordering words alphabetically, right? for(i=1;i<size;i++){ for(j=2;j<size;j++){ if(strcmp(sortedWords[i],sortedWords[j]) > 0){ char* temp = sortedWords[i]; sortedWords[i] = sortedWords[j]; sortedWords[j] = temp; } } } for(i=1;i<size;i++){ printf("%s, ",sortedWords[i]); } } I guess I also have another question... When I use fgets(), I get a blank word in the first spot of the array. I have had other issues recently trying to scanf() char[] in certain ways, specifically spacing my input word variables, which "magically" gets rid of the first null space before the character. An example of this is using scanf() to write "Hello" and getting " Hello" or " ""Hello"... I appreciate any thoughts on this; I've got all summer to study up, so this doesn't need to be answered with haste! Also, thank you Stack Overflow as a whole for being so helpful in the past. This may be my first post, but I have been a frequent visitor for the past couple of years, and it's been one of the best places for helpful advice/tips.
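
    Two hedged notes, not a full rewrite. The blank first entry is most likely the newline that scanf("%i", &wordNumber) leaves in stdin: the first fgets() consumes that leftover newline as an empty line, which is why reading wordNumber+1 words appears to "work". Consuming it (for example with getchar()) before the loop removes the phantom entry. For the sorting itself, the standard library already handles rows of a 2D char array:

        #include <stdlib.h>
        #include <string.h>

        /* qsort hands the comparator pointers to whole 50-byte rows; compare
           them as C strings (fgets leaves a trailing '\n' in each entry, which
           is harmless for ordering as long as every entry has one). */
        static int compareWords(const void *a, const void *b)
        {
            return strcmp((const char *)a, (const char *)b);
        }

        /* usage, with words and wordNumber as declared in the question: */
        qsort(words, wordNumber, sizeof words[0], compareWords);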


  • Object reference not set to an instance of an object- Linked List Example

    - by Zoro Roronoa
    I am seeing the following error: "Object reference not set to an instance of an object! Check to determine if the object is null before calling the method!" I'm new to C#, and I made a program for sorted linked lists. Here is the code where the error occurs: public void Insert(double data) { Link newLink = new Link(data); Link current = first; Link previous = null; if (first == null) { first = newLink; } else { while (data > current.DData && current != null) { previous = current; current = current.Next; } previous.Next = newLink; newLink.Next = current; } } It says that the current reference is null in while (data > current.DData && current != null), but I assigned it with current = first; Please help! The rest is the complete code of the program: class Link { double dData; Link next=null; public Link Next { get { return next; } set { next = value; } } public double DData { get { return dData; } set { dData = value; } } public Link(double dData) { this.dData = dData; } public void DisplayLink() { Console.WriteLine("Link : "+ dData); } } class SortedList { Link first; public SortedList() { first = null; } public bool IsEmpty() { return (this.first == null); } public void Insert(double data) { Link newLink = new Link(data); Link current = first; Link previous = null; if (first == null) { first = newLink; } else { while (data > current.DData && current != null) { previous = current; current = current.Next; } previous.Next = newLink; newLink.Next = current; } } public Link Remove() { Link temp = first; first = first.Next; return temp; } public void DisplayList() { Link current; current = first; Console.WriteLine("Display the List!"); while (current != null) { current.DisplayLink(); current = current.Next; } } } class SortedListApp { public void TestSortedList() { SortedList newList = new SortedList(); newList.Insert(20); newList.Insert(22); newList.Insert(100); newList.Insert(1000); newList.Insert(15); newList.Insert(11); newList.DisplayList(); newList.Remove(); newList.DisplayList(); } }
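
    A minimal sketch of the failing spot: && evaluates left to right, so data > current.DData dereferences current before the null test runs; once the loop walks off the end of the list (or the new value belongs before the first node, leaving previous null), the posted version throws. Reordering the test and handling the front insertion covers both cases:

        while (current != null && data > current.DData)  // null test must come first
        {
            previous = current;
            current = current.Next;
        }
        if (previous == null)        // new smallest value: insert at the front
        {
            newLink.Next = first;
            first = newLink;
        }
        else
        {
            previous.Next = newLink;
            newLink.Next = current;
        }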


  • How can I install a 32-bit Python on 64-bit Ubuntu

    - by moose
    I am using Ubuntu 10.10 (Linux pc07 2.6.35-27-generic #48-Ubuntu SMP Tue Feb 22 20:25:46 UTC 2011 x86_64 GNU/Linux) and the default python package (Python 2.6.6). I would like to install python-psyco to improve the performance of one of my scripts, but only python-psyco-doc is available for 64-bit. I tried a virtual machine, but the performance boost is much smaller on the virtual machine than on a "real" installed 32-bit Ubuntu. So my question is: how can I install a 32-bit Python with psyco on my 64-bit Ubuntu machine? edit: I've found this article and did this: Download "Python 2.7.1 bzipped source tarball" from http://python.org/download/ Go into the directory where you decompressed "Python 2.7.1" $ OPT=-m32 LDFLAGS=-m32 ./configure --prefix=/opt/pym32 $ make But I got this error: gcc -pthread -m32 -Xlinker -export-dynamic -o python \ Modules/python.o \ libpython2.7.a -lpthread -ldl -lutil -lm libpython2.7.a(posixmodule.o): In function `posix_tmpnam': /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7346: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp' libpython2.7.a(posixmodule.o): In function `posix_tempnam': /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7301: warning: the use of `tempnam' is dangerous, better use `mkstemp' Segmentation fault make: *** [sharedmods] Fehler 139 edit2: Now I've found http://indefinitestudies.org/2010/02/08/how-to-build-32-bit-python-on-ubuntu-9-10-x86_64/ and it seems like this worked: $ cd Python-2.7.1 $ CC="gcc -m32" LDFLAGS="-L/lib32 -L/usr/lib32 \ -Lpwd/lib32 -Wl,-rpath,/lib32 -Wl,-rpath,/usr/lib32" \ ./configure --prefix=/opt/pym32 $ make $ sudo make install But installing psyco didn't work: Download the latest snapshot: http://psyco.sourceforge.net/download.html Extract it and go into the folder $ python setup.py install This error appeared: PROCESSOR = 'ivm' running install running build running build_py running build_ext building 'psyco._psyco' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DALL_STATIC=1 -Ic/ivm -I/usr/include/python2.6 -c c/psyco.c -o build/temp.linux-x86_64-2.6/c/psyco.o In file included from c/psyco.c:1: c/psyco.h:9: fatal error: Python.h: Datei oder Verzeichnis nicht gefunden compilation terminated. error: command 'gcc' failed with exit status 1
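
    A hedged note on the psyco failure: the gcc line shows -I/usr/include/python2.6, i.e. the build is picking up the system Python 2.6, whose development headers (Python.h) are not installed, instead of the freshly built 32-bit 2.7 under /opt/pym32. Running setup.py with that interpreter points the build at its own headers (the directory name below is hypothetical, whatever the extracted snapshot folder is called):

        $ cd psyco-snapshot
        $ /opt/pym32/bin/python setup.py install

    Psyco only accelerates 32-bit x86 in any case, so it has to be compiled by the 32-bit interpreter; the PROCESSOR = 'ivm' line is also worth checking against psyco's documentation, since it suggests no supported processor was detected.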


  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed: Parallel.Invoke( () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); } ); Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let’s look at this simple, sequential implementation of quicksort: public static void QuickSort<T>(T[] array) where T : IComparable<T> { QuickSortInternal(array, 0, array.Length - 1); } private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T> { if (left >= right) { return; } SwapElements(array, left, (left + right) / 2); int last = left; for (int current = left + 1; current <= right; ++current) { if (array[current].CompareTo(array[left]) < 0) { ++last; SwapElements(array, last, current); } } SwapElements(array, left, last); QuickSortInternal(array, left, last - 1); QuickSortInternal(array, last + 1, right); } static void SwapElements<T>(T[] array, int i, int j) { T temp = array[i]; array[i] = array[j]; array[j] = temp; } Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to: // Code above is unchanged... SwapElements(array, left, last); Parallel.Invoke( () => QuickSortInternal(array, left, last - 1), () => QuickSortInternal(array, last + 1, right) ); } This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke. 
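
    A minimal sketch of that first approach (the threshold value is illustrative, not tuned):

        const int Threshold = 4096; // partitions smaller than this sort serially

        // ...inside QuickSortInternal, replacing the recursive calls:
        SwapElements(array, left, last);
        if (right - left < Threshold)
        {
            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }
        else
        {
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1),
                () => QuickSortInternal(array, last + 1, right));
        }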
The second approach is to track how “deep” in the “tree” we are currently at, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to: // Code above is unchanged... SwapElements(array, left, last); if (maxDepth < 1) { QuickSortInternal(array, left, last - 1, maxDepth); QuickSortInternal(array, last + 1, right, maxDepth); } else { --maxDepth; Parallel.Invoke( () => QuickSortInternal(array, left, last - 1, maxDepth), () => QuickSortInternal(array, last + 1, right, maxDepth)); } We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better.  On average, I get the following timings: Framework via Array.Sort: 7.3 seconds Serial Quicksort Implementation: 9.3 seconds Naive Parallel Implementation: 14 seconds Parallel Implementation Restricting Depth: 4.7 seconds Finally, we are now faster than the framework’s Array.Sort implementation.


  • Downloading a file over HTTP the SSIS way

    This post shows you how to download files from a web site whilst really making the most of the SSIS objects that are available. There is no task to do this, so we have to use the Script Task and some simple VB.NET or C# (if you have SQL Server 2008) code. Very often I see suggestions about how to use the .NET class System.Net.WebClient and of course this works; you can code pretty much anything you like in .NET. Here I’d just like to raise the profile of an alternative. This approach uses the HTTP Connection Manager, one of the stock connection managers, so you can use configurations and property expressions in the same way you would for all other connections. Settings like the security details that you would want to make configurable already are, but if you take the .NET route you have to write quite a lot of code to manage those values via package variables. Using the connection manager we get all of that flexibility for free. The screenshot below illustrates some of the options we have. Using the HttpClientConnection class makes for much simpler code as well. I have demonstrated two methods, DownloadFile which just downloads a file to disk, and DownloadData which downloads the file and retains it in memory. In each case we show a message box to note the completion of the download. You can download a sample package below, but first the code: Imports System Imports System.IO Imports System.Text Imports System.Windows.Forms Imports Microsoft.SqlServer.Dts.Runtime Public Class ScriptMain Public Sub Main() ' Get the unmanaged connection object, from the connection manager called "HTTP Connection Manager" Dim nativeObject As Object = Dts.Connections("HTTP Connection Manager").AcquireConnection(Nothing) ' Create a new HTTP client connection Dim connection As New HttpClientConnection(nativeObject) ' Download the file #1 ' Save the file from the connection manager to the local path specified Dim filename As String = "C:\Temp\Sample.txt" connection.DownloadFile(filename, True) ' Confirm file is there If File.Exists(filename) Then MessageBox.Show(String.Format("File {0} has been downloaded.", filename)) End If ' Download the file #2 ' Read the text file straight into memory Dim buffer As Byte() = connection.DownloadData() Dim data As String = Encoding.ASCII.GetString(buffer) ' Display the file contents MessageBox.Show(data) Dts.TaskResult = Dts.Results.Success End Sub End Class Sample Package HTTPDownload.dtsx (74KB)

    Read the article

  • Ubuntu CPU Fan at 2200 RPM and CPU top at 90°C

    - by T-Erra
    I have a problem with my CPU heat. I'm running Ubuntu 14.04 (64-bit) and I have issues with the cooling. I know it might be a hardware issue, but I've checked: the fan is running, and the "sensors" command shows a fan speed of 2200 RPM and a CPU temperature of 60°C while I'm not running any software. This seems really mysterious. However, if I start my IDE (Eclipse), Firefox and Chromium at the same time, the CPU temperature goes up to 75-90°C. I doubt this is normal for a system with 16 GB RAM, an i7 processor and an Intel water cooling system, and I never had issues like this before when I was running Ubuntu 12.04 or 13.04.

    Fan speed: at 60°C it's at 1300 RPM, and after starting Eclipse and Firefox it's at approximately 2200 RPM and between 75°C and 90°C, depending on how many windows and IDEs I've opened. If I use the "top" command, only a few processes like Xorg or Compiz take up to 10% CPU at most while I'm not running any software.

    I tried to upgrade the Linux kernel, but failed: after upgrading I wasn't able to boot anymore, so I removed the new kernel from the boot directory and updated my grub file to an old entry, which works fine now, but still with the temperature issue. My NVIDIA driver is also up to date, which resolved some issues I had before with the CPU load, so it can't be a problem with the graphics card.

    How can I find out where the problem is, or why my CPU reaches temperatures I should only see while playing games with high-end graphics? Has anyone had similar issues before?

    Read the article

  • Mount SMB / AFP 13.10

    - by Jeffery
    I cannot seem to get Ubuntu to mount a Mac share via SMB or AFP. I've tried the following...

    AFP:

        apt-get install afpfs-ng-utils
        mount_afp afp://user:password@localip/share /mnt/share

    Error given: "Could not connect, never got a reponse to getstatus, Connection timed out". Which is odd, as I can access the share just fine from a Mac.

    SMB:

        apt-get install cifs-utils
        nano /etc/fstab

    added the following line:

        //localip/share /mnt/share cifs username=user,password=pass,iocharset=utf8,sec=nltm 0 0

    then ran:

        mount -a

    Error given:

        root@Asrock:~# mount -a -vvv
        mount: fstab path: "/etc/fstab"
        mount: mtab path:  "/etc/mtab"
        mount: lock path:  "/etc/mtab~"
        mount: temp path:  "/etc/mtab.tmp"
        mount: UID:        0
        mount: eUID:       0
        mount: spec:  "//10.0.1.3/NAS"
        mount: node:  "/mnt/NAS"
        mount: types: "cifs"
        mount: opts:  "username=user,password=pass,iocharset=utf8,sec=nltm"
        mount: external mount: argv[0] = "/sbin/mount.cifs"
        mount: external mount: argv[1] = "//10.0.1.3/NAS"
        mount: external mount: argv[2] = "/mnt/NAS"
        mount: external mount: argv[3] = "-v"
        mount: external mount: argv[4] = "-o"
        mount: external mount: argv[5] = "rw,username=user,password=pass,iocharset=utf8,sec=nltm"
        mount.cifs kernel mount options: ip=10.0.1.3,unc=\\10.0.1.3\NAS,iocharset=utf8,sec=nltm,user=user,pass=*
        mount error(22): Invalid argument
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I don't really care which it uses, I just want it to work! Am I doing something wrong?

    Read the article

  • Fixing unbootable installation on LVM root from Desktop LiveCD

    - by intuited
    I just did an installation from the 10.10 Desktop LiveCD, making the root volume an LVM LV. Apparently this is not supported; I managed it by taking these steps before starting the GUI installer app:

    - installing the lvm2 package on the running system
    - creating an LVM-type partition on the system hard drive
    - creating a physical volume, a volume group and a root LV using the LVM tools. I also created a second LV for /var; this I don't think is relevant.
    - creating a filesystem (ext4) on each of the two LVs.

    After taking these steps, the GUI installer offered the two LVs as installation targets; I gladly accepted, also putting /boot on a primary partition separate from the LVM partition. Installation seemed to go smoothly, and I've verified that both the root and var volumes do contain acceptable-looking directory structures. However, booting fails; if I understood correctly what happened, I was dropped into a busybox running in the initrd filesystem. Although I haven't worked through the entirety of the grub2 docs yet, it looks like the entry that tries to boot my new system is correct:

        menuentry 'Ubuntu, with Linux 2.6.35-22-generic' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos3)'
            search --no-floppy --fs-uuid --set $UUID_OF_BOOT_FILESYSTEM
            linux /vmlinuz-2.6.35-22-generic root=/dev/mapper/$LVM_VOLUME_GROUP-root ro quiet splash
            initrd /initrd.img-2.6.35-22-generic
        }

    Note that $VARS are replaced in the actual grub.cfg with their corresponding values. I rebooted back into the livecd and have unpacked the initrd image into a temp directory. It looks like the initrd image lacks LVM functionality. For example, if I'm reading /usr/share/initramfs-tools/hooks/lvm2 (installed with lvm2 on the livecd-booted system, not present on the installed one) correctly, an lvm executable should be situated in /sbin; that is not the case. What's the best way to remedy this situation? I realize that it would be easier to just use the alternate install CD, which apparently supports LVM, but I don't want to wait for it to download and then have to reinstall.

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but... Actually, having implemented it many times, I belong to the latter group. Here is how.

    Caveat

    Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.

    Assumptions

    The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts. I don't mention them here.

    Preparation

    It goes without saying that you should always do a proper backup before you change anything on your production environment. By proper backup, I also mean a tested and verified restore process. If you never dared to test it before, do it now. It pays off.

    Requirements

    For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.

    Installation vs Configuration

    With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and Configuration of the instances (the runtime). This is really great, as you can install all the software once and create new instances when needed. In the following we adhere to this scheme: install the software first, then configure the instances.

    Installation Steps

    The Oracle documentation contains all the necessary steps for the installation of all pieces of software, but some hints help to avoid traps and pitfalls.

    Step 1: The Database

    Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, install at least an Oracle 10.2.0.4 version. This database can be on a different host.

    Step 2: The Repository Creation Utility

    The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.

    Step 3: The Foundation

    With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as the foundation for all OFM 11g installations. We therefore install it first. Depending on your operating system, it is possible that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.

    Step 3a: The JDK

    To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK. Both are available for a number of platforms. If you are lucky and both are available for your platform, install each in a separate directory (and not in one of your ORACLE_HOMEs). You can then use whichever you like.

    Step 3b: Install WLS for OID and OAM

    With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the Java version first. It should be 1.6.0_18 or later, and not the GCC one (some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.

    Step 4: Install OID

    Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and select the install-only option. This will install the software only and does not configure the instance. Use the IAM_HOME as the target directory.

    Step 5: Install SOA Suite

    The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries; configuration will be done later.

    Step 6: Install OAM

    Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.

    Step 7: Backup the Installation

    At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good when you need to go back to this point.

    Step 8: Configure OID

    The software is installed and now we need instances to run it. This process is called configuration. For OID, use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter any issues, check the Oracle Support site for help. This configuration will also start the OID instance.

    Step 9: Install the Oracle AS SSO Schema

    Before we install the Forms software we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the procedure. As a precaution, make a backup of the OID instance before you start. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will succeed with a working solution.

    Step 10: Configure OAM

    Reached this step? Great. You are ready to create an OAM instance. Use $IAM_HOME/iam/common/bin/config.sh for this. This will open the WLS Domain Creation Wizard and ask for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance.

    Step 11: Install WLS for Forms 11g

    It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages, so we do another WLS installation in another ORACLE_HOME. The same considerations as in Step 3b apply. We call this one FORMS_HOME.

    Step 12: Install Forms

    In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step. Configuration starts with the next step.

    Step 13: Configure Forms

    To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and start them automatically.

    Step 14: Set up your Forms SSO Environment

    Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it.

    Step 15: Migrate the SSO Repository

    Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua, ua.bat or ua.cmd) at the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/.

    Note: At the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases.

    Step 16: Change your OHS sso.conf and shut down OC4J_SECURITY

    In Step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Next we change moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms using the same Forms links. OAM should now kick in and show the login dialog asking for your user credentials. Done. Your Forms environment is now successfully integrated with OAM 11g. Enjoy.

    What's Next?

    This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on.

    References

    Nearly everything is documented. Use the documentation!

    Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
    Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14
    Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
    Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10

    Read the article

  • How to stop Cairo Dock minimizing Conky on Show Desktop?

    - by César
    Every time I use the Cairo Dock Show Desktop add-on, Conky minimizes. I've read about the own_window_type override option in .conkyrc and it seems to work for some people, but it doesn't work for me: Conky won't show up at all if I use this option (it is currently set to own_window_type normal). Any suggestions?

    .conkyrc:

        # Conky settings #
        background no
        update_interval 1
        cpu_avg_samples 2
        net_avg_samples 2
        override_utf8_locale yes
        double_buffer yes
        no_buffers yes
        text_buffer_size 2048
        #imlib_cache_size 0
        temperature_unit fahrenheit

        # Window specifications #
        own_window yes
        own_window_type normal
        own_window_transparent yes
        own_window_hints undecorate,sticky,skip_taskbar,skip_pager,below
        border_inner_margin 0
        border_outer_margin 0
        minimum_size 200 250
        maximum_width 200
        alignment tr
        gap_x 35
        gap_y 55

        # Graphics settings #
        draw_shades no
        draw_outline no
        draw_borders no
        draw_graph_borders no

        # Text settings #
        use_xft yes
        override_utf8_locale yes
        xftfont Neuropolitical:size=8
        xftalpha 0.8
        uppercase no
        temperature_unit celsius
        default_color FFFFFF

        # Lua Load #
        lua_load ~/.lua/scripts/clock_rings.lua
        lua_draw_hook_pre clock_rings

        TEXT
        ${font Neuropolitical:size=42}${time %e}
        ${goto 100}${font Neuropolitical:size=18}${color FF3300}${voffset -75}${time %b}
        ${font Neuropolitical:size=10}${color FF3300}${voffset 15}${time %A}${color FF3300}${hr}
        ${goto 100}${font Neuropolitical:size=15}${color FFFFFF}${voffset -35}${time %Y}
        ${font Neuropolitical:size=30}${voffset 40}${alignc}${time %H}:${time %M}
        ${goto 175}${voffset -30}${font Neuropolitical:size=10}${time %S}
        ${voffset 10}${font Neuropolitical:size=11}${color FF3300}${alignr}HOME${font}
        ${font Neuropolitical:size=13}${color FFFFFF}${alignr}temp: ${weather http://weather.noaa.gov/pub/data/observations/metar/stations/ LQBK temperature temperature 30} °C${font}
        ${hr}
        ${image ~/.conky/logo.png -p 165,10 -s 35x35}
        ${color FFFFFF}${font Neuropolitical:size=8}Uptime: ${uptime_short}
        ${color FFFFFF}${font Neuropolitical:size=8}Processes: ${processes}
        ${color FFFFFF}${font Neuropolitical:size=8}Running: ${running_processes}
        ${color FF3300}${goto 125}${voffset 27}CPU
        ${color FFFFFF}${goto 125}${cpu cpu0}%
        ${color FF3300}${goto 125}${voffset 55}RAM
        ${color FFFFFF}${goto 125}${memperc}%
        ${color FF3300}${goto 125}${voffset 56}Swap
        ${color FFFFFF}${goto 125}${swapperc}%
        ${color FF3300}${goto 125}${voffset 57}Disk
        ${color FFFFFF}${goto 125}${fs_used_perc /}%
        ${color FF3300}${goto 130}${voffset 55}Net
        ${color FFFFFF}${goto 130}${downspeed eth0}
        ${color FFFFFF}${goto 130}${upspeed eth0}
        ${color FF3300}${font Neuropolitical:size=8}${alignr}${nodename}
        ${color FF3300}${font Neuropolitical:size=8}${alignr}${pre_exec cat /etc/issue.net} $machine
        ${color FF3300}${font Neuropolitical:size=8}${alignr}Kernel: ${kernel}
        ${hr}

    Read the article

  • Logparser and Powershell

    - by Michel Klomp
    Logparser in PowerShell

    One of the few examples of how to use Logparser in PowerShell is from the Microsoft.com Operations blog. This script is a good base to create more advanced Logparser scripts:

        $myQuery = new-object -com MSUtil.LogQuery
        $szQuery = "Select top 10 * from r:\ex07011210.log";
        $recordSet = $myQuery.Execute($szQuery)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue(0) + "," + $record.GetValue(1));
        }
        $recordSet.Close();

    Logparser input formats

    The previous example uses the default Logparser object. You can extend this with the Logparser input formats. With these formats you can get information from the event log, different types of logfiles, Active Directory, the registry and XML files. Here are the different ProgIds you can use:

        Input Format   ProgId
        ADS            MSUtil.LogQuery.ADSInputFormat
        BIN            MSUtil.LogQuery.IISBINInputFormat
        CSV            MSUtil.LogQuery.CSVInputFormat
        ETW            MSUtil.LogQuery.ETWInputFormat
        EVT            MSUtil.LogQuery.EventLogInputFormat
        FS             MSUtil.LogQuery.FileSystemInputFormat
        HTTPERR        MSUtil.LogQuery.HttpErrorInputFormat
        IIS            MSUtil.LogQuery.IISIISInputFormat
        IISODBC        MSUtil.LogQuery.IISODBCInputFormat
        IISW3C         MSUtil.LogQuery.IISW3CInputFormat
        NCSA           MSUtil.LogQuery.IISNCSAInputFormat
        NETMON         MSUtil.LogQuery.NetMonInputFormat
        REG            MSUtil.LogQuery.RegistryInputFormat
        TEXTLINE       MSUtil.LogQuery.TextLineInputFormat
        TEXTWORD       MSUtil.LogQuery.TextWordInputFormat
        TSV            MSUtil.LogQuery.TSVInputFormat
        URLSCAN        MSUtil.LogQuery.URLScanLogInputFormat
        W3C            MSUtil.LogQuery.W3CInputFormat
        XML            MSUtil.LogQuery.XMLInputFormat

    Using Logparser to parse IIS logs

    If you use the IISW3CInputFormat you can use the field names instead of the column index to get the information from an IIS logfile; it also skips the comment rows in the logfile.

        $ObjLogparser = new-object -com MSUtil.LogQuery
        $objInputFormat = new-object -com MSUtil.LogQuery.IISW3CInputFormat
        $Query = "Select top 10 * from c:\temp\hb\ex071002.log";
        $recordSet = $ObjLogparser.Execute($Query, $objInputFormat)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue("s-ip") + "," + $record.GetValue("cs-uri-query"));
        }
        $recordSet.Close();
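    The same COM objects can be driven from C# as well. Below is a minimal late-bound sketch; it assumes Log Parser 2.2 is installed and its COM components are registered, and it uses dynamic, so it needs .NET 4.0 or later. The ProgIds are the ones listed in the table above:

        using System;

        class LogParserDemo
        {
            static void Main()
            {
                // Late-bound COM: create the LogQuery object and the IISW3C
                // input format by their ProgIds.
                dynamic logQuery = Activator.CreateInstance(
                    Type.GetTypeFromProgID("MSUtil.LogQuery"));
                dynamic inputFormat = Activator.CreateInstance(
                    Type.GetTypeFromProgID("MSUtil.LogQuery.IISW3CInputFormat"));

                string query = @"SELECT TOP 10 * FROM c:\temp\hb\ex071002.log";
                dynamic recordSet = logQuery.Execute(query, inputFormat);

                // Walk the record set, addressing fields by name instead of
                // by column index.
                for (; !recordSet.atEnd(); recordSet.moveNext())
                {
                    dynamic record = recordSet.getRecord();
                    Console.WriteLine("{0},{1}",
                        record.GetValue("s-ip"),
                        record.GetValue("cs-uri-query"));
                }
                recordSet.Close();
            }
        }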

    Read the article

  • Read only file system

    - by Jack Moon
    I'm running Ubuntu 12.10. Upon opening any shell I get the following error:

        /home/jack/.rbenv/libexec/rbenv-init: line 87: cannot create temp file for here-document: Read-only file system

    I realised this wasn't simply an rbenv issue, as any file I try to write to returns an error saying the file system is read-only. I don't know how else to describe my problem; each time I boot up, the system goes through a disk check, where it supposedly fixes several errors on my disk. Here is my /etc/fstab:

        # <file system> <mount point>   <type>  <options>       <dump>  <pass>
        proc            /proc           proc    nodev,noexec,nosuid 0       0
        # / was on /dev/sda1 during installation
        UUID=1cc4b2ab-a984-4516-ac25-6d64f5050244 /               ext4    errors=remount-ro 0       1
        # swap was on /dev/sda5 during installation
        UUID=4e0dfeae-701a-43ce-b5c6-65f15ab3d8e3 none            swap    sw              0       0

    The entire file system is read-only. I've tried the following:

        sudo fsck.ext4 -f /dev/sda1

    which gave the following (shortened) output:

        /dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
        /dev/sda1: ***** REBOOT LINUX *****
        /dev/sda1: 1257080/45268992 files (1.0% non-contiguous), 50696803/181051904 blocks

    Read the article

  • SQL SERVER – Table Variables and Transactions – SQL in Sixty Seconds #007 – Video

    - by pinaldave
    Today's SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Misconceptions and Resolutions. Quite often I have seen people getting confused by certain behaviors of T-SQL. They expect SQL Server to behave a certain way, and it behaves differently. This kind of issue often creates confusion and frustration. Sometimes I have even seen them mistake it for a bug and submit a bug report, when the reality is totally different. We are going to see a similar concept today. I have quite commonly seen developers assume that table variables will be rolled back when a transaction is rolled back. This sixty-second video demonstrates that table variables are not rolled back when transactions are rolled back.

    More on Errors:
    Difference Temp Table and Table Variable – Effect of Transaction
    Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT
    Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
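    For readers who want to reproduce the behavior outside the video, here is a minimal C# sketch. The connection string is a placeholder you would adjust for your own server; the batch contrasts a table variable with a temp table inside a rolled-back transaction, and prints 1 and 0 respectively:

        using System;
        using System.Data.SqlClient;

        class TableVariableRollbackDemo
        {
            static void Main()
            {
                // Placeholder connection string -- adjust for your environment.
                const string connStr = @"Server=.;Database=tempdb;Integrated Security=true";

                const string batch = @"
        DECLARE @t TABLE (id INT);
        CREATE TABLE #t (id INT);
        BEGIN TRAN;
        INSERT INTO @t VALUES (1);
        INSERT INTO #t VALUES (1);
        ROLLBACK TRAN;
        SELECT (SELECT COUNT(*) FROM @t) AS TableVariableRows,
               (SELECT COUNT(*) FROM #t) AS TempTableRows;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(batch, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        reader.Read();
                        // Prints 1 and 0: the table variable kept its row through
                        // the rollback, while the temp table insert was undone.
                        Console.WriteLine("Table variable rows: {0}", reader.GetInt32(0));
                        Console.WriteLine("Temp table rows:     {0}", reader.GetInt32(1));
                    }
                }
            }
        }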

    Read the article
