Search Results

Search found 10242 results on 410 pages for 'stored proc'.


  • SqlDataAdapter Update is not working in C# with SQL Server

    - by Ahmed
    I am trying to save data from C# form to Sql server Northwind Orders database, I am only using CustomerID, OrderDate and ShippedDate for data entry. Following is the code to Form load and save button: private void Form1_Load(object sender, EventArgs e) { SetComb(); connectionString = ConfigurationManager.AppSettings["connectionString"]; sqlConnection = new SqlConnection(connectionString); String sqlSelect = "Select OrderID, CustomerID, OrderDate, ShippedDate from Orders"; sqlDataMaster = new SqlDataAdapter(sqlSelect, sqlConnection); sqlConnection.Open(); //=============================================================================== //--- Set up the INSERT Command //=============================================================================== sInsProcName = "prInsert_Order"; insertcommand = new SqlCommand(sInsProcName, sqlConnection); insertcommand.CommandType = CommandType.StoredProcedure; insertcommand.Parameters.Add(new SqlParameter("@nNewID", SqlDbType.Int, 0, ParameterDirection.Output, false, 0, 0, "OrderID", DataRowVersion.Default, null)); insertcommand.UpdatedRowSource = UpdateRowSource.OutputParameters; insertcommand.Parameters.Add(new SqlParameter("@sCustomerID", SqlDbType.NChar, 5,"CustomerID")); insertcommand.Parameters["@sCustomerID"].Value = cmbCust.SelectedValue; insertcommand.Parameters.Add(new SqlParameter("@dtOrderDate", SqlDbType.DateTime, 8,"OrderDate")); insertcommand.Parameters["@dtOrderDate"].Value = dtOrdDt.Text; insertcommand.Parameters.Add(new SqlParameter("@dtShipDate", SqlDbType.DateTime, 8,"ShippedDate")); insertcommand.Parameters["@dtShipDate"].Value = dtShipDt.Text; sqlDataMaster.InsertCommand = insertcommand; //=============================================================================== //--- Set up the UPDATE Command //=============================================================================== sUpdProcName = "prUpdate_Order"; updatecommand = new SqlCommand(sUpdProcName, sqlConnection); updatecommand.CommandType = CommandType.StoredProcedure; updatecommand.Parameters.Add(new SqlParameter("@nOrderID", SqlDbType.Int, 4, "OrderID")); updatecommand.Parameters.Add(new SqlParameter("@dtOrderDate", SqlDbType.DateTime, 8, "OrderDate")); updatecommand.Parameters.Add(new SqlParameter("@dtShipDate", SqlDbType.DateTime, 8, "ShippedDate")); sqlDataMaster.UpdateCommand = updatecommand; //=============================================================================== //--- Set up the DELETE Command //=============================================================================== sDelProcName = "prDelete_Order"; deletecommand = new SqlCommand(sDelProcName, sqlConnection); deletecommand.CommandType = CommandType.StoredProcedure; deletecommand.Parameters.Add(new SqlParameter("@nOrderID", SqlDbType.Int, 4, "OrderID")); sqlDataMaster.DeleteCommand = deletecommand; dt = new DataTable(); sqlDataMaster.FillSchema(dt, SchemaType.Source); ds = new DataSet(); ds.Tables.Add(dt); bs = new BindingSource(); bs.DataSource = ds.Tables[0]; } public void SetComb() { cmbCust.DataSource = dm.GetData("Select * from Customers order by CompanyName"); cmbCust.DisplayMember = "CompanyName"; cmbCust.ValueMember = "CustomerId"; cmbCust.Text = ""; } private void btnSave_Click(object sender, EventArgs e) { sqlDataMaster.Update((DataTable) bs.DataSource); } and Stored Procedures for Insert/Update/Delete set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[prInsert_Order] -- ALTER PROCEDURE prInsert_Order @sCustomerID CHAR(5), @dtOrderDate DATETIME, @dtShipDate DATETIME, 
@nNewID INT OUTPUT AS SET NOCOUNT ON INSERT INTO Orders (CustomerID, OrderDate, ShippedDate) VALUES (@sCustomerID, @dtOrderDate, @dtShipDate) SELECT @nNewID = SCOPE_IDENTITY() set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[prUpdate_Order] -- ALTER PROCEDURE prUpdate_Order @nOrderID INT, @dtOrderDate DATETIME, @dtShipDate DATETIME AS UPDATE Orders SET OrderDate = @dtOrderDate, ShippedDate = @dtShipDate WHERE OrderID = @nOrderID set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[prDelete_Order] -- ALTER PROCEDURE prDelete_Order @nOrderID INT AS DELETE Orders WHERE OrderID = @nOrderID In the form CustomerID is selected via combobox which has Display property of CustomerName and Value property of CustomerID. But when clicking save button it shows no error, but it also don't save anything in Orders Table of Northwind....dm.GetData is the method of my Data Access Layer class to just get the info and populate CustomerID combobox. Any help with the code is highly appreciated... Thanks Ahmed
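
    A likely reason nothing is saved: SqlDataAdapter.Update only sends rows whose RowState is Added, Modified or Deleted, and this code never adds a row to the DataTable -- it only pokes values into the InsertCommand parameters at form load. A minimal C# sketch of the save handler under that assumption (control names taken from the question; the explicit Parameters[...].Value assignments would be removed so the source-column mappings supply the values):

        private void btnSave_Click(object sender, EventArgs e)
        {
            DataTable dt = (DataTable)bs.DataSource;

            DataRow row = dt.NewRow();                   // RowState becomes Added below
            row["CustomerID"]  = cmbCust.SelectedValue;  // column mappings feed @sCustomerID
            row["OrderDate"]   = DateTime.Parse(dtOrdDt.Text);
            row["ShippedDate"] = DateTime.Parse(dtShipDt.Text);
            dt.Rows.Add(row);

            sqlDataMaster.Update(dt);                    // prInsert_Order now actually runs
        }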

    Read the article

  • ActionScript MovieClip moves to the left, but not the right

    - by Defcon
    I have a stage with a movie clip with the instance name of "mc". Currently I have a code that is suppose to move the player left and right, and when the left or right key is released, the "mc" slides a little bit. The problem I'm having is that making the "mc" move to the left works, but the exact some code used for the right doesn't. All of this code is present on the Main Stage - Frame One //Variables var mcSpeed:Number = 0;//MC's Current Speed var mcJumping:Boolean = false;//if mc is Jumping var mcFalling:Boolean = false;//if mc is Falling var mcMoving:Boolean = false;//if mc is Moving var mcSliding:Boolean = false;//if mc is sliding var mcSlide:Number = 0;//Stored for use when creating slide var mcMaxSlide:Number = 1.6;//Max Distance the object will slide. //Player Move Function p1Move = new Object(); p1Move = function (dir:String, maxSpeed:Number) { if (dir == "left" && _root.mcSpeed<maxSpeed) { _root.mcSpeed += .2; _root.mc._x -= _root.mcSpeed; } else if (dir == "right" && _root.mcSpeed<maxSpeed) { _root.mcSpeed += .2; _root.mc._x += _root.mcSpeed; } else if (dir == "left" && speed>=maxSpeed) { _root.mc._x -= _root.mcSpeed; } else if (dir == "right" && _root.mcSpeed>=maxSpeed) { _root.mc._x += _root.mcSpeed; } } //onEnterFrame for MC mc.onEnterFrame = function():Void { if (Key.isDown(Key.LEFT)) { if (_root.mcMoving == false && _root.mcSliding == false) { _root.mcMoving = true; } else if (_root.mcMoving == true && _root.mcSliding == false) { _root.p1Move("left",5); } } else if (!Key.isDown(Key.LEFT)) { if (_root.mcMoving == true && _root.mcSliding == false) { _root.mcSliding = true; } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide<_root.mcMaxSlide) { _root.mcSlide += .2; this._x -= .2; } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide>=_root.mcMaxSlide) { _root.mcMoving = false; _root.mcSliding = false; _root.mcSlide = 0; _root.mcSpeed = 0; } } else if (Key.isDown(Key.RIGHT)) { if (_root.mcMoving == false && _root.mcSliding == false) { _root.mcMoving = true; } else if (_root.mcMoving == true && _root.mcSliding == false) { _root.p1Move("right",5); } } else if (!Key.isDown(Key.RIGHT)) { if (_root.mcMoving == true && _root.mcSliding == false) { _root.mcSliding = true; } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide<_root.mcMaxSpeed) { _root.mcSlide += .2; this._x += .2; } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide>=_root.mcMax) { _root.mcMoving = false; _root.mcSliding = false; _root.mcSlide = 0; _root.mcSpeed = 0; } } }; I just don't get why when you press the left arrow its works completely fine, but when you press the right arrow it doesn't respond. It is literally the same code.
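
    The right-arrow code is unreachable: the handler is a single if / else-if chain, and whenever Key.isDown(Key.LEFT) is false the else if (!Key.isDown(Key.LEFT)) arm matches first, so the two RIGHT branches below it are never tested. (The right-slide branch also reads _root.mcMaxSpeed and _root.mcMax, which are never declared -- mcMaxSlide is.) The fix is two independent if/else pairs, one per key; the shape is sketched below in C# for consistency with the other examples on this page, and applies identically to the AS3 handler:

        // Each key gets its own independent test, never chained into one
        // if / else-if ladder, so the RIGHT logic stays reachable.
        void OnEnterFrame(bool leftDown, bool rightDown)
        {
            if (leftDown)  Move(-1);    // the question's p1Move("left", 5) path
            else           Slide(-1);   // the left slide-out / flag-reset path

            if (rightDown) Move(+1);    // independent test -- now reachable
            else           Slide(+1);   // the right slide-out / flag-reset path
        }

        // Hypothetical stand-ins for the question's move and slide branches.
        void Move(int dir)  { /* accelerate and shift mc._x by dir * speed */ }
        void Slide(int dir) { /* ease out by dir * 0.2, then reset the flags */ }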

    Read the article

  • Accessing Layout Items from inside Widget AppWidgetProvider

    - by cam4mav
    I am starting to go insane trying to figure this out. It seems like it should be very easy, I'm starting to wonder if it's possible. What I am trying to do is create a home screen widget, that only contains an ImageButton. When it is pressed, the idea is to change some setting (like the wi-fi toggle) and then change the Buttons image. I have the ImageButton declared like this in my main.xml <ImageButton android:id="@+id/buttonOne" android:src="@drawable/button_normal_ringer" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" /> my AppWidgetProvider class, named ButtonWidget * note that the RemoteViews class is a locally stored variable. this allowed me to get access to the RViews layout elements... or so I thought. @Override public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) { remoteViews = new RemoteViews(context.getPackageName(), R.layout.main); Intent active = new Intent(context, ButtonWidget.class); active.setAction(VIBRATE_UPDATE); active.putExtra("msg","TESTING"); PendingIntent actionPendingIntent = PendingIntent.getBroadcast(context, 0, active, 0); remoteViews.setOnClickPendingIntent(R.id.buttonOne, actionPendingIntent); appWidgetManager.updateAppWidget(appWidgetIds, remoteViews); } @Override public void onReceive(Context context, Intent intent) { // v1.5 fix that doesn't call onDelete Action final String action = intent.getAction(); Log.d("onReceive",action); if (AppWidgetManager.ACTION_APPWIDGET_DELETED.equals(action)) { final int appWidgetId = intent.getExtras().getInt( AppWidgetManager.EXTRA_APPWIDGET_ID, AppWidgetManager.INVALID_APPWIDGET_ID); if (appWidgetId != AppWidgetManager.INVALID_APPWIDGET_ID) { this.onDeleted(context, new int[] { appWidgetId }); } } else { // check, if our Action was called if (intent.getAction().equals(VIBRATE_UPDATE)) { String msg = "null"; try { msg = intent.getStringExtra("msg"); } catch (NullPointerException e) { Log.e("Error", "msg = null"); } Log.d("onReceive",msg); if(remoteViews != null){ Log.d("onReceive",""+remoteViews.getLayoutId()); remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer); Log.d("onReceive", "tried to switch"); } else{ Log.d("F!", "--naughty language used here!!!--"); } } super.onReceive(context, intent); } } so, I've been testing this and the onReceive method works great, I'm able to send notifications and all sorts of stuff (removed from code for ease of reading) the one thing I can't do is change any properties of the view elements. To try and fix this, I made RemoteViews a local and static private variable. Using log's I was able to see that When multiple instances of the app are on screen, they all refer to the one instance of RemoteViews. perfect for what I'm trying to do The trouble is in trying to change the image of the ImageButton. I can do this from within the onUpdate method using this. remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer); that doesn't do me any good though once the widget is created. For some reason, even though its inside the same class, being inside the onReceive method makes that line not work. That line used to throw a Null pointer as a matter of fact, until I changed the variable to static. now it passes the null test, refers to the same layoutId as it did at the start, reads the line, but it does nothing. Its like the code isn't even there, just keeps chugging along. SO...... 
Is there any way to modify layout elements from within a widget after the widget has been created? I want to do this based on the environment, not with a configuration activity launch. I've been looking at various questions and this seems to be an issue that really hasn't been solved. For anyone who finds this and wants a good starting tutorial for widgets, there is an easy-to-follow .pdf (though a bit old, it gets you comfortable with widgets). Hopefully someone can help here. I have a feeling that this approach isn't supported and there is a different way to go about it. I would LOVE to be told another approach! Thanks

    Read the article

  • C# slowdown while creating a bitmap - calculating distances from a large List of places for each pixel

    - by user576849
    I'm creating a graphic of the glow of lights above a geographic location based upon Walkers Law: Skyglow=0.01*Population*DistanceFromCenter^-2.5 I have a CSV file of places with 66,000 records using 5 fields (id,name,population,latitude,longitude), parsed on the FormLoad event and stored it in: List<string[]> placeDataList Then I set up nested loops to fill in a bitmap using SetPixel. For each pixel on the bitmap, which represents a coordinate on a map (latitude and longitude), the program loops through placeDataList – calculating the distance from that coordinate (pixel) to each place record. The distance (along with population) is used in a calculation to find how much cumulative sky glow is contributed to the coordinate from each place record. So, for every pixel, 66,000 distance calculations must be made. The problem is, this is predictably EXTREMELY slow – on the order of one line of pixels per 30 seconds or so on a 320 pixel wide image. This is unrelated to SetPixel, which I know is also slow, because the speed is similarly slow when adding the distance calculation results to an array. I don’t actually need to test all 66,000 records for every pixel, only the records within 150 miles (i.e. no skyglow is contributed to a coordinate from a small town 3000 miles away). But to find which records are within 150 miles of my coordinate I would still need to loop through all the records for each pixel. I can't use a smaller number of records because all 66,000 places contribute to skyglow for SOME coordinate in my map as it loops. This seems like a Catch-22, so I know there must be a better method out there. Like I mentioned, the slowdown is related to how many calculations I’m making per pixel, not anything to do with the bitmap. Any suggestions? private void fillPixels(int width) { Color pixelColor; int pixel_w = width; int pixel_h = (int)Math.Floor((width * 0.424088664)); Bitmap bmp = new Bitmap(pixel_w, pixel_h); for (int i = 0; i < pixel_h; i++) for (int j = 0; j < pixel_w; j++) { pixelColor = getPixelColor(i, j); bmp.SetPixel(j, i, pixelColor); } bmp.Save("Nightfall", System.Drawing.Imaging.ImageFormat.Jpeg); pictureBox1.Image = bmp; MessageBox.Show("Done"); } private Color getPixelColor(int height, int width) { int c; double glow,d,cityLat,cityLon,cityPop; double testLat, testLon; int size_h = (int)Math.Floor((size_w * 0.424088664)); ; testLat = (height * (24.443136 / size_h)) + 24.548874; testLon = (width * (57.636853 / size_w)) -124.640767; glow = 0; for (int i = 0; i < placeDataList.Count; i++) { cityPop=Convert.ToDouble(placeDataList[i][2]); cityLat=Convert.ToDouble(placeDataList[i][3]); cityLon=Convert.ToDouble(placeDataList[i][4]); d = distance(testLat, testLon, cityLat, cityLon,"M"); if(d<150) glow = glow+(0.01 * cityPop * Math.Pow(d, -2.5)); } if (glow >= 1) glow=1; c = (int)Math.Ceiling(glow * 255); return Color.FromArgb(c, c, c); }
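
    A standard way out of the catch-22 is spatial bucketing: hash every place into a coarse lat/lon grid once, then each pixel scans only its own cell and the eight neighbours instead of all 66,000 records. A hedged C# sketch -- the 2.5-degree cell size is an assumption; pick a cell at least as wide as the 150-mile cutoff at your highest latitude, or widen the neighbour scan:

        // Build once after parsing the CSV; query per pixel.
        const double CellDeg = 2.5;   // assumption: roughly 150+ miles per cell side

        Dictionary<(int, int), List<string[]>> grid = new();

        (int, int) CellOf(double lat, double lon) =>
            ((int)Math.Floor(lat / CellDeg), (int)Math.Floor(lon / CellDeg));

        void BuildGrid(List<string[]> placeDataList)
        {
            foreach (var rec in placeDataList)   // rec[3] = latitude, rec[4] = longitude
            {
                var key = CellOf(Convert.ToDouble(rec[3]), Convert.ToDouble(rec[4]));
                if (!grid.TryGetValue(key, out var bucket))
                    grid[key] = bucket = new List<string[]>();
                bucket.Add(rec);
            }
        }

        IEnumerable<string[]> NearbyPlaces(double lat, double lon)
        {
            var (r, c) = CellOf(lat, lon);
            for (int dr = -1; dr <= 1; dr++)
                for (int dc = -1; dc <= 1; dc++)
                    if (grid.TryGetValue((r + dr, c + dc), out var bucket))
                        foreach (var rec in bucket)
                            yield return rec;    // only these need distance()
        }

    getPixelColor would then loop over NearbyPlaces(testLat, testLon) instead of placeDataList, turning 66,000 distance calls per pixel into a few hundred.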

    Read the article

  • How to implement this .NET client and PHP server synchronization scenario?

    - by pbean
    I have a PHP webserver and a .NET mobile application. The .NET application needs data from a database, which is provided (for now) by the php webserver. I'm fairly new to this kind of scenario so I'm not sure what the best practices are. I ran into a couple of problems and I am not certain how to overcome them. For now, I have the following setup. I run a PHP SOAP server which has a couple of operations which simply retrieve data from the database. For this web service I have created a WSDL file. In Visual Studio I added a web reference to my project using the WSDL file and it generated some classes for it. I simply call something like "MyWebService.GetItems();" and I get an array of items in my .NET application, which come straight from the database. Next I also serialize all these retrieved objects to local (permanent) storage. I face a couple of challenges which I don't know how to resolve properly. The idea is for the mobile client to synchronize the data once (at the start of the day), before working, and then use the local storage throughout the day, and synchronize it back at the end of the day. At the moment all data is downloaded through SOAP, and not a subset (only what is needed). How would I know which new information should be sent to the client? Only the server knows what is new, but only the client knows for sure which data it already has. I seem to be doing double work now. The data which is transferred with SOAP basically already are serialized objects. But at the moment I first retrieve all objects through SOAP and the .NET framework automatically deserializes it. Then I serialize all data again myself. It would be more efficient to simply download it to storage, and then deserialize it. The mobile device does not have a lot of memory. In my test data this is not really a problem, but with potentially tenths of thousands of records I can imagine this will become a problem. Currently when I call the SOAP method, it loads all data into memory (as an array) but as described above perhaps it would be better to have it stored into storage directly, and only retrieve from storage the objects that are needed. But how would I store this? An array would serialize to one big XML (or binary) file from which I cannot choose which objects to load. Would I make (possible tenths of thousands) separate files? Also at the end of the day when I want to send the changes back to the server, how would I know which objects to send... since they're not in memory? I hope my troubles are clear and I hope you are able to help me figure out how to implement this the best way. :) Some things are already fixed (like using .NET on the mobile device, using PHP and having a MySQL database server) but other things can most certainly be changed (like using SOAP).
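
    For the "who knows what is new" problem, a common pattern is a per-record last-modified timestamp on the server plus a last-sync timestamp on the client; the client asks only for the delta and tracks its own local changes with a dirty flag. A rough C# sketch -- GetItemsChangedSince and GetServerTimeUtc are hypothetical SOAP operations you would add to the PHP service, and SaveRecordFile is a hypothetical helper:

        public sealed class SyncState
        {
            public DateTime LastSyncUtc;          // persisted on the device
        }

        public void SyncDown(SyncState state)
        {
            // Server-side this is: SELECT ... WHERE updated_at > :lastSync
            Item[] changed = MyWebService.GetItemsChangedSince(state.LastSyncUtc);

            foreach (Item item in changed)
                SaveRecordFile(item.Id, item);    // one small file per record, so the
                                                  // whole set never has to sit in memory

            state.LastSyncUtc = MyWebService.GetServerTimeUtc();  // server clock, not the phone's
        }

    At the end of the day the client sends back only the records it flagged dirty, which also answers the "which objects to send" question without loading everything into memory.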

    Read the article

  • CRM 2011 Plugin for CREATE (post-operational): Why is the value of "baseamount" zero in post entity image and target?

    - by Olli
    REFORMULATED QUESTION (Apr 24): I am using the CRM Developer Toolkit for VS2012 to create a CRM2011 plugin. The plugin is registered for the CREATE message of the "Invoice Product" entity. Pipeline-Stage is post-operational, execution is synchronous. I register for a post image that contains baseamount. The toolkit creates an execute function that looks like this: protected void ExecutePostInvoiceProductCreate(LocalPluginContext localContext) { if (localContext == null) { throw new ArgumentNullException("localContext"); } IPluginExecutionContext context = localContext.PluginExecutionContext; Entity postImageEntity = (context.PostEntityImages != null && context.PostEntityImages.Contains(this.postImageAlias)) ? context.PostEntityImages[this.postImageAlias] : null; } Since we are in post operational stage, the value of baseamount in postImageEntity should already be calculated from the user input, right? However, the value of baseamountin the postImageEntity is zero. The same holds true for the value of baseamount in the target entity that I get using the following code: Entity targetEntity = (context.InputParameters != null && context.InputParameters.Contains("Target")) ? (Entity)context.InputParameters["Target"] : null; Using a retrieve request like the one below, I am getting the correct value of baseamount: Entity newlyCreated = service.Retrieve("invoicedetail", targetEntity.Id, new ColumnSet(true)); decimal baseAmount = newlyCreated.GetAttributeValue<Money>("baseamount").Value; The issue does not appear in post operational stage of an update event. I'd be glad to hear your ideas/explanations/suggestions on why this is the case... (Further information: Remote debugging, no isolation mode, plugin stored in database) Original Question: I am working on a plugin for CRM 2011 that is supposed to calculate the amount of tax to be paid when an invoice detail is created. To this end I am trying to get the baseamount of the newly created invoicedetail entity from the post entity image in post operational stage. As far as I understood it, the post entity image is a snapshot of the entity in the database after the new invoice detail has been created. Thus it should contain all properties of the newly created invoice detail. I am getting a "postentityimages" property of the IPluginExecutionContext that contains an entity with the alias I registered ("postImage"). This "postImage" entity contains a key for "baseamount" but its value is 0. Can anybody help me understand why this is the case and what I can do about it? (I also noticed that the postImage does not contain all but only a subset of the entities I registered for.) Here is what the code looks like: protected void ExecutePostInvoiceProductCreate(LocalPluginContext localContext) { if (localContext == null) { throw new ArgumentNullException("localContext"); } // Get PluginExecutionContext to obtain PostEntityImages IPluginExecutionContext context = localContext.PluginExecutionContext; // This works: I get a postImage that is not null. Entity postImage = (context.PostEntityImages != null && context.PostEntityImages.Contains(this.postImageAlias)) ? context.PostEntityImages[this.postImageAlias] : null; // Here is the problem: There is a "baseamount" key in the postImage // but its value is zero! decimal baseAmount = ((Money)postImage["baseamount"]).Value; } ADDITION: Pre and post images for post operational update contain non-zero values for baseamount.

    Read the article

  • Find out when all processes in a (void) method are done?

    - by Emil
    Hey. I need to know how you can find out when all processes (loaded) from a - (void) are done, if it's possible. Why? I'm loading in data for a UITableView, and I need to know when a Loading... view can be replaced with the UITableView, and when I can start creating the cells. This is my code: - (void) reloadData { NSAutoreleasePool *releasePool = [[NSAutoreleasePool alloc] init]; NSLog(@"Reloading data."); NSURL *urlPosts = [NSURL URLWithString:[NSString stringWithFormat:@"%@", URL]]; NSError *lookupError = nil; NSString *data = [[NSString alloc] initWithContentsOfURL:urlPosts encoding:NSUTF8StringEncoding error:&lookupError]; postsData = [data componentsSeparatedByString:@"~"]; [data release], data = nil; urlPosts = nil; self.numberOfPosts = [[postsData objectAtIndex:0] intValue]; self.postsArrayID = [[postsData objectAtIndex:1] componentsSeparatedByString:@"#"]; self.postsArrayDate = [[postsData objectAtIndex:2] componentsSeparatedByString:@"#"]; self.postsArrayTitle = [[postsData objectAtIndex:3] componentsSeparatedByString:@"#"]; self.postsArrayComments = [[postsData objectAtIndex:4] componentsSeparatedByString:@"#"]; self.postsArrayImgSrc = [[postsData objectAtIndex:5] componentsSeparatedByString:@"#"]; NSMutableArray *writeToPlist = [NSMutableArray array]; NSMutableArray *writeToNoImagePlist = [NSMutableArray array]; NSMutableArray *imagesStored = [NSMutableArray arrayWithContentsOfFile:[rootPath stringByAppendingPathComponent:@"imagesStored.plist"]]; int loop = 0; for (NSString *postID in postsArrayID) { if ([imagesStored containsObject:[NSString stringWithFormat:@"%@.png", postID]]){ NSLog(@"Allready stored, jump to next. ID: %@", postID); continue; } NSLog(@"%@.png", postID); NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:[postsArrayImgSrc objectAtIndex:loop]]]; // If image contains anything, set cellImage to image. If image is empty, try one more time or use noImage.png, set in IB if (imageData == nil){ NSLog(@"imageData is empty before trying .jpeg"); // If image == nil, try to replace .jpg with .jpeg, and if that worked, set cellImage to that image. If that is also nil, use noImage.png, set in IB. 
imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:[[postsArrayImgSrc objectAtIndex:loop] stringByReplacingOccurrencesOfString:@".jpg" withString:@".jpeg"]]]; } if (imageData != nil){ NSLog(@"imageData is NOT empty when creating file"); [fileManager createFileAtPath:[rootPath stringByAppendingPathComponent:[NSString stringWithFormat:@"images/%@.png", postID]] contents:imageData attributes:nil]; [writeToPlist addObject:[NSString stringWithFormat:@"%@.png", postID]]; } else { [writeToNoImagePlist addObject:[NSString stringWithFormat:@"%@", postID]]; } imageData = nil; loop++; NSLog(@"imagePlist: %@\nnoImagePlist: %@", writeToPlist, writeToNoImagePlist); } NSMutableArray *writeToAllPlist = [NSMutableArray arrayWithArray:writeToPlist]; [writeToPlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:nowPlist]]; [writeToAllPlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:[rootPath stringByAppendingPathComponent:@"imagesStored.plist"]]]; [writeToNoImagePlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:[rootPath stringByAppendingPathComponent:@"noImage.plist"]]]; [writeToPlist writeToFile:nowPlist atomically:YES]; [writeToAllPlist writeToFile:[rootPath stringByAppendingPathComponent:@"imagesStored.plist"] atomically:YES]; [writeToNoImagePlist writeToFile:[rootPath stringByAppendingPathComponent:@"noImage.plist"] atomically:YES]; [releasePool release]; }
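
    Worth noting before any code: reloadData as posted is synchronous, so every statement in it has finished the moment the call returns -- "all processes are done" is simply the next line after the call. The question only becomes interesting once the work moves to a background thread, where completion has to be signalled explicitly. The shape of that pattern, sketched in C# for consistency with the other examples on this page (the view helpers are hypothetical):

        async Task ReloadDataAsync()
        {
            ShowLoadingView();                      // the "Loading..." placeholder
            await Task.Run(() => ReloadData());     // the long synchronous work
            // Everything inside ReloadData has completed by this point,
            // so it is now safe to swap in the table and build the cells.
            HideLoadingView();
            ShowTableView();
        }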

    Read the article

  • How to safely operate on parameters in threads, using C++ & Pthreads?

    - by ChrisCphDK
    Hi. I'm having some trouble with a program using pthreads, where occassional crashes occur, that could be related to how the threads operate on data So I have some basic questions about how to program using threads, and memory layout: Assume that a public class function performs some operations on some strings, and returns the result as a string. The prototype of the function could be like this: std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo) { //Error checking of strings have been omitted std::string result = strOne.substr(0,5) + strTwo.substr(0,5); return result; } Is it correct to assume that strings, being dynamic, are stored on the heap, but that a reference to the string is allocated on the stack at runtime? Stack: [Some mem addr] pointer address to where the string is on the heap Heap: [Some mem addr] memory allocated for the initial string which may grow or shrink To make the function thread safe, the function is extended with the following mutex (which is declared as private in the "SomeClass") locking: std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo) { pthread_mutex_lock(&someclasslock); //Error checking of strings have been omitted std::string result = strOne.substr(0,5) + strTwo.substr(0,5); pthread_mutex_unlock(&someclasslock); return result; } Is this a safe way of locking down the operations being done on the strings (all three), or could a thread be stopped by the scheduler in the following cases, which I'd assume would mess up the intended logic: a. Right after the function is called, and the parameters: strOne & strTwo have been set in the two reference pointers that the function has on the stack, the scheduler takes away processing time for the thread and lets a new thread in, which overwrites the reference pointers to the function, which then again gets stopped by the scheduler, letting the first thread back in? b. Can the same occur with the "result" string: the first string builds the result, unlocks the mutex, but before returning the scheduler lets in another thread which performs all of it's work, overwriting the result etc. Or are the reference parameters / result string being pushed onto the stack while another thread is doing performing it's task? Is the safe / correct way of doing this in threads, and "returning" a result, to pass a reference to a string that will be filled with the result instead: void SomeClass::somefunc(const std::string &strOne, const std::string &strTwo, std::string result) { pthread_mutex_lock(&someclasslock); //Error checking of strings have been omitted result = strOne.substr(0,5) + strTwo.substr(0,5); pthread_mutex_unlock(&someclasslock); } The intended logic is that several objects of the "SomeClass" class creates new threads and passes objects of themselves as parameters, and then calls the function: "someFunc": int SomeClass::startNewThread() { pthread_attr_t attr; pthread_t pThreadID; if(pthread_attr_init(&attr) != 0) return -1; if(pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED) != 0) return -2; if(pthread_create(&pThreadID, &attr, proxyThreadFunc, this) != 0) return -3; if(pthread_attr_destroy(&attr) != 0) return -4; return 0; } void* proxyThreadFunc(void* someClassObjPtr) { return static_cast<SomeClass*> (someClassObjPtr)->somefunc("long string","long string"); } Sorry for the long description. But I hope the questions and intended purpose is clear, if not let me know and I'll elaborate. Best regards. Chris
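
    Two of the worries can be answered directly: every thread gets its own stack, so the parameters strOne/strTwo, the local result, and the return value are private to each call -- the scheduler switching threads cannot make one call "overwrite the reference pointers" of another. The mutex is only needed for data the threads actually share. (One real bug in the last variant: result is passed by value, so the caller never sees it; it would need to be std::string &result.) The thread-locality point, sketched in C# for consistency with the other examples here:

        private readonly object gate = new object();
        private int sharedCalls;                       // shared state -> needs the lock

        public string SomeFunc(string one, string two)
        {
            // 'one', 'two' and 'result' live in the calling thread's own frame;
            // concurrent calls each get their own copies.
            string result = one.Substring(0, 5) + two.Substring(0, 5);

            lock (gate) { sharedCalls++; }             // lock only the shared part
            return result;                             // returning by value is safe
        }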

    Read the article

  • Having different database sorting order (default_scope) for two different views

    - by Juniper747
    In my model (pins.rb), I have two sorting orders: default_scope order: 'pins.featured DESC' #for adding featured posts to the top of a list default_scope order: 'pins.created_at DESC' #for adding the remaining posts beneath the featured posts This sorting order (above) is how I want my 'pins view' (index.html.erb) to look. Which is just a list of ALL user posts. In my 'users view' (show.html.erb) I am using the same model (pins.rb) to list only current_user pins. HOWEVER, I want to sorting order to ignore the "featured" default scope and only use the second scope: default_scope order: 'pins.created_at DESC' How can I accomplish this? I tried doing something like this: default_scope order: 'pins.featured DESC', only: :index default_scope order: 'pins.created_at DESC' But that didn't fly... UPDATE I updated my model to define a scope: scope :featy, order: 'pins.featured DESC' default_scope order: 'pins.created_at DESC' And updated my pins view to: <%= render @pins.featy %> However, now when I open my pins view, I get the error: undefined method `featy' for #<Array:0x00000100ddbc78> UPDATE 2 User.rb class User < ActiveRecord::Base attr_accessible :name, :email, :username, :password, :password_confirmation, :avatar, :password_reset_token, :password_reset_sent_at has_secure_password has_many :pins, dependent: :destroy #destroys user posts when user is destroyed # has_many :featured_pins, order: 'featured DESC', class_name: "Pin", source: :pin has_attached_file :avatar, :styles => { :medium => "300x300#", :thumb => "120x120#" } before_save { |user| user.email = user.email.downcase } before_save { |user| user.username = user.username.downcase } before_save :create_remember_token before_save :capitalize_name validates :name, presence: true, length: { maximum: 50 } VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i VALID_USERNAME_REGEX = /^[A-Za-z0-9]+(?:[_][A-Za-z0-9]+)*$/ validates :email, presence: true, format: { with: VALID_EMAIL_REGEX }, uniqueness: { case_sensitive: false } validates :username, presence: true, format: { with: VALID_USERNAME_REGEX }, uniqueness: { case_sensitive: false } validates :password, length: { minimum: 6 }, on: :create #on create, because was causing erros on pw_reset Pin.rb class Pin < ActiveRecord::Base attr_accessible :content, :title, :privacy, :date, :dark, :bright, :fragmented, :hashtag, :emotion, :user_id, :imagesource, :imageowner, :featured belongs_to :user before_save :capitalize_title before_validation :generate_slug validates :content, presence: true, length: { maximum: 8000 } validates :title, presence: true, length: { maximum: 24 } validates :imagesource, presence: { message: "Please search and choose an image" }, length: { maximum: 255 } validates_inclusion_of :privacy, :in => [true, false] validates :slug, uniqueness: true, presence: true, exclusion: {in: %w[signup signin signout home info privacy]} # for sorting featured and newest posts first default_scope order: 'pins.created_at DESC' scope :featured_order, order: 'pins.featured DESC' def to_param slug # or "#{id}-#{name}".parameterize end def generate_slug # makes the url slug address bar freindly self.slug ||= loop do random_token = Digest::MD5.hexdigest(Time.zone.now.to_s + title)[0..9]+"-"+"#{title}".parameterize break random_token unless Pin.where(slug: random_token).exists? 
end end protected def capitalize_title self.title = title.split.map(&:capitalize).join(' ') end end users_controller.rb class UsersController < ApplicationController before_filter :signed_in_user, only: [:edit, :update, :show] before_filter :correct_user, only: [:edit, :update, :show] before_filter :admin_user, only: :destroy def index if !current_user.admin? redirect_to root_path end end def menu @user = current_user end def show @user = User.find(params[:id]) @pins = @user.pins current_user.touch(:last_log_in) #sets the last log in time if [email protected]? render 'pages/info/' end end def new @user = User.new end pins_controller.rb class PinsController < ApplicationController before_filter :signed_in_user, except: [:show] # GET /pins, GET /pins.json def index #Live Feed @pins = Pin.all @featured_pins = Pin.featured_order respond_to do |format| format.html # index.html.erb format.json { render json: @pins } end end # GET /pins, GET /pins.json def show #single Pin View @pin = Pin.find_by_slug!(params[:id]) require 'uri' #this gets the photo's id from the stored uri @image_id = URI(@pin.imagesource).path.split('/').second if @pin.privacy == true #check for private pins if signed_in? if @pin.user_id == current_user.id respond_to do |format| format.html # show.html.erb format.json { render json: @pin } end else redirect_to home_path, notice: "Prohibited 1" end else redirect_to home_path, notice: "Prohibited 2" end else respond_to do |format| format.html # show.html.erb format.json { render json: @pin } end end end # GET /pins, GET /pins.json def new @pin = current_user.pins.new respond_to do |format| format.html # new.html.erb format.json { render json: @pin } end end # GET /pins/1/edit def edit @pin = current_user.pins.find_by_slug!(params[:id]) end Finally, on my index.html.erb I have: <%= render @featured_pins %>

    Read the article

  • Two functions work separately, but not when I put them together

    - by Error_404
    I'm really new to C(I have been learning Cuda and wanted to learn C so I can run everything together instead of generating data in Java/Python and copy/pasting it manually to my Cuda program to play with). I am trying to open a large file(20+gb) and parse some data but because I was having problems I decided to try to break down my problem first to verify I can open and read line by line a file and then in another file take a sample of the output of a line and try to parse it, then if all goes well I can simply bring them together. I managed(after a bit of a struggle) to get each part working but when I combine them together it doesn't work(I tried in both eclipse & QT creator. QT creator says it exited unexpectedly and eclipse just stops..). Here's the parsing part(it works exactly as I want): #include <string.h> #include <stdio.h> int main() { char line[1024] = "[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 4],"; size_t len = strlen(line); memmove(line, line+1, len-3); line[len-3] = 0; printf("%s \n",line); char str[100], *s = str, *t = NULL; strcpy(str, line); while ((t = strtok(s, " ,")) != NULL) { s = NULL; printf(":%s:\n", t); } return 0; } The data in variable line is exactly as I get if I print each line of a file. But when I copy/paste it into my main program then it doesn't work(QT shows nothing and eclipse just shows the first line). This code is just a few file IO commands that are wrapped around the commands above(everything within the while loop is the exact same as above) #include <stdio.h> #include <stdlib.h> int main() { char line[1024]; FILE *fp = fopen("/Users/me/Desktop/output.txt","r"); printf("Starting.. \n"); int count = 0; int list[30]; //items will be stored here while(fgets(line, 1024, fp) != NULL){ count++; //char line[1024] = "[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 4],"; size_t len = strlen(line); memmove(line, line+1, len-3); line[len-3] = 0; printf("%s \n",line); char str[100], *s = str, *t = NULL; strcpy(str, line); while ((t = strtok(s, " ,")) != NULL) { s = NULL; printf(":%s:\n", t); } } printf(" num of lines is %i \n", count); return 0; } eclipse output: Starting.. 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 4 I can't figure out what I'm doing wrong. I'm using a mac with gcc 4.2, if that helps. The only thing I can think of is maybe I'm doing something wrong the way I parse the list variable so its not accessible later on(I'm not sure its a wild guess). Is there anything I'm missing?

    Read the article

  • Table name is null in the SQLite database on an Android device

    - by Mahe
    I am building a simple app which stores some contacts and retrieves contacts in android phone device. I have created my own database and a table and inserting the values to the table in phone. My phone is not rooted. So I cannot access the files, but I see that values are stored in the table. And tested on a emulator also. Till here it is fine. Display all the contacts in a list by fetching data from table. This is also fine. But the problem is When I am trying to delete the record, it shows the table name is null in the logcat(not an exception), and the data is not deleted. But in emulator the data is getting deleted from table. I am not able to achieve this through phone. This is my code for deleting, public boolean onContextItemSelected(MenuItem item) { super.onContextItemSelected(item); AdapterView.AdapterContextMenuInfo info = (AdapterView.AdapterContextMenuInfo) item .getMenuInfo(); int menuItemIndex = item.getItemId(); String[] menuItems = getResources().getStringArray(R.array.menu); String menuItemName = menuItems[menuItemIndex]; String listItemName = Customers[info.position]; if (item.getTitle().toString().equalsIgnoreCase("Delete")) { Toast.makeText( context, "Selected List item is: " + listItemName + "MenuItem is: " + menuItemName, Toast.LENGTH_LONG).show(); DB = context.openOrCreateDatabase("CustomerDetails.db", MODE_PRIVATE, null); try { int pos = info.position; pos = pos + 1; Log.d("", "customers[pos]: " + Customers[info.position]); Cursor c = DB .rawQuery( "Select customer_id,first_name,last_name from CustomerInfo", null); int rowCount = c.getCount(); DB.delete(Table_name, "customer_id" + "=" + String.valueOf(pos), null); DB.close(); Log.d("", "" + String.valueOf(pos)); Toast.makeText(context, "Deleted Customer", Toast.LENGTH_LONG) .show(); // Customers[info.position]=null; getCustomers(); } catch (Exception e) { Toast.makeText(context, "Delete unsuccessfull", Toast.LENGTH_LONG).show(); } } this is my logcat, 07-02 10:12:42.976: D/Cursor(1560): Database path: CustomerDetails.db 07-02 10:12:42.976: D/Cursor(1560): Table name : null 07-02 10:12:42.984: D/Cursor(1560): Database path: CustomerDetails.db 07-02 10:12:42.984: D/Cursor(1560): Table name : null Don't know the reason why data is not being deleted. Data exists in the table. This is the specification I have given for creating the table public static String customer_id="customer_id"; public static String site_id="site_id"; public static String last_name="last_name"; public static String first_name="first_name"; public static String phone_number="phone_number"; public static String address="address"; public static String city="city"; public static String state="state"; public static String zip="zip"; public static String email_address="email_address"; public static String custlat="custlat"; public static String custlng="custlng"; public static String Table_name="CustomerInfo"; final SQLiteDatabase DB = context.openOrCreateDatabase( "CustomerDetails.db", MODE_PRIVATE, null); final String CREATE_TABLE = "create table if not exists " + Table_name + " (" + customer_id + " integer primary key autoincrement, " + first_name + " text not null, " + last_name + " text not null, " + phone_number+ " integer not null, " + address+ " text not null, " + city+ " text not null, " + state+ " text not null, " + zip+ " integer not null, " + email_address+ " text not null, " + custlat+ " double, " + custlng+ " double " +" );"; DB.execSQL(CREATE_TABLE); DB.close(); Please correct my code. I am struggling from two days. 
Any help is appreciated!!
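
    Independent of the logcat lines (the "Table name : null" messages come from cursor logging on rawQuery and are not necessarily the failure), deleting by String.valueOf(info.position + 1) is fragile: an autoincrement customer_id only matches "list position + 1" until the first deletion or non-contiguous insert. Keeping the real primary key alongside each displayed row fixes it; a sketch of that pairing in C# for consistency (db.Delete is a hypothetical stand-in for the SQLiteDatabase delete call):

        List<string> customers = new List<string>();  // what the ListView shows
        List<long> customerIds = new List<long>();    // customerIds[i] pairs with customers[i];
                                                      // fill both from the same cursor

        void DeleteAt(int position)
        {
            long id = customerIds[position];          // the actual key, not position + 1
            db.Delete("CustomerInfo", "customer_id = " + id);   // hypothetical helper
        }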

    Read the article

  • PHP & MySQL database storing the name Array problem.

    - by comma
    I'm new to PHP & MySQL and I keep getting the word Array stored in my MySQL tables fields $skill, $experience, $yearsI was wondering how can I fix this problem? And I know I need to add mysqli_real_escape_string which I left out to make code more easier to read Hopefully. Here is the PHP & MySQL code. if (isset($_POST['info_submitted'])) { $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"SELECT learned_skills.*, users_skills.* FROM learned_skills INNER JOIN users_skills ON learned_skills.id = users_skills.skill_id WHERE user_id='$user_id'"); if (!$dbc) { print mysqli_error($mysqli); return; } $user_id = '5'; $skill = $_POST['skill']; $experience = $_POST['experience']; $years = $_POST['years']; if (isset($_POST['skill'][0]) && trim($_POST['skill'][0])!=='') { $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"SELECT learned_skills.*, users_skills.* FROM learned_skills INNER JOIN users_skills ON users_skills.skill_id = learned_skills.id WHERE users_skills.user_id='$user_id'"); if (mysqli_num_rows($dbc) == 0) { for ($s = 0; $s < count($skill); $s++){ for ($x = 0; $x < count($experience); $x++){ for ($g = 0; $g < count($years); $g++){ $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $query1 = mysqli_query($mysqli,"INSERT INTO learned_skills (skill, experience, years) VALUES ('" . $s . "', '" . $x . "', '" . $g . "')"); $id = mysqli_insert_id($mysqli); if ($query1 == TRUE) { $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $query2 = mysqli_query($mysqli,"INSERT INTO users_skills (skill_id, user_id) VALUES ('$id', '$user_id')"); } } } } } } } Here is the XHTML code. <li><label for="skill">Skill: </label><input type="text" name="skill[0]" id="skill[0]" /> <label for="experience">Experience: </label> <?php echo '<select id="experience[0]" name="experience[0]">' . "\n"; foreach($experience_options as $option) { if ($option == $experience) { echo '<option value="' . $option . '" selected="selected">' . $option . '</option>' . "\n"; } else { echo '<option value="'. $option . '">' . $option . '</option>'."\n"; } } echo '</select>'; ?> <label for="years">Years: </label> <?php echo '<select id="years[0]" name="years[0]">' . "\n"; foreach($grade_options as $option) { if ($option == $years) { echo '<option value="' . $option . '" selected="selected">' . $option . '</option>' . "\n"; } else { echo '<option value="'. $option . '">' . $option . '</option>'."\n"; } } echo '</select>'; ?> </li>
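
    Two things line up with the symptom: PHP renders an array variable interpolated into a string as the literal word "Array" (and $_POST['skill'] is an array, since the input is named skill[0]), and the posted INSERT actually concatenates the loop counters $s, $x, $g rather than the values. The three nested loops should also collapse to one loop over a shared index, since the arrays are parallel. The corrected loop shape, sketched in C# for consistency (parameterized commands standing in for mysqli prepared statements; connection is assumed to be open):

        // skill, experience and years are parallel arrays: one row per index.
        for (int i = 0; i < skill.Length; i++)
        {
            using var cmd = new SqlCommand(
                "INSERT INTO learned_skills (skill, experience, years) " +
                "VALUES (@skill, @experience, @years)", connection);
            cmd.Parameters.AddWithValue("@skill", skill[i]);           // the value,
            cmd.Parameters.AddWithValue("@experience", experience[i]); // not the counter
            cmd.Parameters.AddWithValue("@years", years[i]);
            cmd.ExecuteNonQuery();
        }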

    Read the article

  • Weird behavior of fork() and execvp() in C

    - by ron
    After some remarks from my previous post , I made the following modifications : int main() { char errorStr[BUFF3]; while (1) { int i , errorFile; char *line = malloc(BUFFER); char *origLine = line; fgets(line, 128, stdin); // get a line from stdin // get complete diagnostics on the given string lineData info = runDiagnostics(line); char command[20]; sscanf(line, "%20s ", command); line = strchr(line, ' '); // here I remove the command from the line , the command is stored in "commmand" above printf("The Command is: %s\n", command); int currentCount = 0; // number of elements in the line int *argumentsCount = &currentCount; // pointer to that // get the elements separated char** arguments = separateLineGetElements(line,argumentsCount); printf("\nOutput after separating the given line from the user\n"); for (i = 0; i < *argumentsCount; i++) { printf("Argument %i is: %s\n", i, arguments[i]); } // here we call a method that would execute the commands pid_t pid ; if (-1 == (pid = fork())) { sprintf(errorStr,"fork: %s\n",strerror(errno)); write(errorFile,errorStr,strlen(errorStr + 1)); perror("fork"); exit(1); } else if (pid == 0) // fork was successful { printf("\nIn son process\n"); // if (execvp(arguments[0],arguments) < 0) // for the moment I ignore this line if (execvp(command,arguments) < 0) // execute the command { perror("execvp"); printf("ERROR: execvp failed\n"); exit(1); } } else // parent { int status = 0; pid = wait(&status); printf("Process %d returned with status %d.", pid, status); } // print each element of the line for (i = 0; i < *argumentsCount; i++) { printf("Argument %i is: %s\n", i, arguments[i]); } // free all the elements from the memory for (i = 0; i < *argumentsCount; i++) { free(arguments[i]); } free(arguments); free(origLine); } return 0; } When I enter in the Console : ls out.txt I get : The Command is: ls execvp: No such file or directory In son process ERROR: execvp failed Process 4047 returned with status 256.Argument 0 is: > Argument 1 is: out.txt So I guess that the son process is active , but from some reason the execvp fails . Why ? Regards REMARK : The ls command is just an example . I need to make this works with any given command . EDIT 1 : User input : ls > qq.out Program output : The Command is: ls Output after separating the given line from the user Argument 0 is: > Argument 1 is: qq.out In son process >: cannot access qq.out: No such file or directory Process 4885 returned with status 512.Argument 0 is: > Argument 1 is: qq.out
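
    The argument vector explains both symptoms: after line = strchr(line, ' ') the command is stripped, so arguments[0] is ">" instead of the program name, while execvp requires argv[0] to be the program name and the array to be NULL-terminated. Beyond that, ">" is shell syntax, not an argument -- ls never opens qq.out; the redirection has to be set up by the parent (or the forked child, via open + dup2) before exec. The same separation of concerns, expressed with the .NET process API since these examples use C#:

        // C# sketch: the parent (not the child's argv) owns the redirection.
        var psi = new ProcessStartInfo("ls")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true     // stands in for open("qq.out") + dup2
        };
        using var proc = Process.Start(psi);
        string output = proc.StandardOutput.ReadToEnd();
        proc.WaitForExit();
        File.WriteAllText("qq.out", output);  // the ">" part, done explicitly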

    Read the article

  • Not able to show the thumbnails from the array...

    - by sjl
    Hi, i'm trying to do an image gallery. i was not able to display the thumbnails stored in the array. instead it keep showing the same thumbnails. anyone can help? very much appreciated.. // Arrays that store the filenames of the text, images and thumbnails var thumbnails:Array = [ "images/image1_thumb.jpg", "images/image2_thumb.jpg", "images/image3_thumb.jpg", "images/image4_thumb.jpg", "images/image5_thumb.jpg", "images/image6_thumb.jpg", "images/image7_thumb.jpg" ]; var images:Array = [ "images/image1.jpg", "images/image2.jpg", "images/image3.jpg", "images/image4.jpg", "images/image5.jpg", "images/image6.jpg", "images/image7.jpg" ]; var textFiles:Array = [ "text/picture1.txt", "text/picture2.txt", "text/picture3.txt", "text/picture4.txt", "text/picture5.txt", "text/picture6.txt", "text/picture7.txt" ]; // NOTE: thumbnail images are 60 x 43 pixels in size // loop through the array and create the thumbnails and position them vertically for (var i:int = 0; i <7; i++) { // create an instance of MyUIThumbnail var thumbs:MyUIThumbnail = new MyUIThumbnail(); // position the thumbnails on the left vertically thumbs.y = 43 * i;/*** FILL IN THE BLANK HERE ***/ // store the corresponding full size image name in the thumbnail // use the "images" array thumbs.image = "images/image1.jpg"; thumbs.image = "images/image2.jpg"; thumbs.image = "images/image3.jpg"; thumbs.image = "images/image4.jpg"; thumbs.image = "images/image5.jpg"; thumbs.image = "images/image6.jpg"; thumbs.image = "images/image7.jpg"; // store the corresponding text file name in the thumbnail // use the "textFiles" array thumbs.textFile = "text/picture1.txt"; thumbs.textFile = "text/picture2.txt"; thumbs.textFile = "text/picture3.txt"; thumbs.textFile = "text/picture4.txt"; thumbs.textFile = "text/picture5.txt"; thumbs.textFile = "text/picture6.txt"; thumbs.textFile = "text/picture7.txt"; thumbs.imageFullSize = full_image_mc ; // store the reference to the textfield in the thumbnail thumbs.infoText = info; // load and display the thumbnails // use the "thumbnails" array thumbs.loadThumbnail("images/image1_thumb.jpg"); thumbs.loadThumbnail("images/image2_thumb.jpg"); thumbs.loadThumbnail("images/image3_thumb.jpg"); thumbs.loadThumbnail("images/image4_thumb.jpg"); thumbs.loadThumbnail("images/image5_thumb.jpg"); thumbs.loadThumbnail("images/image6_thumb.jpg"); thumbs.loadThumbnail("images/image7_thumb.jpg"); addChild(thumbs); } below is my thumbanail. 
as package{ import fl.containers.; import flash.events.; import flash.text.; import flash.net.; import flash.display.*; public class MyUIThumbnail extends MovieClip { public var image:String = ""; public var textFile:String = ""; var loader:UILoader = new UILoader(); public var imageFullSize:MyFullSizeImage; // reference to loader to show the full size image public var infoText:TextField; // reference to textfield that displays image infoText var txtLoader:URLLoader = new URLLoader(); // loader to load the text file public function MyUIThumbnail() { buttonMode = true; addChild(loader); this.addEventListener(MouseEvent.CLICK, myClick); txtLoader.addEventListener(Event.COMPLETE, displayText); loader.addEventListener(Event.COMPLETE, thumbnailLoaded); } public function loadThumbnail(filename:String):void { loader.source = filename; preloader.trackLoading(loader); } function myClick(e:MouseEvent):void { // load and display the full size image imageFullSize.loadImage("my_full_mc"); // load and display the text textLoad("info"); } function textLoad(file:String) { // load the text description from the text file loader.load(new URLRequest(textFile)); } function displayText(e:Event) { //display the text when loading is completed addChild(infoText); } function thumbnailLoaded(e:Event) { loader.width = 60; loader.height = 43; } } }
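
    The loop body assigns image, textFile and loadThumbnail seven times per thumbnail, each assignment overwriting the last, so every thumbnail ends up showing the same (last) files. The arrays are already index-aligned; the loop counter should select one entry per pass. The intended shape, in C# for consistency with the other examples (Thumbnail stands in for MyUIThumbnail; in AS3 the fix is replacing each run of seven assignments with the single indexed one):

        for (int i = 0; i < 7; i++)
        {
            var thumb = new Thumbnail();     // stands in for MyUIThumbnail
            thumb.Y = 43 * i;
            thumb.Image = images[i];         // one full-size filename per thumbnail
            thumb.TextFile = textFiles[i];   // one text file per thumbnail
            thumb.LoadThumbnail(thumbnails[i]);
            AddChild(thumb);                 // stands in for addChild(thumbs)
        }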

    Read the article

  • Mapping a rect in a small image to a larger image (in order to do a copyPixels operation)

    - by skinnyTOD
    Hi all - this is (I think) a relatively simple math question but I've spent a day banging my head against it and have only the dents and no solution... I'm coding in actionscript 3 - the functionality is: large image loaded at runtime. The bitmapData is stored and a smaller version is created to display on the available screen area (I may end up just scaling the large image since it is in memory anyway). The user can create a rectangle hotspot on the smaller image (the functionality will be more complex: multiple rects with transparency: example a donut shape with hole, etc) 3 When the user clicks on the hotspot, the rect of the hotspot is mapped to the larger image and a new bitmap "callout" is created, using the larger bitmap data. The reason for this is so the "callout" will be better quality than just scaling up the area of the hotspot. The image below shows where I am at so far- the blue rect is the clicked hotspot. In the upper left is the "callout" - copied from the larger image. I have the aspect ratio right but I am not mapping to the larger image correctly. Ugly code below... Sorry this post is so long - I just figured I ought to provide as much info as possible. Thanks for any tips! --trace of my data values *source BitmapDada 1152 864 scaled to rect 800 600 scaled BitmapData 800 600 selection BitmapData 58 56 scaled selection 83 80 ratio 1.44 before (x=544, y=237, w=58, h=56) (x=544, y=237, w=225.04, h=217.28) * Image here: http://i795.photobucket.com/albums/yy237/skinnyTOD/exampleST.jpg public function onExpandCallout(event:MouseEvent):void{ if (maskBitmapData.getPixel32(event.localX, event.localY) != 0){ var maskClone:BitmapData = maskBitmapData.clone(); //amount to scale callout - this will vary/can be changed by user var scale:Number =150 //scale percentage var normalizedScale :Number = scale/=100; var w:Number = maskBitmapData.width*normalizedScale; var h:Number = maskBitmapData.height*normalizedScale; var ratio:Number = (sourceBD.width /targetRect.width); //creat bmpd of the scaled size to copy source into var scaledBitmapData:BitmapData = new BitmapData(maskBitmapData.width * ratio, maskBitmapData.height * ratio, true, 0xFFFFFFFF); trace("source BitmapDada " + sourceBD.width, sourceBD.height); trace("scaled to rect " + targetRect.width, targetRect.height); trace("scaled BitmapData", bkgnImageSprite.width, bkgnImageSprite.height); trace("selection BitmapData", maskBitmapData.width, maskBitmapData.height); trace("scaled selection", scaledBitmapData.width, scaledBitmapData.height); trace("ratio", ratio); var scaledBitmap:Bitmap = new Bitmap(scaledBitmapData); var scaleW:Number = sourceBD.width / scaledBitmapData.width; var scaleH:Number = sourceBD.height / scaledBitmapData.height; var scaleMatrix:Matrix = new Matrix(); scaleMatrix.scale(ratio,ratio); var sRect:Rectangle = maskSprite.getBounds(bkgnImageSprite); var sR:Rectangle = sRect.clone(); var ss:Sprite = new Sprite(); ss.graphics.lineStyle(8, 0x0000FF); //ss.graphics.beginFill(0x000000, 1); ss.graphics.drawRect(sRect.x, sRect.y, sRect.width, sRect.height); //ss.graphics.endFill(); this.addChild(ss); trace("before " + sRect); w = uint(sRect.width * scaleW); h = uint(sRect.height * scaleH); sRect.inflate(maskBitmapData.width * ratio, maskBitmapData.height * ratio); sRect.offset(maskBitmapData.width * ratio, maskBitmapData.height * ratio); trace(sRect); scaledBitmapData.copyPixels(sourceBD, sRect, new Point()); addChild(scaledBitmap); scaledBitmap.x = offsetPt.x; scaledBitmap.y = offsetPt.y; } }
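
    The mapping itself needs no inflate/offset at all: with a uniform scale between the displayed image and the source, the hotspot rect maps to the source by multiplying all four components by ratio (sourceBD.width / targetRect.width, as already computed above). A small C# sketch with System.Drawing types:

        // hotspot is in displayed-image coordinates; the result is in source pixels.
        Rectangle MapToSource(Rectangle hotspot, double ratio)
        {
            return new Rectangle(
                (int)Math.Round(hotspot.X * ratio),
                (int)Math.Round(hotspot.Y * ratio),
                (int)Math.Round(hotspot.Width * ratio),
                (int)Math.Round(hotspot.Height * ratio));
        }
        // copyPixels(sourceBD, MapToSource(sRect, ratio), new Point(0, 0)) then
        // grabs exactly the full-resolution pixels under the hotspot.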

    Read the article

  • What is a good Java data structure for storing nested items (like cities in states)?

    - by anotherAlan
    I'm just getting started in Java and am looking for advice on a good way to store nested sets of data. For example, I'm interested in storing city population data that can be accessed by looking up the city in a given state. (Note: eventually, other data will be stored with each city as well, this is just the first attempt at getting started.) The current approach I'm using is to have a StateList Object which contains a HashMap that stores State Objects via a string key (i.e. HashMap<String, State>). Each State Object contains its own HashMap of City Objects keyed off the city name (i.e. HashMap<String, City>). A cut down version of what I've come up with looks like this: // TestPopulation.java public class TestPopulation { public static void main(String [] args) { // build the stateList Object StateList sl = new StateList(); // get a test state State stateAl = sl.getState("AL"); // make sure it's there. if(stateAl != null) { // add a city stateAl.addCity("Abbeville"); // now grab the city City cityAbbevilleAl = stateAl.getCity("Abbeville"); cityAbbevilleAl.setPopulation(2987); System.out.print("The city has a pop of: "); System.out.println(Integer.toString(cityAbbevilleAl.getPopulation())); } // otherwise, print an error else { System.out.println("That was an invalid state"); } } } // StateList.java import java.util.*; public class StateList { // define hash map to hold the states private HashMap<String, State> theStates = new HashMap<String, State>(); // setup constructor that loads the states public StateList() { String[] stateCodes = {"AL","AK","AZ","AR","CA","CO"}; // etc... for (String s : stateCodes) { State newState = new State(s); theStates.put(s, newState); } } // define method for getting a state public State getState(String stateCode) { if(theStates.containsKey(stateCode)) { return theStates.get(stateCode); } else { return null; } } } // State.java import java.util.*; public class State { // Setup the state code String stateCode; // HashMap for cities HashMap<String, City> cities = new HashMap<String, City>(); // define the constructor public State(String newStateCode) { System.out.println("Creating State: " + newStateCode); stateCode = newStateCode; } // define the method for adding a city public void addCity(String newCityName) { City newCityObj = new City(newCityName); cities.put(newCityName, newCityObj); } // define the method for getting a city public City getCity(String cityName) { if(cities.containsKey(cityName)) { return cities.get(cityName); } else { return null; } } } // City.java public class City { // Define the instance vars String cityName; int cityPop; // setup the constructor public City(String newCityName) { cityName = newCityName; System.out.println("Created City: " + newCityName); } public void setPopulation(int newPop) { cityPop = newPop; } public int getPopulation() { return cityPop; } } This is working for me, but I'm wondering if there are gotchas that I haven't run into, or if there are alternate/better ways to do the same thing. (P.S. I know that I need to add some more error checking in, but right now, I'm focused on trying to figure out a good data structure.) (NOTE: Edited to change setPop() and getPop() to setPopulation() and getPopulation() respectively to avoid confucsion)
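
    The structure is sound; the main simplification available is that State and StateList are thin wrappers around maps, so the same lookup-or-create pattern can be expressed directly on nested maps (computeIfAbsent in Java 8+). The shape, sketched in C# to match the other examples on this page (Dictionary standing in for HashMap):

        var states = new Dictionary<string, Dictionary<string, int>>();

        void AddCity(string state, string city, int population)
        {
            if (!states.TryGetValue(state, out var cities))            // lookup-or-create,
                states[state] = cities = new Dictionary<string, int>(); // like computeIfAbsent
            cities[city] = population;
        }

        int? GetPopulation(string state, string city) =>
            states.TryGetValue(state, out var cities) && cities.TryGetValue(city, out var pop)
                ? pop
                : (int?)null;                                          // null instead of throwing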

    Read the article

  • DataGrid: Binding with two different classes with lists? WPF C#

    - by MyRestlessDream
    It is my first question on StackOverflow so I hope I am doing nothing wrong ! Sorry if it is the case ! I need some help because I can not find the solution of my problem. Of course I have searched everywhere on the web but I can not find it (can not post the links that I am using because of my low reputation :( ). Moreover, I am new in C# and WPF (and self-learning). I used to work in C++/Qt so I do not know how everything works in WPF. And sorry for my English, I am French. My problem My basic classes are that an Employee can use a computer. The id of the computer and the date of use are stored into the class Connection. I would like to display the list information in a DataGrid and in RowDetailsTemplate like here : http://i.stack.imgur.com/Bvn1z.png So it will do a binding to the Employee class but also to the Connection class with only the last value of the property (here the last value of the list "Computer ID" and the last value of the list "Connection Date" on this last computer). So it is a loop in the different lists. How can I do it ? Is it too much to do ? :( I succeed to get the Employee informations but I do not know how to bind the list of computer. When I am trying, it shows me "(Collection)" so it does not go inside the list :( Summary of Questions How to display/bind a value from a list AND from a different class in a DataGrid ? How to display all the values of a list into the RowDetailsTemplate ? Under Windows 7 and Visual Studio 2010 Pro version. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ EDIT ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Solution I have used the solution of Stefan Denchev. Here the modification of my class : http://i.stack.imgur.com/Ijx5i.png And the code used: <DataGrid ItemsSource="{Binding}" Name="table"> <DataGrid.Columns> <DataGridTextColumn Header="First Name" Binding="{Binding FirstName}"/> <DataGridTextColumn Header="Last Name" Binding="{Binding LastName}"/> <DataGridTextColumn Header="Gender" Binding="{Binding Gender}"/> <DataGridTextColumn Header="Last computer used" Binding="{Binding LastComputerID}"/> <DataGridTextColumn Header="Last connection date" Binding="{Binding LastDate}"/> </DataGrid.Columns> <DataGrid.RowDetailsTemplate> <DataTemplate> <DataGrid ItemsSource="{Binding ListOfConnection}"> <DataGrid.Columns> <DataGridTextColumn Header="Computer ID" Binding="{Binding ComputerID}"/> <DataGridTemplateColumn> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <ListView ItemsSource="{Binding ListOfDate}"/> </DataTemplate> </DataGridTemplateColumn.CellTemplate> </DataGridTemplateColumn> </DataGrid.Columns> </DataGrid> </DataTemplate> </DataGrid.RowDetailsTemplate> </DataGrid> With in code behind : List<Employee> allEmployees = WorkflowMgr.Instance.AllEmployees; table.DataContext = allEmployees; And it works ! I have tryed to improve my fake example :) Hope it will help to another developer !

    Read the article

  • map with string is broken? [solved]

    - by teritriano
    Yes. I can't see what I'm doing wrong; the map is string, int. Here is the method: bool bange::function::Add(lua_State *vm){ //userdata, function if (!lua_isfunction(vm, 2)){ cout << "bange: AddFunction: First argument isn't a function." << endl; return false;} void *pfunction = const_cast<void *>(lua_topointer(vm, 2)); char key[32] = {0}; snprintf(key, 32, "%p", pfunction); cout << "Key: " << key << endl; string strkey = key; if (this->functions.find(strkey) != this->functions.end()){ luaL_unref(vm, LUA_REGISTRYINDEX, this->functions[strkey]);} this->functions[strkey] = luaL_ref(vm, LUA_REGISTRYINDEX); return true;} Ok, when the code is executed... Program received signal SIGSEGV, Segmentation fault. 0x00007ffff6e6caa9 in std::basic_string<char, std::char_traits<char>, std::allocator<char> > ::compare(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const () from /usr/lib/libstdc++.so.6 Seriously, what's wrong with my code? Thanks for the help. Edit 1: Ok, I've applied the solution and it still fails. I've tried directly inserting a string but it gives the same error. Let's see: the object is a bange::scene inherited from bange::function. I create the object with lua_newuserdata: bange::scene *scene = static_cast<bange::scene *>(lua_newuserdata(vm, sizeof(bange::scene))); (...) scene = new (scene) bange::scene(width, height, nlayers, vm); I need this for Lua garbage collection. Now the access to bange::function::Add from Lua: static int bangefunction_Add(lua_State *vm){ //userdata, function bange::function *function = reinterpret_cast<bange::function *>(lua_touserdata(vm, 1)); cout << "object with bange::function: " << function << endl; bool added = function->bange::function::Add(vm); lua_pushboolean(vm, static_cast<int>(added)); return 1; } The userdata is the bange::scene stored in Lua. Knowing that the userdata is the scene, the object's address is in fact the same as when I created the scene earlier. I need the reinterpret_cast, and then I call the method. The pointer "this" still has the same address inside the method. solved I did a small test in the bange::function constructor which works without problems. bange::function::function(){ string test("test"); this->functions["test"] = 2; } I finally noticed that the problem is bange::function *function = reinterpret_cast<bange::function *>(lua_touserdata(vm, 1)); because the object is a bange::scene and not a bange::function (I admit it, a pointer corruption) and this seems more like a code design issue. So this, in a way, is solved. Thanks everybody.

    Read the article

  • bulls and cows game -- programming algorithm (Python)

    - by IcyFlame
    This is a simulation of the game Cows and Bulls with three-digit numbers. I am trying to get the number of cows and bulls between two numbers, one of which is generated by the computer and the other guessed by the user. I have parsed the two numbers so that I now have two lists with three elements each, where each element is one of the digits in the number. So: 237 will give the list [2,3,7]. And I make sure that the relative indices are maintained; the general pattern is: (hundreds, tens, units). These two lists are stored in the two lists machine and person. ALGORITHM 1 So, I wrote the following code, the most intuitive algorithm: cows and bulls are initialized to 0 before the start of this loop. for x in person: if x in machine: if machine.index(x) == person.index(x): bulls += 1 print x,' in correct place' else: print x,' in wrong place' cows += 1 And I started testing this with different types of numbers guessed by the computer. Quite randomly, I decided on 277. And I guessed 447. Here, I got the first clue that this algorithm may not work. I got 1 cow and 0 bulls, whereas I should have got 1 bull and 1 cow. This is a table of outputs with the first algorithm: Guess Output Expected Output 447 0 bulls, 1 cow 1 bull, 1 cow 477 2 bulls, 0 cows 2 bulls, 0 cows 777 0 bulls, 3 cows 2 bulls, 0 cows So obviously this algorithm was not working when there are repeated digits in the number randomly selected by the computer. I tried to understand why these errors take place, but I could not. I have tried a lot but I just could not see any mistake in the algorithm (probably because I wrote it!) ALGORITHM 2 On thinking about this for a few days I tried this: cows and bulls are initialized to 0 before the start of this loop. for x in range(3): for y in range(3): if x == y and machine[x] == person[y]: bulls += 1 if not (x == y) and machine[x] == person[y]: cows += 1 I was more hopeful about this one. But when I tested it, this is what I got: Guess Output Expected Output 447 1 bull, 1 cow 1 bull, 1 cow 477 2 bulls, 2 cows 2 bulls, 0 cows 777 2 bulls, 4 cows 2 bulls, 0 cows The mistake I am making is quite clear here; I understood that the digits were being counted again and again, i.e.: 277 versus 477. When you count for bulls then the 2 bulls come up and that's all right. But when you count for cows: the 7 in 277 at units place is matched with the 7 in 477 in tens place and thus a cow is generated; the 7 in 277 at tens place is matched with the 7 in 477 in units place and thus a cow is generated. Here the matching is exactly what I wrote the code to do, but this is not what I want. And I have no idea whatsoever on what to do after this. Furthermore... I would like to stress that both algorithms work perfectly if there are no repeated digits in the number selected by the computer. Please help me with this issue. P.S.: I have been thinking about this for over a week, but I could not post a question earlier as my account was blocked (from asking questions) because I asked a foolish question, and did not delete it even though I got 2 downvotes immediately after posting it.
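    For reference, the usual way to score a guess so that repeated digits are never double-counted is to count bulls first and then match the leftover digits by frequency. The sketch below (in C#, the dominant language on this page, purely as an illustration and not taken from the question) implements the conventional Bulls and Cows rule, under which each secret digit can be matched at most once; note that under this rule 277 vs. 447 scores 1 bull, 0 cows, which differs from the expected-output table above.

    using System;

    static class Scorer
    {
        // Bulls are exact positional matches. Cows are counted by comparing the
        // frequency of each leftover digit on both sides, so a digit repeated in
        // the guess can never be matched more often than it occurs in the secret.
        public static Tuple<int, int> Score(int[] machine, int[] person)
        {
            int bulls = 0;
            int[] machineLeft = new int[10]; // frequencies of unmatched secret digits
            int[] personLeft = new int[10];  // frequencies of unmatched guess digits

            for (int i = 0; i < machine.Length; i++)
            {
                if (machine[i] == person[i])
                {
                    bulls++;
                }
                else
                {
                    machineLeft[machine[i]]++;
                    personLeft[person[i]]++;
                }
            }

            int cows = 0;
            for (int digit = 0; digit < 10; digit++)
            {
                cows += Math.Min(machineLeft[digit], personLeft[digit]);
            }
            return Tuple.Create(bulls, cows);
        }

        static void Main()
        {
            // 277 vs 477 -> 2 bulls, 0 cows; 277 vs 777 -> 2 bulls, 0 cows
            Tuple<int, int> result = Score(new[] { 2, 7, 7 }, new[] { 4, 7, 7 });
            Console.WriteLine("{0} bull(s), {1} cow(s)", result.Item1, result.Item2);
        }
    }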

    Read the article

  • Incorrect data when passing a pointer to a list of pointers to a function (C++)

    - by Phil Elm
    I'm writing code for combining data received over multiple sources. When the objects (I'll call them MyPacket for now) are received, they are stored in a standard list. However, whenever I reference the payload size of a partial MyPacket, the value shows up as 1 instead of the intended size. Here's the function code: MyPacket* CombinePackets(std::list<MyPacket*>* packets, uint8* current_packet){ uint32 total_payload_size = 0; if(packets->size() <= 0) return NULL; //For now. std::list<MyPacket*>::iterator it = packets->begin(); //Some minor code here, not relevant to the problem. for(uint8 index = 0; index < packets->size(); index++){ //(*it)->GetPayloadSize() returns 1 when it should show 1024. I've tried directly accessing the variable and more, but I just can't get it to work. total_payload_size += (*it)->GetPayloadSize(); cout << "Adding to total payload size value: " << (*it)->GetPayloadSize() << endl; std::advance(it,1); } MyPacket* packet = new MyPacket(); //byte is just a typedef'd unsigned char. packet->payload = (byte*) calloc(total_payload_size, sizeof(byte)); packet->payload_size = total_payload_size; it = packets->begin(); //Go back to the beginning again. uint32 big_payload_index = 0; for(uint8 index = 0; index < packets->size(); index++){ if(current_packet != NULL) *current_packet = index; for(uint32 payload_index = 0; payload_index < (*it)->GetPayloadSize(); payload_index++){ packet->payload[big_payload_index] = (*it)->payload[payload_index]; big_payload_index++; } std::advance(it,1); } return packet; } //Calling code std::list<MyPacket*> received = std::list<MyPacket*>(); //The code that fills it is here. std::list<MyPacket*>::iterator it = received.begin(); cout << (*it)->GetPayloadSize() << endl; // Outputs 1024 correctly! MyPacket* final = CombinePackets(&received,NULL); cout << final->GetPayloadSize() << endl; //Outputs 181, which happens to be the number of elements in the received list. So, as you can see above, when I reference (*it)->GetPayloadSize(), it returns 1 instead of the intended 1024. Can anyone see the problem, and if so, do you have an idea of how to fix it? I've spent 4 hours searching and trying new solutions, but they all keep returning 1... EDIT:

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and now moving into the limelight with the pending release of ASP.NET Web API, which is bringing a ton of attention to server-side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you to dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes I know C# is strongly typed, so why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response, and rather than having to map the entire type structure of what that service returns, JsonValue actually allows me to cherry-pick and only work with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish, using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code. Creating JSON on the fly with JsonValue Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:[TestMethod] public void JsonValueOutputTest() { // strong type instance var jsonObject = new JsonObject(); // dynamic expando instance you can add properties to dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1977; album.Songs = new JsonArray() as dynamic; dynamic song = new JsonObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JsonObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces proper JSON just as you would expect: {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]} The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. I am essentially creating this value structure on the fly by adding properties and then serializing it to JSON. This means this code can be entirely driven at runtime without compile-time constraints on the structure of the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, in JsonValue/JsonObject these values are stored in pseudo-collections of key/value pairs that are exposed as properties through the DynamicObject functionality in .NET.
The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that the syntax looks like normal object access syntax. Always remember though that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JsonObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JsonValue internally stores property keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to, to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before, JsonObject/Value are very similar in the behavior of the structure. Reading JSON strings into JsonValue The JsonValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JsonValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:[TestMethod] public void JsonValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JsonValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties, which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the right type expected as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method doesn't, as it expects two object instances, and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String), which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1977, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""67280fb8"", ""AlbumName"": ""Echoes, Silence, Patience & Grace"", ""Artist"": ""Foo Fighters"", ""YearReleased"": 2007, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"", ""Songs"": [ { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; dynamic albums = JsonValue.Parse(jsonString); foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName ); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like if you were using JavaScript to parse through this data, doesn't it? And that's the point… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, JSON
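As a quick illustration of the earlier point that JsonObject stores its properties as an enumerable key/value collection, here is a minimal, hedged sketch (my own example, not from the original post; it assumes a reference to System.Json.dll):

    using System;
    using System.Collections.Generic;
    using System.Json;

    class IterationDemo
    {
        static void Main()
        {
            var jsonObject = new JsonObject();
            dynamic album = jsonObject;
            album.AlbumName = "Dirty Deeds Done Dirt Cheap";
            album.YearReleased = 1977;

            // JsonObject implements IDictionary<string, JsonValue>, so the
            // dynamically assigned properties can be enumerated as pairs.
            foreach (KeyValuePair<string, JsonValue> pair in jsonObject)
            {
                Console.WriteLine("{0} = {1}", pair.Key, pair.Value);
            }
        }
    }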

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter area region. The filter region is enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting and an icon shows the sort direction. Strongly Typed View Models The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view. The first of these is the ProductViewModel. This class will be used to hold information about a Product. We use attributes to specify if the property should be hidden and what its heading in the table should be. This metadata will be used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)]) but are needed because we will be using those for filtering when writing our LINQ query. The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. MvcContrib expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library. It also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on.
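The original listing of the ProductViewModel appears to have been an image, so here is a hedged sketch of what it could look like, inferred from the attributes and filters the post describes; the UnitPrice property and the exact display names are my own illustrative assumptions.

    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    public class ProductViewModel
    {
        [ScaffoldColumn(false)]
        public int ProductID { get; set; }

        [DisplayName("Product Name")]
        public string ProductName { get; set; }

        [DisplayName("Unit Price")]
        public decimal? UnitPrice { get; set; }

        // Hidden from the grid but required for filtering in the LINQ query.
        [ScaffoldColumn(false)]
        public int? SupplierID { get; set; }

        [ScaffoldColumn(false)]
        public int? CategoryID { get; set; }
    }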
The ProductFilterViewModel also holds the state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in ViewState when working with WebForms). With MVC there is no state storage and so all state has to be fetched and passed back to the view. The GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction. The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier. <%Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> <% Html.RenderPartial("SearchResults", Model); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> The View contains a partial view "SearchFilters" and passes it the ProductFilterViewModel. The SearchFilters view uses this Model to render all the search lists and the textbox. The partial view "Pager" uses the ProductPagedList, which implements the interface IPagination. The "Pager" view contains the MvcContrib Pager helper used to render the paging information. This view is repeated twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across Views. The partial view "SearchResults" uses the ProductListContainerViewModel. This partial view contains the MvcContrib Grid, which needs both the ProductPagedList and GridSortOptions to render itself. The Controller Action An example request looks like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method. /// <summary> /// This method takes in a filter list, paging/sort options and applies /// them to an IQueryable of type ProductViewModel /// </summary> /// <returns> /// The return object is a container that holds the sorted/paged list, /// state for the filters and state about the current sorted column /// </returns> public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) {   var productList = productRepository.GetProductsProjected();   // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; }   // Filter on SupplierID if (supplierID.HasValue) { productList = productList.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); }   // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); }   // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill();   // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ??
1, 10);     var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions };   return View(productListContainer); } The supplier, category and product name filters are applied to this IQueryable if any are present in the request. The ProductPagedList class is created by applying a sort order and calling the AsPagination method. Finally the ProductListContainerViewModel class is created and returned to the view. You have seen how to use the MvcContrib Grid and Pager to render a clean, lightweight UI with strongly typed views. You also saw how to use partial views to get data from the strongly typed model passed to them from the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don't forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip My name is Kobayashi. I work for Keyser Soze.

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting it in this blog post. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say shrinking a database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same .LDF extension. The questions now are, "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and, "Why is the .MDF file called a primary data file?" Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with an .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "Data about Data". Examples of Meta Data include system objects that store information about other objects, as opposed to the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores information about the users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by specifying that you want to create the object under the Primary File Group.
Any additional data file that you add to the database will have only transactional data but no Meta Data; that is why it is called a Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and back up only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-time scenario where we use files could be this one: Let's say you have created a database called MYDB on the D-Drive, which has 50 GB of space. You also have 1 Database File (.MDF) and 1 Log File on the D-Drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files, unless specified otherwise. Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from database files and releasing the empty space either to the Operating System or to SQL Server. Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has free space of about 20 GB, which means 30 GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then, MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, because of two reasons: There is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on Google first. The reasons for doing so have many advantages: 1. If someone else has already had this question before, the chances that it is already answered are more than 50%. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager Console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both will work the same way. One advantage of using this command from Query Analyzer is that your console won't be frozen; you can perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: An .NDF file is a secondary data file. You never heard of it because when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store data which is saved by the users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
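To connect this back to the ADO.NET style used elsewhere on this page, here is a minimal, hedged sketch of running the shrink command from C# and reading the UsedPages/EstimatedPages columns that question (1) asks about; the connection string is a placeholder, and the result-set column names follow Books Online.

    using System;
    using System.Data.SqlClient;

    class ShrinkDemo
    {
        static void Main()
        {
            // Placeholder connection string - adjust for your own server.
            string connectionString = @"Server=(local);Database=Northwind;Integrated Security=true";

            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // DBCC SHRINKDATABASE (Northwind, 10) shrinks the database leaving
                // 10 percent free space and returns one result row per shrunk file.
                // Remember the caveats above: it is a logged operation, it needs
                // empty space to remove, and it tends to fragment indexes.
                using (SqlCommand command = new SqlCommand(
                    "DBCC SHRINKDATABASE (Northwind, 10)", connection))
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("FileId={0}, UsedPages={1}, EstimatedPages={2}",
                            reader["FileId"], reader["UsedPages"], reader["EstimatedPages"]);
                    }
                }
            }
        }
    }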

    Read the article

  • Run Windows in Ubuntu with VMware Player

    - by Matthew Guay
    Are you an enthusiast who loves their Ubuntu Linux experience but still needs to use Windows programs? Here's how you can get the full Windows experience on Ubuntu with the free VMware Player. Linux has become increasingly consumer friendly, but still, the vast majority of commercial software is only available for Windows and Macs. Dual-booting between Windows and Linux has been a popular option for years, but this is a frustrating solution since you have to reboot into the other operating system each time you want to run a specific application. With virtualization, you'll never have to make this tradeoff. VMware Player makes it quick and easy to install any edition of Windows in a virtual machine. With VMware's great integration tools, you can copy and paste between your Linux and Windows programs and even run native Windows applications side-by-side with Linux ones. Getting Started Download the latest version of VMware Player for Linux, and select either the 32-bit or 64-bit version, depending on your system. VMware Player is a free download, but requires registration. Sign in with your VMware account, or create a new one if you don't already have one. VMware Player is fairly easy to install on Linux, but you will need to start the installation from the terminal. First, enter the following to make sure the installer is marked as executable, substituting version/build_number for the version number on the end of the file you downloaded. chmod +x ./VMware-Player-version/build_number.bundle Then, enter the following to start the install, again substituting your version number: gksudo bash ./VMware-Player-version/build_number.bundle You may have to enter your administrator password to start the installation, and then the VMware Player graphical installer will open. Choose whether you want to check for product updates and submit usage data to VMware, and then proceed with the install as normal. VMware Player installed in only a few minutes in our tests, and was immediately ready to run, no reboot required. You can now launch it from your Ubuntu menu: click Applications \ System Tools \ VMware Player. You'll need to accept the license agreement the first time you run it. Welcome to VMware Player! Now you can create new virtual machines and run pre-built ones on your Ubuntu desktop. Install Windows in VMware Player on Ubuntu Now that you've got VMware set up, it's time to put it to work. Click Create a New Virtual Machine as above to start making a Windows virtual machine. In the dialog that opens, select the installer disk or ISO image file that you want to install Windows from. In this example, we're selecting a Windows 7 ISO. VMware will automatically detect the operating system on the disk or image. Click Next to continue. Enter your Windows product key, select the edition of Windows to install, and enter your name and password. You can leave the product key field blank and enter it later. VMware will ask if you want to continue without a product key, so just click Yes to continue. Now enter a name for your virtual machine and select where you want to save it. Note: this will take up at least 15 GB of space on your hard drive during the install, so make sure to save it on a drive with sufficient storage space. You can choose how large you want your virtual hard drive to be; the default is 40 GB, but you can choose a different size if you wish.
The entire amount will not be used up on your hard drive initially, but the virtual drive will increase in size up to your maximum as you add files. Additionally, you can choose if you want the virtual disk stored as a single file or as multiple files. You will see the best performance by keeping the virtual disk as one file, but the virtual machine will be more portable if it is broken into smaller files, so choose the option that will work best for your needs. Finally, review your settings, and if everything looks good, click Finish to create the virtual machine. VMware will take over now, and install Windows without any further input using its Easy Install. This is one of VMware's best features, and is the main reason we find it the easiest desktop virtualization solution to use. Installing VMware Tools VMware Player doesn't include the VMware Tools by default; instead, it automatically downloads them for the operating system you're installing. Once you've downloaded them, it will use those tools anytime you install that OS. If this is your first Windows virtual machine to install, you may be prompted to download and install them while Windows is installing. Click Download and Install so your Easy Install will finish successfully. VMware will then download and install the tools. You may need to enter your administrative password to complete the install. Other than this, you can leave your Windows install unattended; VMware will get everything installed and running on its own. Our test setup took about 30 minutes, and when it was done we were greeted with the Windows desktop ready to use, complete with drivers and the VMware tools. The only thing missing was the Aero glass feature. VMware Player is supposed to support the Aero glass effects in virtual machines, and although this works every time when we use VMware Player on Windows, we could not get it to work in Linux. Other than that, Windows is fully ready to use. You can copy and paste text, images, or files between Ubuntu and Windows, or simply drag-and-drop files between the two. Unity Mode Using Windows in a window is awkward, and makes your Windows programs feel out of place and hard to use. This is where Unity mode comes in. Click Virtual Machine in VMware's menu, and select Enter Unity. Your Windows desktop will now disappear, and you'll see a new Windows menu underneath your Ubuntu menu. This works the same as your Windows Start Menu, and you can open your Windows applications and files directly from it. By default, programs from Windows will have a colored border and a VMware badge in the corner. You can turn this off from the VMware settings pane. Click Virtual Machine in VMware's menu and select Virtual Machine Settings. Select Unity under the Options tab, and uncheck the Show borders and Show badges boxes if you don't want them. Unity makes your Windows programs feel at home in Ubuntu. Here we have Word 2010 and IE8 open beside the Ubuntu Help application. Notice that the Windows applications show up in the taskbar on the bottom just like the Linux programs. If you're using the Compiz graphics effects in Ubuntu, your Windows programs will use them too, including the popular wobbly windows effect. You can switch back to running Windows inside VMware Player's window by clicking the Exit Unity button in the VMware window. Now, whenever you want to run Windows applications in Linux, you can quickly launch them from VMware Player. Conclusion VMware Player is a great way to run Windows on your Linux computer.
 It makes it extremely easy to get Windows installed and running, and lets you run your Windows programs seamlessly alongside your Linux ones. VMware products work great in our experience, and VMware Player on Linux was no exception. If you're a Windows user and you'd like to run Ubuntu on Windows, check out our article on how to Run Ubuntu in Windows with VMware Player. Link Download VMware Player 3 (Registration required) Download Windows 7 Enterprise 90-day trial

    Read the article

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    On the Windows Azure platform there are three storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released and that it supports a new type of storage, Drive, which allows us to operate on NTFS files in the cloud. I will cover it in the coming few posts, but now I would like to talk a bit about Table Storage.   Concept of Table Storage Service The most common development scenario is to retrieve, create, update and remove data from the data storage; normally we communicate with a database. When we attempt to move our application over to the cloud, the most common requirement is to have a storage service. Windows Azure provides a built-in service that allows us to store structured data, which is called the Windows Azure Table Storage Service. The data stored in the table service are like a collection of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key, and the timestamp as the identifier for solving the concurrency problem. Unlike a table in a database, the table service does not enforce a schema for tables, which means you can have 2 entities in the same table with different property sets. The partition key is used for the load balancing of the Azure OS and for group entity transactions. As you know, in the cloud you will never know which machine is hosting your application and your data. They could be moved based on the transaction weight and the number of requests. If the Azure OS finds that there are many requests connecting to your Book entities with the partition key equal to "Novel", it will move them to another idle machine to increase performance. So when choosing the partition key for your entities you need to make sure it indicates the category or group information, so that the Azure OS can perform the load balancing as you wish.   Consuming the Table Although the table service looks like a database, you cannot access it the way you are used to, through either ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Service protocol, which allows you to consume it in a RESTful style via Http requests. The Azure SDK provides a set of classes for us to connect to it. There are 2 classes we might need: TableServiceContext and TableServiceEntity. The TableServiceContext is inherited from DataServiceContext, which represents the runtime context of the ADO.NET Data Service. It provides 4 methods mainly used by us: CreateQuery: creates an IQueryable instance for a given type of entity. AddObject: adds the specified entity into the Table Service. UpdateObject: updates an existing entity in the Table Service. DeleteObject: deletes an entity from the Table Service. Before you operate on the table service you need to provide valid account information. It's something like the connection string of a database, but with the account name and account key you received when you created the storage service on the Windows Azure Development Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist: it will create the table container for you if it does not exist.
And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we always like code rather than sentences.   Straightforward Access to the Table Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it should allow the client to create an account entity on the table service. The WCF service will have a method named Register that accepts an instance of the account which the client wants to create. After performing some validation it will add the entity into the table service. So the first thing I should do is create a Cloud Application in my Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) The solution should look like the one below. Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double-click the Services file under the Roles folder and navigate to the Settings section.) This setting will be used to retrieve my storage account information. Since for now I am just in the development phase, I will select "UseDevelopmentStorage=true". And then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you will know that this file defines the process when the application starts and terminates on the cloud. What I need to do when the application starts is set the configuration publisher to load my config file with the config name I specified. So the code would be like the listing below. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class - AccountService. And I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment, if I press F5 the application will be established on my local development fabric and I can see my service runs well through the browser. Let's implement the service method Register to add a new entity to the table service. As I said before, the entities you want to store in the table service must have 3 properties: partition key, row key and timestamp. You can create a class with these 3 properties. The Azure SDK provides us a base class for that, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simpler: create a class named Account and let it derive from TableServiceEntity. And I need to add my own properties: Email, Password, DateCreated and DateDeleted. The DateDeleted is a nullable date time value to indicate whether this entity has been deleted and when. Do you notice that I missed something here? Yes, it's the partition key and row key I didn't assign. The TableServiceEntity base class defines 2 constructors: one is a parameter-less constructor which will be used to fill values into the properties from the table service when retrieving data; the other has 2 parameters: partition key and row key. As I said before, the partition key may affect the load balancing and the row key must be unique, so here I would like to use the email as the partition key and the email plus a Guid as the row key. OK, now we have finished the entity class we need to store in the table service. The next step is to create a data access class that lets us add it. The Azure SDK gives us a base class for it, named TableServiceContext, as I mentioned before. So let's create a class for operating on the Account entities.
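Since the original post shows its code only as screenshots, here is a hedged sketch of the Account entity just described, together with the AccountDataContext that the next paragraphs walk through (the table name, helper names and constructor details are inferred from the text, so treat them as assumptions):

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // The entity: partition key = email, row key = email plus a Guid, as described.
    public class Account : TableServiceEntity
    {
        public string Email { get; set; }
        public string Password { get; set; }
        public DateTime DateCreated { get; set; }
        public DateTime? DateDeleted { get; set; }

        // Parameter-less constructor used by the storage client when it
        // materializes entities retrieved from the table service.
        public Account()
        {
        }

        public Account(string email, string password)
            : base(email, string.Format("{0}_{1}", email, Guid.NewGuid()))
        {
            Email = email;
            Password = password;
            DateCreated = DateTime.UtcNow;
            DateDeleted = null;
        }
    }

    // The data access class: ensures the table container exists, then exposes Add.
    public class AccountDataContext : TableServiceContext
    {
        private const string TableName = "Account"; // same name as the entity class

        public AccountDataContext(CloudStorageAccount storageAccount)
            : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
        {
            // Make sure the table container exists before any operation.
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            tableClient.CreateTableIfNotExist(TableName);
        }

        public void Add(Account account)
        {
            AddObject(TableName, account);
            SaveChanges();
        }
    }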
The TableServiceContext needs the storage account information for its constructor: the combination of the storage service URI that we create on the Windows Azure platform and the relevant account name and key. The TableServiceContext will use this information to find the related address and verify the account before operating on the storage entities. Hence in my AccountDataContext class I need to provide this constructor and pass the storage account into it. All entities will be saved in the table storage in one or many tables, which we call "table containers". Before we operate on an entity we need to make sure that the table container has been created in the storage. There's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So I will call it first in the constructor, to make sure all methods are invoked after the table has been created. Notice that I passed the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when you create the table. It will increase the performance when you operate on it over the cloud, especially when querying. Since the Register WCF method will add a new account into the table service, here I will create a corresponding method to add the account entity. Before implementing it, I should add a reference - System.Data.Services.Client - to the project. This reference provides some common methods within the ADO.NET Data Service which can be used with the Windows Azure Table Service. I will use its AddObject method to create my account entity. Since the table service does not fully implement the ADO.NET Data Service protocol, there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks, etc. Then I implemented the service method to add the account entity through the AccountDataContext. You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext. If it's the first time this method is invoked, the constructor of the AccountDataContext will create the table container for me. Then I use the Add method to add the account entity into the table. Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata information of the WCF service cannot be retrieved if it's deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need to get its metadata we can deploy it on the local development service and then change the endpoint to the address on the cloud. In the client-side app.config file I specified the endpoint as the local development fabric address. Then I just implemented the client to let me input an email and a password and invoke the WCF service to add my account. Let's run my application and see the result. Of course it should return TRUE to me. And in the local SQL Express I can see the data has been saved in the table.   Summary In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to and consume it through the ADO.NET Data Service managed library provided within the Azure SDK. I only show how to create an entity in the storage service.
In the next post I would like to explain how to query the entities with conditions through LINQ. I also would like to refactor my AccountDataContext class to make it dynamic for any kind of entity.   Hope this helps, Shaun   All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article
