Search Results

Search found 12224 results on 489 pages for 'map editor'.


  • OpenGL ES - texture map all faces of an 8 vertex cube?

    - by Feet
    Working through some OpenGL-ES tutorials, using the Android emulator. I've gotten up to texture mapping and am having some trouble mapping to a cube. Is it possible to map a texture to all faces of a cube that has 8 vertices and 12 triangles for the 6 faces, as described below?

        // Use half as we are going for a 0,0,0 centre.
        width /= 2;
        height /= 2;
        depth /= 2;

        float vertices[] = {
            -width, -height,  depth, // 0
             width, -height,  depth, // 1
             width,  height,  depth, // 2
            -width,  height,  depth, // 3
            -width, -height, -depth, // 4
             width, -height, -depth, // 5
             width,  height, -depth, // 6
            -width,  height, -depth, // 7
        };

        short indices[] = {
            // Front
            0, 1, 2,   0, 2, 3,
            // Back
            5, 4, 7,   5, 7, 6,
            // Left
            4, 0, 3,   4, 3, 7,
            // Right
            1, 5, 6,   1, 6, 2,
            // Top
            3, 2, 6,   3, 6, 7,
            // Bottom
            4, 5, 1,   4, 1, 0,
        };

        float texCoords[] = {
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
        };

    I have gotten the front and back faces working correctly; however, none of the other faces are showing the texture.
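    For what it's worth, a common resolution (my sketch, not part of the question): with only 8 shared vertices, each vertex can carry a single UV pair, so adjacent faces are forced to share texture coordinates. Duplicating vertices per face (24 vertices, 4 per face) lets every face own a full 0..1 quad. Two faces are shown below; the other four follow the same pattern:

        // Sketch: per-face vertices so each face gets its own texture coordinates.
        float vertices[] = {
            // Front face (z = +depth)
            -width, -height,  depth,   // 0
             width, -height,  depth,   // 1
             width,  height,  depth,   // 2
            -width,  height,  depth,   // 3
            // Right face (x = +width)
             width, -height,  depth,   // 4
             width, -height, -depth,   // 5
             width,  height, -depth,   // 6
             width,  height,  depth,   // 7
            // ... four more faces, 4 vertices each ...
        };
        short indices[] = {
            0, 1, 2,   0, 2, 3,   // front
            4, 5, 6,   4, 6, 7,   // right
            // ...
        };
        float texCoords[] = {
            // One full quad per face, matching the vertex order above.
            0.0f, 1.0f,   1.0f, 1.0f,   1.0f, 0.0f,   0.0f, 0.0f,
            0.0f, 1.0f,   1.0f, 1.0f,   1.0f, 0.0f,   0.0f, 0.0f,
            // ...
        };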


  • How can I map stored procedure result into a custom class with linq-to-sql?

    - by Remnant
    I have a stored procedure that returns a result set (4 columns x n rows). The data is based on multiple tables within my database and provides a summary for each department within a corporate. Here is a sample:

        usp_GetDepartmentSummary

        DeptName  EmployeeCount  Male  Female
        HR        12             5     7
        etc...

    I am using linq-to-sql to retrieve data from my database (nb - have to use a sproc as it is something I have inherited). I would like to call the above sproc and map into a department class:

        public class Department
        {
            public string DeptName { get; set; }
            public int EmployeeCount { get; set; }
            public int MaleCount { get; set; }
            public int FemaleCount { get; set; }
        }

    In VS2008, I can drag and drop my sproc onto the methods pane of the linq-to-sql designer. When I examine the designer.cs, the return type for this sproc is defined as:

        ISingleResult<usp_GetDepartmentSummaryResult>

    What I would like to do is amend this somehow so that it returns a Department type, so that I can pass the results of the sproc to a strongly typed view:

        <% foreach (var dept in Model) { %>
        <ul>
            <li class="deptname"><%= dept.DeptName %></li>
            <li class="deptname"><%= dept.EmployeeCount %></li>
        etc...

    Any ideas how to achieve this? NB - I have tried amending the designer.cs and dbml xml file directly, but with limited success. I admit to being a little out of my depth when it comes to updating those files directly, and I am not sure it is best practice. Would be good to get some direction. Thanks much
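    One common approach, sketched here (the DataContext name and repository shape are assumptions): leave the generated ISingleResult<usp_GetDepartmentSummaryResult> alone and project it into Department with LINQ, instead of editing designer.cs by hand:

        // Sketch: project the sproc's generated result type into Department.
        public IList<Department> GetDepartmentSummary()
        {
            using (var db = new CorporateDataContext()) // hypothetical DataContext name
            {
                return db.usp_GetDepartmentSummary()
                         .Select(r => new Department
                         {
                             DeptName = r.DeptName,
                             EmployeeCount = r.EmployeeCount,
                             MaleCount = r.Male,
                             FemaleCount = r.Female,
                         })
                         .ToList();
            }
        }

    The view then takes IList<Department> as its model, and the designer-generated code never needs to be touched.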


  • In Fluent NHibernate, how would I map the following domain models?

    - by Brandon
    I have a user class that looks something like this:

        public class User
        {
            public virtual int Id { get; set; }
            public virtual long ValueA { get; set; }
            public virtual int? ValueB { get; set; }
        }

    ValueA is automatically assigned by the system. It is used in a lookup that would map to UserClass. However, if a value for ValueB exists, then it would do the lookup for UserClass in a different way. Right now the way I handle it is to get the User and then perform a separate lookup each time:

        return user.ValueB.HasValue ? Find(user.ValueB.Value) : Find(user.ValueA);

    Is there any way to make Fluent NHibernate do this for me so I can have UserClass as a property on the User class instead of having to do the lookup separately? I was thinking of the ComponentMap but I'm not sure how to make it account for the two possible lookup values.
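    In the meantime, a minimal workaround sketch (my suggestion, not a Fluent NHibernate mapping; the find delegates are hypothetical stand-ins for the two lookups): keep the conditional in one method on the entity so callers never repeat it.

        // Sketch: centralize the two-way lookup; findByA/findByB are hypothetical.
        public virtual UserClass ResolveUserClass(
            Func<long, UserClass> findByA,
            Func<int, UserClass> findByB)
        {
            return ValueB.HasValue ? findByB(ValueB.Value) : findByA(ValueA);
        }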


  • What algorithms compute directions from point A to point B on a map?

    - by A. Rex
    How do map providers (such as Google or Yahoo! Maps) suggest directions? I mean, they probably have real-world data in some form, certainly including distances but also perhaps things like driving speeds, presence of sidewalks, train schedules, etc. But suppose the data were in a simpler format, say a very large directed graph with edge weights reflecting distances. I want to be able to quickly compute directions from one arbitrary point to another. Sometimes these points will be close together (within one city), while sometimes they will be far apart (cross-country). Graph algorithms like Dijkstra's algorithm will not work because the graph is enormous. Luckily, heuristic algorithms like A* will probably work. However, our data is very structured, and perhaps some kind of tiered approach might work? (For example, store precomputed directions between certain "key" points far apart, as well as some local directions. Then directions for two far-away points will involve local directions to a key point, global directions to another key point, and then local directions again.) What algorithms are actually used in practice?

    PS. This question was motivated by finding quirks in online mapping directions. Contrary to the triangle inequality, sometimes Google Maps thinks that X-Z takes longer and is farther than using an intermediate point, as in X-Y-Z. But maybe their walking directions optimize for another parameter, too?

    PPS. Here's another violation of the triangle inequality that suggests (to me) that they use some kind of tiered approach: X-Z versus X-Y-Z. The former seems to use the prominent Boulevard de Sebastopol even though it's slightly out of the way. (Edit: this example doesn't work anymore, but did at the time of the original post. The one above still works as of early November 2009.)
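    For a concrete baseline, here is a minimal A* sketch (illustrative C#, not any provider's actual code). Production routing engines layer precomputation on top of something like this, e.g. highway or contraction hierarchies and landmark-based lower bounds, which is exactly the tiered flavor described above:

        using System;
        using System.Collections.Generic;

        static class Router
        {
            // Minimal A*: graph[u] lists (neighbor, edge weight);
            // h(u) must never overestimate the true remaining distance.
            public static double ShortestPath(
                Dictionary<int, List<(int To, double Weight)>> graph,
                int start, int goal, Func<int, double> h)
            {
                var dist = new Dictionary<int, double> { [start] = 0.0 };
                var open = new SortedSet<(double F, int Node)> { (h(start), start) };

                while (open.Count > 0)
                {
                    var current = open.Min;
                    open.Remove(current);
                    int u = current.Node;
                    if (u == goal) return dist[u];
                    if (!graph.TryGetValue(u, out var edges)) continue;

                    foreach (var (v, w) in edges)
                    {
                        double candidate = dist[u] + w;
                        if (!dist.TryGetValue(v, out double known) || candidate < known)
                        {
                            if (dist.ContainsKey(v)) open.Remove((known + h(v), v)); // drop stale entry
                            dist[v] = candidate;
                            open.Add((candidate + h(v), v));
                        }
                    }
                }
                return double.PositiveInfinity; // unreachable
            }
        }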


  • How can I map to a field that is joined in via one of three possible tables

    - by Mongus Pong
    I have this object:

        public class Ledger
        {
            public virtual Person Client { get; set; }
            // ....
        }

    The Ledger table joins to the Person table via one of three possible tables: Bill, Receipt or Payment. So we have the following tables:

        Ledger    LedgerID PK
        Bill      BillID PK, LedgerID, ClientID
        Receipt   ReceiptID PK, LedgerID, ClientID
        Payment   PaymentID PK, LedgerID, ClientID

    If it was just the one table, I could map this as:

        Join("Bill", x =>
        {
            x.ManyToOne(ledger => ledger.Client, mapping => mapping.Column("ClientID"));
            x.Key(l => l.Column("LedgerID"));
        });

    This won't work for three tables. For a start, the Join performs an inner join. Since there will only ever be one of Bill, Receipt or Payment, an inner join on these tables will always return zero rows. Then it would need to know to do a Coalesce on the ClientID of each of these tables to know the ClientID to grab the data from. Is there a way to do this? Or am I asking too much of the mappings here?
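    One escape hatch, offered as an assumption rather than a known Fluent NHibernate answer: push the coalescing into the database with a view and map Ledger's Client against the view (table and column names taken from the question):

        -- Sketch: flatten the three optional joins into one row per ledger.
        CREATE VIEW LedgerClient AS
        SELECT l.LedgerID,
               COALESCE(b.ClientID, r.ClientID, p.ClientID) AS ClientID
        FROM Ledger l
        LEFT JOIN Bill    b ON b.LedgerID = l.LedgerID
        LEFT JOIN Receipt r ON r.LedgerID = l.LedgerID
        LEFT JOIN Payment p ON p.LedgerID = l.LedgerID;

    The Join mapping can then target the single view instead of three mutually exclusive tables.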


  • How to map string keys to unique integer IDs?

    - by Marek
    I have some data that comes regularly as a dump from a data source with a string natural key that is long (up to 60 characters) and not relevant to the end user. I am using this key in a url. This makes urls too long and user unfriendly. I would like to transform the string keys into integers with the following requirements: the source dataset will change over time, and the ID should be:

    - a non-negative integer
    - unique and constant even if the set of input keys changes
    - preferably reversible back to the key (not a strong requirement)

    The database is rebuilt from scratch every time, so I cannot remember the already assigned IDs, match the new data set to existing IDs, and generate sequential IDs for the added keys. There are currently around 30000 distinct keys and the set is constantly growing. How to implement a function that will map string keys to integer IDs? What I have thought about:

    1. Built-in string.GetHashCode: ID(key) = Math.Abs(key.GetHashCode())
       - is not guaranteed to be unique (not reversible)
    1.1. "Re-hashing" the built-in GetHashCode until a unique ID is generated, to prevent collisions
       - existing IDs may change if something colliding is added to the beginning of the input data set
    2. A perfect hashing function
       - I am not sure if this can generate constant IDs if the set of inputs changes (not reversible)
    3. Translate to base 36/64/??
       - does not shorten the long keys enough

    What are the other options?
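    A sketch of a stable-hash variant of option 1 (my suggestion, not from the question): derive the ID from a cryptographic hash of the key. Unlike string.GetHashCode, the result never changes across runs, machines, or additions to the data set. It is not reversible and uniqueness is only probabilistic, but with 63 bits the birthday-collision chance at 30,000 keys is around 5e-11 (a 31-bit ID would collide with roughly 20% probability), and a one-time duplicate check at build time covers the rest:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class KeyId
        {
            // Stable across processes and machines, unlike string.GetHashCode().
            public static long ToId(string key)
            {
                using (var sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(key));
                    // First 8 bytes, sign bit cleared: a non-negative 63-bit ID.
                    return BitConverter.ToInt64(hash, 0) & long.MaxValue;
                }
            }
        }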


  • How do I deal with different requests that map to the same response?

    - by daxim
    I'm designing a Web service. The request is idempotent, so I chose the GET method. The response is relatively expensive to calculate and not small, so I want to get caching (on the protocol level) right. (Don't worry about memoisation on my part, I have that already covered; my question here is actually also paying attention to the Web as a whole.)

    There's only one mandatory parameter and a number of optional parameters with default values if missing. For example, the following two map to the same representation of the response. (If this is a dumb way to go about the interface, propose something better.)

        GET /service?mandatory_parameter=some_data HTTP/1.1
        GET /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3 HTTP/1.1

    However, I imagine clients do not know this and would treat them as separate, and therefore waste cache storage. What should I do to avoid violating the golden rule of caching?

    - Make up a canonical form, document it (e.g. all parameters are required after all and need to be sorted in a specific order), and return a client error unless the required form is met?
    - Instead of an error, redirect permanently to the canonical form of the request?
    - Or is it enough to not mind what the request looks like, and just respond with the same ETag for the same responses?
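    A sketch of the canonicalization route (the second option; parameter names from the question, defaults illustrative): compute one canonical ordering with defaults filled in, and permanently redirect any request whose query string differs from it:

        using System.Collections.Generic;
        using System.Linq;

        static class Canonical
        {
            // Illustrative defaults matching the example request.
            static readonly SortedDictionary<string, string> Defaults =
                new SortedDictionary<string, string>
                {
                    ["optional_parameter"] = "default1",
                    ["another_optional_parameter"] = "default2",
                    ["yet_another_optional_parameter"] = "default3",
                };

            // Merge supplied values over the defaults; SortedDictionary fixes the order.
            public static string QueryFor(IDictionary<string, string> supplied)
            {
                var merged = new SortedDictionary<string, string>(Defaults);
                foreach (var kv in supplied) merged[kv.Key] = kv.Value;
                return string.Join(";", merged.Select(kv => kv.Key + "=" + kv.Value));
            }
        }

    A request whose raw query differs from QueryFor(parsed parameters) would get a 301 to /service?<canonical>; every cache along the way then sees exactly one URL per representation.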


  • How to map this class in NHibernate (not FluentNHibernate)?

    - by JMSA
    Suppose I have a database like this: This is set up to give role-wise menu permissions. Please note that the User table has no direct relationship with the Permission table. Then how should I map this class against the database tables?

        class User
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public string Username { get; set; }
            public string Password { get; set; }
            public bool? IsActive { get; set; }
            public IList<Role> RoleItems { get; set; }
            public IList<Permission> PermissionItems { get; set; }
            public IList<string> MenuItemKeys { get; set; }
        }

    This means: (1) every user has some Roles; (2) every user has some Permissions (depending on Roles); (3) every user has some permitted MenuItemKeys (according to Permissions). How should my User.hbm.xml look?
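    A hedged sketch of how User.hbm.xml could start, assuming a UserRole join table and conventional column names (none of which are shown in the question):

        <class name="User" table="User">
            <id name="ID" column="ID">
                <generator class="native"/>
            </id>
            <property name="Name"/>
            <property name="Username"/>
            <property name="Password"/>
            <property name="IsActive"/>
            <!-- assumed join table UserRole(UserID, RoleID) -->
            <bag name="RoleItems" table="UserRole" lazy="true">
                <key column="UserID"/>
                <many-to-many class="Role" column="RoleID"/>
            </bag>
            <!-- PermissionItems and MenuItemKeys are two joins away (User -> Role -> Permission),
                 so they are usually loaded with a query rather than mapped as collections. -->
        </class>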


  • An offscreen MKMapView behaves differently in 3.2, 4.0

    - by Duane Fields
    In 3.1 I've been using an "offscreen" MKMapView to create map images that I can rotate, crop and so forth before presenting them to the user. In 3.2 and 4.0 this technique no longer works quite right. Here's some code that illustrates the problem, followed by my theory.

        // create map view
        _mapView = [[MKMapView alloc] initWithFrame:CGRectMake(0, 0, MAP_FRAME_SIZE, MAP_FRAME_SIZE)];
        _mapView.zoomEnabled = NO;
        _mapView.scrollEnabled = NO;
        _mapView.delegate = self;
        _mapView.mapType = MKMapTypeSatellite;

        // zoom in to something enough to fill the screen
        MKCoordinateRegion region;
        CLLocationCoordinate2D center = {30.267222, -97.763889};
        region.center = center;
        MKCoordinateSpan span = {0.1, 0.1};
        region.span = span;
        _mapView.region = region;

        // set scrollview content size to fill the imageView
        _scrollView.contentSize = _imageView.frame.size;

        // force it to load
        #ifndef __IPHONE_3_2
        // in 3.1 we can render to an offscreen context to force a load
        UIGraphicsBeginImageContext(_mapView.frame.size);
        [_mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIGraphicsEndImageContext();
        #else
        // in 3.2 and above, the renderInContext trick doesn't work...
        // this at least causes the map to render, but it's clipped to what appears to be
        // the viewPort size, plus some padding
        [self.view addSubview:_mapView];
        #endif

    When the map is done loading, I snap a picture of it and stuff it in my scroll view:

        - (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView {
            NSLog(@"[MapBuilder] mapViewDidFinishLoadingMap");
            // render the map to a UIImage
            UIGraphicsBeginImageContext(mapView.bounds.size);
            // the first sublayer is just the map, the second is the google layer;
            // this sublayer structure might change of course
            [[[mapView.layer sublayers] objectAtIndex:0] renderInContext:UIGraphicsGetCurrentContext()];
            // we are done with the mapView at this point, we need its ram!
            _mapView.delegate = nil;
            [_mapView release];
            [_mapView removeFromSuperview];
            _mapView = nil;
            UIImage* mapImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
            UIGraphicsEndImageContext();
            _imageView.image = mapImage;
            [mapImage release], mapImage = nil;
        }

    The first problem is that in 3.1 rendering to a context would trigger the map to begin loading. This no longer works in 3.2, 4.0. The only thing I have found that triggers the load is to temporarily add the map to the view (i.e. make it visible). The problem being that the map only renders to the visible area of the screen, plus a little padding. The frame/bounds are fine, but it appears to "helpfully" optimize the loading, limiting the tiles to those visible on the screen or close to it. Any ideas how to force the map to load at full size? Anyone else have this issue?


  • View Maps and Get Directions in Google Chrome

    - by Asian Angel
    Every so often we all need to look at a map for reference purposes or to get directions. If you are looking for a great quick reference app then join us as we look at the Mini Google Maps extension for Google Chrome.

    Mini Google Maps in Action

    While this may look like a rather basic map extension, there is more to it than meets the eye at first glance. Here is the default view when you open Mini Google Maps for the first time. Things that we really liked about this extension were:

    - Three different aerial views available (Map, Satellite, & Terrain)
    - Three different viewing sizes available (and the extension remembers your chosen size)
    - The ability to get directions in combination with a map

    We decided to try each of the viewing sizes available… here you can see the “Medium Setting”. Notice that the scale stays the same but you get more territory included to view. Then the “Large Setting”… which we infinitely preferred to the others. Once again look at the amount of territory included by default… very nice. Switching over to the “Satellite View”… followed by the “Terrain View”.

    For our first example we decided to peek at Vancouver, British Columbia. After zooming out a little bit we had a very nice looking map. For the next test we asked for directions from Vancouver to Toronto. Both the directions and map turned out very well. And just for fun we looked up Paris, France with the “Satellite View”.

    Conclusion

    If you find yourself needing to view a map or get directions often, then the Mini Google Maps extension will be a very useful tool for you.

    Links

    Download the Mini Google Maps extension (Google Chrome Extensions)


  • Box Selection and Multi-Line Editing with VS 2010

    - by ScottGu
    This is the twenty-second in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. I’ve already covered some of the code editor improvements in the VS 2010 release.  In particular, I’ve blogged about the Code Intellisense Improvements, new Code Searching and Navigating Features, HTML, ASP.NET and JavaScript Snippet Support, and improved JavaScript Intellisense.  Today’s blog post covers a small, but nice, editor improvement with VS 2010 – the ability to use “Box Selection” when performing multi-line editing.  This can eliminate keystrokes and enables some slick editing scenarios. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Box Selection

    Box selection is a feature that has been in Visual Studio for a while (although not many people knew about it).  It allows you to select a rectangular region of text within the code editor by holding down the Alt key while selecting the text region with the mouse.  With VS 2008 you could then copy or delete the selected text. VS 2010 now enables several more capabilities with box selection, including:

    - Text Insertion: Typing with box selection now allows you to insert new text into every selected line
    - Paste/Replace: You can now paste the contents of one box selection into another and have the content flow correctly
    - Zero-Length Boxes: You can now make a vertical selection zero characters wide to create a multi-line insert point for new or copied text

    These capabilities can be very useful in a variety of scenarios.  Some example scenarios: change access modifiers (private->public), adding comments to multiple lines, setting fields, or grouping multiple statements together.

    Great 3 Minute Box-Selection Video Demo

    Brittany Behrens from the Visual Studio Editor Team has an excellent 3 minute video that shows off a few cool VS 2010 multi-line code editing scenarios with box selection. Watch it to learn a few ways you can use this new box selection capability to optimize your typing in VS 2010 even further.

    Hope this helps,

    Scott

    P.S. You can learn more about the VS Editor by following the Visual Studio Team Blog or by following @VSEditor on Twitter.


  • ASP.NET MVC File Upload Error - "The input is not a valid Base-64 string"

    - by Justin
    Hey all, I'm trying to add a file upload control to my ASP.NET MVC 2 form, but after I select a jpg and click Save, it gives the following error:

        The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or a non-white space character among the padding characters.

    Here's the view:

        <% using (Html.BeginForm("Save", "Developers", FormMethod.Post,
                  new { enctype = "multipart/form-data" })) { %>
            <%: Html.ValidationSummary(true) %>
            <fieldset>
                <legend>Fields</legend>
                <div class="editor-label">Login Name</div>
                <div class="editor-field">
                    <%: Html.TextBoxFor(model => model.LoginName) %>
                    <%: Html.ValidationMessageFor(model => model.LoginName) %>
                </div>
                <div class="editor-label">Password</div>
                <div class="editor-field">
                    <%: Html.Password("Password") %>
                    <%: Html.ValidationMessageFor(model => model.Password) %>
                </div>
                <div class="editor-label">First Name</div>
                <div class="editor-field">
                    <%: Html.TextBoxFor(model => model.FirstName) %>
                    <%: Html.ValidationMessageFor(model => model.FirstName) %>
                </div>
                <div class="editor-label">Last Name</div>
                <div class="editor-field">
                    <%: Html.TextBoxFor(model => model.LastName) %>
                    <%: Html.ValidationMessageFor(model => model.LastName) %>
                </div>
                <div class="editor-label">Photo</div>
                <div class="editor-field">
                    <input id="Photo" name="Photo" type="file" />
                </div>
                <p>
                    <%: Html.Hidden("DeveloperID") %>
                    <%: Html.Hidden("CreateDate") %>
                    <input type="submit" value="Save" />
                </p>
            </fieldset>
        <% } %>

    And the controller:

        //POST: /Secure/Developers/Save/
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Save(Developer developer)
        {
            //get profile photo.
            var upload = Request.Files["Photo"];
            if (upload.ContentLength > 0)
            {
                string savedFileName = Path.Combine(
                    ConfigurationManager.AppSettings["FileUploadDirectory"],
                    "Developer_" + developer.FirstName + "_" + developer.LastName + ".jpg");
                upload.SaveAs(savedFileName);
            }
            developer.UpdateDate = DateTime.Now;
            if (developer.DeveloperID == 0)
            {   //inserting new developer.
                DataContext.DeveloperData.Insert(developer);
            }
            else
            {   //attaching existing developer.
                DataContext.DeveloperData.Attach(developer);
            }
            //save changes.
            DataContext.SaveChanges();
            //redirect to developer list.
            return RedirectToAction("Index");
        }

    Thanks, Justin
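    A hedged guess at the cause, based on the error text: the default model binder is trying to bind the posted Photo file field onto a binary Photo property of Developer (assumed to exist) as Base-64 text. Renaming the file input so it doesn't collide with the entity property, and receiving it as an HttpPostedFileBase parameter, avoids that:

        // Sketch: in the view, rename the input so the binder skips the entity property:
        //   <input id="PhotoUpload" name="PhotoUpload" type="file" />
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Save(Developer developer, HttpPostedFileBase photoUpload)
        {
            if (photoUpload != null && photoUpload.ContentLength > 0)
            {
                string savedFileName = Path.Combine(
                    ConfigurationManager.AppSettings["FileUploadDirectory"],
                    "Developer_" + developer.FirstName + "_" + developer.LastName + ".jpg");
                photoUpload.SaveAs(savedFileName);
            }
            // ... rest of the original Save logic ...
            return RedirectToAction("Index");
        }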


  • MKMapView memory usage grows out of control with setRegion: calls

    - by Kurt
    Hi, I have a single MKMapView instance that I have programmatically added to a UIView. As part of the UI, the user can cycle through a list of addresses, and the map view is updated to show the correct map for each address as the user goes through them. I create the map view once, and simply change what it displays with setRegion:animated:.

    The problem is that each time the map is changed to show a new address, the memory usage of my program increases by 200K-500K (as reported by Memory Monitor in Instruments). According to Object Allocations, it appears that a lot of 1.0K mallocs are happening each time, and the Extended Detail pane for these 1.0K allocations shows that the Responsible Caller is convert_image_data and that this is the result of [MKMapTileView drawLayer:inContext:]. So, it seems likely to me that the memory usage is due to MKMapView not freeing memory it uses to redraw the map each time.

    In fact, when I don't display the map at all (by not even adding it as a subview of my main UIView) but still cycle through the addresses (which changes various UILabels and other displayed info), the memory usage for the app does NOT increase. If I add the map view but never update it with setRegion:, the memory also does NOT increase when changing to a new address.

    One more bit of info: if I go to a new address (and therefore ask the map to display the new address), the memory jumps as described above. However, if I go back to an address that was already displayed, the memory does not jump when the map redraws with the old address. Also, this happens on iPad (real device) with 3.2 and on iPhone (again, real device) with 3.1.2.

    Here's how I initialize the MKMapView (I only do this once):

        CGRect mapFrame;
        mapFrame.origin.y = 460; // yes, magic numbers. just for testing.
        mapFrame.origin.x = 0;
        mapFrame.size.height = 500;
        mapFrame.size.width = 768;
        mapView = [[MKMapView alloc] initWithFrame:mapFrame];
        mapView.delegate = self;
        [self.view insertSubview:mapView atIndex:0];

    And in response to the user selecting an address, I set the map like so:

        MKCoordinateRegion region;
        MKCoordinateSpan span;
        span.latitudeDelta = kStreetMapSpan;  // 0.003
        span.longitudeDelta = kStreetMapSpan; // 0.003
        region.center = address.coords; // coords is a CLLocationCoordinate2D
        region.span = span;
        [mapView setRegion:region animated:NO];

    Any thoughts? I've scoured the net but haven't seen mention of this problem, and I've reached the limits of my Instruments knowledge. Thanks for any ideas.


  • How to Add a Business Card, or vCard (.vcf) File, to a Signature in Outlook 2013 Without Displaying an Image

    - by Lori Kaufman
    Whenever you add a Business Card to your signature in Outlook 2013, the Signature Editor automatically generates a picture of it and includes that in the signature, as well as attaching the .vcf file. However, there is a way to leave out the image. To remove the business card image from your signature but maintain the attached .vcf file, you must make a change to the registry. NOTE: Before making changes to the registry, be sure you back it up. We also recommend creating a restore point you can use to restore your system if something goes wrong.

    Before changing the registry, we must add the Business Card to the signature and save it, so a .vcf file of the contact is created in the Signatures folder. To do this, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section.

    For this example, we will create a new signature to include the .vcf file for your business card without the image. Click New below the Select signature to edit box. Enter a name for the new signature, such as Business Card, and click OK. Enter text in the signature editor and format it the way you want, or insert a different image or logo. Click Business Card above the signature editor. Select the contact you want to include in the signature on the Insert Business Card dialog box and click OK. Click Save below the Select signature to edit box. This creates a .vcf file for the business card in the Signatures folder. Click on the business card image in the signature and delete it. You should only see your formatted text or other image or logo in the signature editor. Click OK to save your new signature and close the signature editor. Close Outlook as well.

    Now, we will open the Registry Editor to add a key and value to indicate where to find the .vcf file to include in the signature we just created. If you're running Windows 8, press the Windows Key + X to open the command menu and select Run. You can also press the Windows Key + R to directly access the Run dialog box. NOTE: In Windows 7, select Run from the Start menu. In the Open edit box on the Run dialog box, enter "regedit" (without the quotes) and click OK. If the User Account Control dialog box displays, click Yes to continue. NOTE: You may not see this dialog box, depending on your User Account Control settings.

    Navigate to the following registry key:

        HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Signatures

    Make sure the Signatures key is selected. Select New | String Value from the Edit menu. NOTE: You can also right-click in the empty space in the right pane and select New | String Value from the popup menu. Rename the new value to the name of the signature you created. For this example, we named the value Business Card. Double-click on the new value. In the Value data edit box on the Edit String dialog box, enter the value indicating the location of the .vcf file to include in the signature. The format is:

        <signature name>_files\<name of .vcf file>

    For our example, the Value data should be as follows:

        Business Card_files\Lori Kaufman

    The name of the .vcf file is generally the contact name.

    If you're not sure what to enter for the Value data for the new key value, you can check the location and name of the .vcf file. To do this, open the Outlook Options dialog box and access the Mail screen as instructed earlier in this article. However, press and hold the Ctrl key while clicking the Signatures button. The Signatures folder opens in Windows Explorer. There should be a folder in the Signatures folder named after the signature you created, with "_files" added to the end. For our example, the folder is named Business Card_files. Open this folder. In this folder, you should see a .vcf file with the name of your contact as the name of the file. For our contact, the file is named Lori Kaufman.vcf. The path to the .vcf file should be the name of the folder for the signature (Business Card_files), followed by a "\", and the name of the .vcf file without the extension (Lori Kaufman). Putting these names together, you get the path that should be entered as the Value data in the new key you created in the Registry Editor:

        Business Card_files\Lori Kaufman

    Once you've entered the Value data for the new key, select Exit from the File menu to close the Registry Editor.

    Open Outlook and click New Email on the Home tab. Click Signature in the Include section of the New Mail Message tab and select your new signature from the drop-down menu. NOTE: If you made the new signature the default signature, it will be automatically inserted into the new mail message. The .vcf file is attached to the email message, but the business card image is not included. All you will see in the body of the email message is the text or other image you included in the signature. You can also choose to include an image of your business card in a signature with no .vcf file attached.
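    If you prefer a file you can import instead of editing by hand, the same key and value can be expressed as a .reg file (a sketch using the article's example names; note the doubled backslash required by .reg syntax):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Signatures]
        "Business Card"="Business Card_files\\Lori Kaufman"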


  • Download PowerCommands for VS 2008

    - by Editor
    PowerCommands for Visual Studio 2008 is now available for free download, along with source code and a readme document. PowerCommands is a set of useful extensions for Visual Studio 2008, adding additional functionality to various areas of the IDE. The source code, which requires the VS SDK for VS 2008 [...]


  • Download PowerCommands for VS 2008

    - by Editor
    PowerCommands is a set of useful extensions for Visual Studio 2008, adding additional functionality to various areas of the IDE. The source code is included and requires the VS SDK for VS 2008, to allow modification of functionality or to serve as a reference for creating additional custom PowerCommand extensions. Visit the [...]


  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above.

    The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area.
    - All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build!

    The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

    My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.


  • Implementing EAV pattern with Hibernate for User -> Settings relationship

    - by Trevor
    I'm trying to set up a simple EAV pattern in my web app using Java/Spring MVC and Hibernate. I can't seem to figure out the magic behind the Hibernate XML setup for this scenario. My database table "SETUP" has three columns:

    - user_id (FK)
    - setup_item
    - setup_value

    The database composite key is made up of user_id | setup_item. Here's the Setup.java class:

        public class Setup implements CommonFormElements, Serializable
        {
            private Map data = new HashMap();
            private String saveAction;
            private Integer speciesNamingList;
            private User user;

            Logger log = LoggerFactory.getLogger(Setup.class);

            public String getSaveAction() { return saveAction; }
            public void setSaveAction(String action) { this.saveAction = action; }

            public User getUser() { return user; }
            public void setUser(User user) { this.user = user; }

            public Integer getSpeciesNamingList() { return speciesNamingList; }
            public void setSpeciesNamingList(Integer speciesNamingList) { this.speciesNamingList = speciesNamingList; }

            public Map getData() { return data; }
            public void setData(Map data) { this.data = data; }
        }

    My problem with the Hibernate setup is that I can't seem to figure out how to express that a foreign key plus the key of a map together construct the composite key of the table... this is due to a lack of experience using Hibernate. Here's my initial attempt at getting this to work:

        <composite-id>
            <key-many-to-one foreign-key="id" name="user" column="user_id" class="Business.User">
                <meta attribute="use-in-equals">true</meta>
            </key-many-to-one>
        </composite-id>

        <map lazy="false" name="data" table="setup">
            <key column="user_id" property-ref="user"/>
            <composite-map-key class="Command.Setup">
                <key-property name="data" column="setup_item" type="string"/>
            </composite-map-key>
            <element column="setup_value" not-null="true" type="string"/>
        </map>

    Any insight into how to properly map this common scenario would be most appreciated!
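    For comparison, a hedged sketch of the mapping I would try first, assuming data is meant to be a Map<String, String> keyed by setup_item: a plain <map-key> (instead of <composite-map-key>) lets Hibernate treat user_id plus the map key as the row identity of the collection table. The "settings" property name, placed on the User mapping, is an assumption:

        <!-- Sketch, placed in User.hbm.xml: each User carries its SETUP rows as a map. -->
        <map name="settings" table="SETUP" lazy="false">
            <key column="user_id"/>
            <map-key column="setup_item" type="string"/>
            <element column="setup_value" type="string" not-null="true"/>
        </map>

    With this shape, no separate composite-id class is needed at all; the EAV rows exist only as entries in the owning user's map.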


  • Connmand Equivalent

    - by CurtisS
    What's the connmand equivalent on Fedora 15? I'm trying to connect to the Internet with Fedora 15, but I'm having problems. It can't see my network connection, and when I try to run nm-connection-editor I get:

        (nm-connection-editor:9816): WARNING **: get_all_cb: couldn't retrieve system settings properties ...
        (nm-connection-editor:9816): WARNING **: fetch_connections_done: error fetching connections: (32) ...
        (nm-connection-editor:9816): GVFS-RemoteVolumeMonitor-WARNING **: cannot connect to the session bus ...
        (nm-connection-editor:9816): GVFS-RemoteVolumeMonitor-WARNING **: cannot connect to the session bus ...
        g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.
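    Not from the question, but a plausible first check given those errors (Fedora 15 introduced systemd): make sure NetworkManager is actually running and enabled, and that connman's daemon, if installed, isn't conflicting with it:

        # Is NetworkManager running?
        systemctl status NetworkManager.service

        # Start it now and enable it at boot.
        sudo systemctl start NetworkManager.service
        sudo systemctl enable NetworkManager.service

        # If connman is installed, its daemon may conflict with NetworkManager.
        systemctl status connman.service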


  • Download NServiceBus Framework

    - by Editor
    NServiceBus is a highly extensible, publish/subscribe, workflow-integrated communications framework for .NET. NServiceBus is a lightweight, higher-level API built on top of MSMQ and based on one-way messaging. For now the implementation is based on MSMQ, though other implementations are being considered. Download NServiceBus.


  • Download Mp3Sharp to decode MP3s

    - by Editor
    Mp3Sharp decodes MP3 files natively in .NET using a managed application written in C#. Mp3Sharp is a C# port of JavaLayer, an MP3 decoder for Java written by the JavaZoom team. Tested with a variety of MP3s. Download Mp3Sharp.


  • Download NPlot .NET charting library

    - by Editor
    NPlot is a charting library for .NET, and it is available as freeware. NPlot features a useful and flexible API. It also includes controls for ASP.NET and Windows Forms, as well as a class for creating Bitmaps. Learn from a few examples. Download NPlot.


  • Download ASP.NET MVC Source Code

    - by Editor
    From Scott Guthrie’s blog: Last month I blogged about our ASP.NET MVC Roadmap. Two weeks ago we shipped the ASP.NET Preview 2 Release. Phil Haack from the ASP.NET team published a good blog post about the release here. Scott Hanselman has created a bunch of great ASP.NET MVC tutorial videos [...]


  • Segmentation fault 11 in MacOS X- C++ [migrated]

    - by Marcos Cesar Vargas Magana
    Hello all. I have a "segmentation fault 11" error when I run the following code. The code actually compiles, but I get the error at run time.

        //** Terror.h **
        #include <iostream>
        #include <string>
        #include <map>

        using std::map;
        using std::pair;
        using std::string;

        template<typename Tsize>
        class Terror
        {
        public:
            // Inserts a message in the map.
            static Tsize insertMessage(const string& message)
            {
                mErrorMessages.insert(pair<Tsize, string>(mErrorMessages.size() + 1, message));
                return mErrorMessages.size();
            }
        private:
            static map<Tsize, string> mErrorMessages;
        };

        template<typename Tsize>
        map<Tsize, string> Terror<Tsize>::mErrorMessages;

        //** error.h **
        #include <iostream>
        #include "Terror.h"

        typedef unsigned short errorType;
        typedef Terror<errorType> error;

        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

        //** main.cpp **
        #include <iostream>
        #include "error.h"

        using namespace std;

        int main()
        {
            try
            {
                throw error(memoryAllocationError);
            }
            catch(error& err)
            {
            }
        }

    I have done some debugging of the code, and the error happens when the message is being inserted into the static map member. An observation: if I put the line

        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

    inside the "main()" function instead of at global scope, then everything works fine. But I would like to extend the error messages at global scope, not at local scope. The map is defined static so that all instances of "error" share the same error codes and messages. Do you know how I can get this or something similar? Thank you very much.
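    A hedged diagnosis: memoryAllocationError is initialized during static initialization, and C++ gives no ordering guarantee across translation units, so insertMessage can run before the static mErrorMessages map has been constructed, which matches the observed crash (and why moving the call into main() works). The usual fix is the construct-on-first-use idiom, replacing the static data member with a function-local static:

        #include <map>
        #include <string>

        template<typename Tsize>
        class Terror
        {
        public:
            static Tsize insertMessage(const std::string& message)
            {
                // messages() is constructed the first time it is used,
                // regardless of which translation unit calls it first.
                std::map<Tsize, std::string>& m = messages();
                m.insert(std::make_pair(static_cast<Tsize>(m.size() + 1), message));
                return static_cast<Tsize>(m.size());
            }
        private:
            static std::map<Tsize, std::string>& messages()
            {
                static std::map<Tsize, std::string> instance; // construct on first use
                return instance;
            }
        };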


  • EntitySpaces 2008 Alpha Released

    - by Editor
    From the EntitySpaces Team: EntitySpaces 2008 Persistence Layer and Business Objects for Microsoft .NET We are very excited to offer this Alpha release for those who want to get a head start with EntitySpaces 2008 (ES2008). This Alpha release supports only C# class generation from within CodeSmith, and only supports Microsoft SqlServer. A subsequent [...]

