Search Results

  • How to take a layer's snapshot with mask layer?

    - by OpenThread
    I added a mask to a UIView's layer: CGImageRef maskImageRef = [UIImage imageNamed:@"Icon.png"].CGImage; CALayer *maskLayer = [CALayer layer]; maskLayer.contents = (__bridge id)maskImageRef; self.layer.mask = maskLayer; Then I use this code to take a snapshot of the view: [self.layer renderInContext:mainViewContentContext]; but the mask wasn't drawn. How can I render self.layer together with its mask? Thanks!
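
    A likely explanation is that renderInContext: simply ignores the layer's mask. One possible workaround, sketched below as a rough illustration (the helper name and the assumption that the mask is available as a UIImage are mine, not from the question), is to clip the bitmap context with the mask image by hand before rendering the layer tree:

        - (UIImage *)maskedSnapshotOfView:(UIView *)view maskImage:(UIImage *)maskImage
        {
            UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
            CGContextRef context = UIGraphicsGetCurrentContext();

            // CGContextClipToMask expects Core Graphics (bottom-left) coordinates,
            // so flip the context, install the clip, then flip back before rendering.
            CGContextTranslateCTM(context, 0, view.bounds.size.height);
            CGContextScaleCTM(context, 1.0, -1.0);
            CGContextClipToMask(context, view.bounds, maskImage.CGImage);
            CGContextScaleCTM(context, 1.0, -1.0);
            CGContextTranslateCTM(context, 0, -view.bounds.size.height);

            [view.layer renderInContext:context];   // the clip now stands in for the mask

            UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return snapshot;
        }

    Depending on how Icon.png is authored, the clip may follow its alpha channel rather than its luminance, so the snapshot can differ slightly from what CALayer masking shows on screen.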

  • How do you get and set a class property across multiple functions in Objective-C?

    - by editor
    Following up on this question about sharing objects between classes, I now need to figure out how to share the objects across various functions in a class. First, the setup: In my App Delegate I load menu information from JSON into a NSMutableDictionary and message that through to a view controller using a function called initWithData. I need to use this dictionary to populate a new Table View, which has methods like numberOfRowsInSection and cellForRowAtIndexPath. I'd like to use the dictionary count to return numberOfRowsInSection and info in the dictionary to populate each cell. Unfortunately, my code never gets beyond the init stage and the dictionary is empty so numberOfRowsInSection always returns zero. I thought I could create a class property, synthesize it and then set it. But it doesn't seem to want to retain the property's value. What am I doing wrong here? In the header .h: @interface FirstViewController:UIViewController <UITableViewDataSource, UITableViewDelegate, UITabBarControllerDelegate> { NSMutableDictionary *sectorDictionary; NSInteger sectorCount; } @property (nonatomic, retain) NSMutableDictionary *sectorDictionary; - (id)initWithData:(NSMutableDictionary*)data; @end in the implementation .m: - (id) testFunction:(NSMutableDictionary*)dictionary { NSLog(@"Count #1: %d", [dictionary count]); return nil; } - (id)initWithData:(NSMutableDictionary *)data { if (!(self=[super init])) { return nil; } [self testFunction:data]; // this is where I'd like to set a retained property self.sectorDictionary = data; return nil; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { NSLog(@"Count #2: %d", [self.sectorDictionary count]); return [self.sectorDictionary count]; } Output from NSLog: 2010-05-04 23:00:06.255 JSONApp[15890:207] Count #1: 9 2010-05-04 23:00:06.259 JSONApp[15890:207] Count #2: 0
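
    One detail worth checking (offered as a guess, not a confirmed diagnosis): initWithData: assigns the property but then falls through to return nil, so the caller never receives the initialized controller, and the instance that later answers numberOfRowsInSection: may be a different, empty one. A minimal corrected sketch of the initializer, keeping the interface from the question:

        - (id)initWithData:(NSMutableDictionary *)data
        {
            self = [super init];
            if (self) {
                // The retain property keeps our own reference so the data
                // outlives the caller's dictionary.
                self.sectorDictionary = data;
            }
            return self;   // returning nil here would discard the configured instance
        }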

  • Subclassing Cocoa means no subclass ivars in some cases?

    - by Michael
    The question stems from the self = [super init] idiom. If I subclass NSSomething and, in my init method, self = [super init] returns an object of a different class, does that mean I can't have my own ivars in the subclass, because self would then point to an object of that other class? I'd appreciate some examples if my assumption is wrong.
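
    For what it's worth, the case where [super init] hands back an object of a different class is essentially the class-cluster situation (NSString, NSArray, and friends), which is why subclassing those directly is discouraged. For an ordinary NSObject subclass, the standard idiom below keeps subclass ivars perfectly usable; this is a minimal illustration, not code from the question:

        @interface MySubclass : NSObject {
            NSInteger myIvar;   // subclass ivar
        }
        @end

        @implementation MySubclass
        - (id)init
        {
            self = [super init];   // may return a different *instance*, but still a MySubclass
            if (self) {
                myIvar = 42;       // safe: the ivar exists on whatever [super init] returned
            }
            return self;
        }
        @end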

  • python's `with` statement

    - by Prestel Nué
    It seems I'm misunderstanding something about Python's with statement. Consider this class: class test(object): def __enter__(self): pass def __exit__(self, *ignored): pass Now, when using it with with, as in with test() as michael: print repr(michael) I would expect output like <test instance at ...>, but I get None. Is something wrong here? Any suggestions would help. (I am using Python 2.6.6.) EDIT: Thanks to ephement for pointing me to the documentation. The __enter__ method should read def __enter__(self): return self
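
    A compact corrected version (Python 2 syntax, matching the question): whatever __enter__ returns is what the as clause binds, so returning self yields the instance instead of None.

        class test(object):
            def __enter__(self):
                return self          # without this, "as michael" binds None

            def __exit__(self, exc_type, exc_value, traceback):
                return False         # don't swallow exceptions raised in the block

        with test() as michael:
            print repr(michael)      # -> <__main__.test object at 0x...>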

  • Getting a `free()` error when deallocating with `delete` in the backtrace

    - by wonko
    I got the following error from gdb:

        *** glibc detected *** /.root0/autohome/u132/hsreekum/ipopt/ipopt/debug/Ipopt/examples/ex3/ex3: free(): invalid next size (fast): 0x0000000120052b60 ***

    Here's the backtrace:

        #0  0x000000555626b264 in raise () from /lib/libc.so.6
        #1  0x000000555626cc6c in abort () from /lib/libc.so.6
        #2  0x00000055562a7b9c in __libc_message () from /lib/libc.so.6
        #3  0x00000055562aeabc in malloc_printerr () from /lib/libc.so.6
        #4  0x00000055562b036c in free () from /lib/libc.so.6
        #5  0x000000555561ddd0 in Ipopt::TNLPAdapter::~TNLPAdapter () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1
        #6  0x00000055556a9910 in Ipopt::GradientScaling::~GradientScaling () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1
        #7  0x00000055557241b8 in Ipopt::OrigIpoptNLP::~OrigIpoptNLP () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1
        #8  0x00000055556ae7f0 in Ipopt::IpoptAlgorithm::~IpoptAlgorithm () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1
        #9  0x0000005555602278 in Ipopt::IpoptApplication::~IpoptApplication () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1
        #10 0x0000005555614428 in FreeIpoptProblem () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1
        #11 0x0000000120001610 in main () at ex3.c:169

    And here's the code for Ipopt::TNLPAdapter::~TNLPAdapter():

        TNLPAdapter::~TNLPAdapter()
        {
          delete [] full_x_;
          delete [] full_lambda_;
          delete [] full_g_;
          delete [] jac_g_;
          delete [] c_rhs_;
          delete [] jac_idx_map_;
          delete [] h_idx_map_;
          delete [] x_fixed_map_;
          delete [] findiff_jac_ia_;
          delete [] findiff_jac_ja_;
          delete [] findiff_jac_postriplet_;
          delete [] findiff_x_l_;
          delete [] findiff_x_u_;
        }

    My question is: why does free() throw an error when ~TNLPAdapter() uses delete[]? Also, I would like to step through ~TNLPAdapter() so I can see which deallocation causes the error. I believe the error occurs in the external library (IPOPT), but I have compiled it with debug flags on; is this sufficient?
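
    A point that may help frame the question: under glibc, delete[] ultimately calls free(), and the "invalid next size" check fires when free() notices that the allocator's bookkeeping for a neighbouring chunk has already been trampled. So the abort usually points at the victim allocation, not at the code that corrupted it (often an earlier out-of-bounds write). The toy program below, purely illustrative and unrelated to IPOPT, tends to reproduce the same message:

        #include <cstring>

        int main()
        {
            double* buf = new double[4];
            // Overflow by one element: clobbers the heap metadata of the next chunk.
            std::memset(buf, 0, 5 * sizeof(double));
            delete[] buf;   // glibc typically aborts here: "free(): invalid next size"
            return 0;
        }

    Debug symbols in the library should be enough to set a gdb breakpoint on the destructor and step through each delete[], but tools such as valgrind or glibc's MALLOC_CHECK_ environment variable are usually quicker at pinning down the write that actually corrupted the heap.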

  • How to make sure an action completes before you continue

    - by HurkNburkS
    I am trying to dismiss a UIView that is handled in one method by calling that method from another method. The UIView does close, but only after all of the work in the current method has finished. Is there a way to force the first action (closing the UIView) to happen first and then continue with the rest of the method? This is what my method looks like:

        - (void)selectDeselectAllPressed:(UIButton *)button
        {
            int id = button.tag;
            [SVProgressHUD showWithStatus:@"Updating" maskType:SVProgressHUDMaskTypeGradient];
            [self displaySelected]; // removes the current view so the HUD will not be behind it
            if (id == 1) {
                [self selectAllD];
            } else if (id == 2) {
                [self deselectAllD];
            } else if (id == 3) {
                [self selectAllI];
            } else if (id == 4) {
                [self deselectAllI];
            }
        }

    This method is called when a button is pressed, and I would like displaySelected to do what it needs to do before any of the other methods are called. When I debug it, displaySelected is called, the thread walks through it and continues to the if statement, and only after the method chosen in the if statement has finished do the changes from displaySelected actually appear. Any help would be greatly appreciated.
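
    The usual explanation is that UIKit only redraws once the current run-loop turn ends, so everything queued in this method runs before the view removal ever becomes visible. One hedged way around it (assuming the selectAll…/deselectAll… calls can safely run a moment later) is to defer the heavy work to a later turn of the main queue:

        - (void)selectDeselectAllPressed:(UIButton *)button
        {
            NSInteger tag = button.tag;
            [SVProgressHUD showWithStatus:@"Updating" maskType:SVProgressHUDMaskTypeGradient];
            [self displaySelected];   // its visual effect can now render before the work below

            dispatch_async(dispatch_get_main_queue(), ^{
                if (tag == 1)      { [self selectAllD]; }
                else if (tag == 2) { [self deselectAllD]; }
                else if (tag == 3) { [self selectAllI]; }
                else if (tag == 4) { [self deselectAllI]; }
            });
        }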

  • Timed selector never performed

    - by sudo rm -rf
    I've added an observer for my method: [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(closeViewAfterUpdating) name:@"labelUpdatedShouldReturn" object:nil]; Then my relevant methods: -(void)closeViewAfterUpdating; { NSLog(@"Part 1 called"); [self performSelector:@selector(closeViewAfterUpdating2) withObject:nil afterDelay:2.0]; } -(void)closeViewAfterUpdating2; { NSLog(@"Part 2 called"); [self dismissModalViewControllerAnimated:YES]; } The only reason why I've split this method into two parts is so that I can have a delay before the method is fired. The problem is, the second method is never called. My NSLog output shows Part 1 called, but it never fires part 2. Any ideas? EDIT: I'm calling the notification from a background thread, does that make a difference by any chance? Here's how I'm creating my background thread: [NSThread detachNewThreadSelector:@selector(getWeather) toTarget:self withObject:nil]; and in getWeather I have: [[NSNotificationCenter defaultCenter] postNotificationName:@"updateZipLabel" object:textfield.text]; Also, calling: [self performSelector:@selector(closeViewAfterUpdating2) withObject:nil]; does work. EDITx2: I fixed it. Just needed to post the notification in my main thread and it worked just fine.
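
    The background-thread detail is almost certainly the key: performSelector:withObject:afterDelay: schedules a timer on the current thread's run loop, and a thread created with detachNewThreadSelector: normally never runs its run loop, so the delayed selector never fires. The fix the poster found (posting the notification from the main thread) works for that reason; an equivalent hedged sketch is to hop to the main queue before scheduling the delay:

        - (void)closeViewAfterUpdating
        {
            NSLog(@"Part 1 called");
            // Schedule the delayed selector on the main thread, whose run loop is running.
            dispatch_async(dispatch_get_main_queue(), ^{
                [self performSelector:@selector(closeViewAfterUpdating2)
                           withObject:nil
                           afterDelay:2.0];
            });
        }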

  • How do I filter values in a Django form using ModelForm?

    - by malandro95
    I am trying to use the ModelForm to add my data. It is working well, except that the ForeignKey dropdown list is showing all values and I only want it to display the values that are pertinent to the logged-in user. Here is my model for ExcludedDate, the record I want to add: class ExcludedDate(models.Model): date = models.DateTimeField() reason = models.CharField(max_length=50) user = models.ForeignKey(User) category = models.ForeignKey(Category) recurring = models.ForeignKey(RecurringExclusion) def __unicode__(self): return self.reason Here is the model for the category, which is the table containing the relationship that I'd like to limit by user: class Category(models.Model): name = models.CharField(max_length=50) user = models.ForeignKey(User, unique=False) def __unicode__(self): return self.name And finally, the form code: class ExcludedDateForm(ModelForm): class Meta: model = models.ExcludedDate exclude = ('user', 'recurring',) How do I get the form to display only the subset of categories where category.user equals the logged-in user?
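
    A common way to do this (sketched below with the field and model names from the question; passing the user in from the view is an assumption on my part) is to give the form an __init__ that receives the logged-in user and narrows the category field's queryset:

        class ExcludedDateForm(ModelForm):
            class Meta:
                model = models.ExcludedDate
                exclude = ('user', 'recurring',)

            def __init__(self, user, *args, **kwargs):
                super(ExcludedDateForm, self).__init__(*args, **kwargs)
                # Only offer categories owned by the logged-in user.
                self.fields['category'].queryset = Category.objects.filter(user=user)

        # In the view (hypothetical usage):
        # form = ExcludedDateForm(request.user, request.POST or None)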

  • Why does gcc warn about incompatible struct assignment with a `self = [super initDesignatedInit];' c

    - by gavinbeatty
    I have the following base/derived class setup in Objective-C: @interface ASCIICodeBase : NSObject { @protected char code_[4]; } - (Base *)initWithASCIICode:(const char *)code; @end @implementation ASCIICodeBase - (ASCIICodeBase *)initWithCode:(const char *)code len:(size_t)len { if (len == 0 || len > 3) { return nil; } if (self = [super init]) { memset(code_, 0, 4); strncpy(code_, code, 3); } return self; } @end @interface CountryCode : ASCIICodeBase - (CountryCode *)initWithCode:(const char *)code; @end @implementation CountryCode - (CountryCode *)initWithCode:(const char *)code { size_t len = strlen(code); if (len != 2) { return nil; } self = [super initWithCode:code len:len]; // here return self; } @end On the line marked "here", I get the following gcc warning: warning: incompatible Objective-C types assigning 'struct ASCIICodeBase *', expected 'struct CurrencyCode *' Is there something wrong with this code or should I have the ASCIICodeBase return id? Or maybe use a cast on the "here" line?
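
    Not a definitive answer, but the conventional Cocoa pattern is to type initializers as returning id (today one would use instancetype), so that assigning the superclass's return value to self in a subclass doesn't trip this warning. A sketch of how the declarations might look, keeping everything else from the question:

        @interface ASCIICodeBase : NSObject {
        @protected
            char code_[4];
        }
        - (id)initWithCode:(const char *)code len:(size_t)len;
        @end

        @interface CountryCode : ASCIICodeBase
        - (id)initWithCode:(const char *)code;
        @end

        @implementation CountryCode
        - (id)initWithCode:(const char *)code
        {
            if (strlen(code) != 2) {
                return nil;
            }
            self = [super initWithCode:code len:2];   // both sides are id: no warning
            return self;
        }
        @end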

  • How does Ruby's Array.| compare elements for equality?

    - by Max Howell
    Here's some example code: class Obj attr :c, true def == that p '==' that.c == self.c end def <=> that p '<=>' that.c <=> self.c end def equal? that p 'equal?' that.c.equal? self.c end def eql? that p 'eql?' that.c.eql? self.c end end a = Obj.new b = Obj.new a.c = 1 b.c = 1 p [a] | [b] It prints 2 objects but it should print 1 object. None of the comparison methods get called. How is Array.| comparing for equality?
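
    For context (not stated in the question itself): Array#| de-duplicates through an internal hash table, so it consults eql? and hash rather than ==, <=>, or equal? — and if hash isn't overridden, the two objects hash differently, so eql? is never even reached. A hedged sketch of the pair of overrides that makes the union collapse the duplicates:

        class Obj
          attr_accessor :c

          def eql?(other)
            other.is_a?(Obj) && other.c.eql?(c)
          end

          def hash
            c.hash
          end
        end

        a = Obj.new
        b = Obj.new
        a.c = 1
        b.c = 1
        p [a] | [b]   # now prints a single object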

  • Is there a way to update the height of a single UITableViewCell, without recalculating the height for every cell?

    - by Chris Vasselli
    I have a UITableView with a few different sections. One section contains cells that will resize as a user types text into a UITextView. Another section contains cells that render HTML content, for which calculating the height is relatively expensive. Right now when the user types into the UITextView, in order to get the table view to update the height of the cell, I call [self.tableView beginUpdates]; [self.tableView endUpdates]; However, this causes the table to recalculate the height of every cell in the table, when I really only need to update the single cell that was typed into. Not only that, but instead of recalculating the estimated height using tableView:estimatedHeightForRowAtIndexPath:, it calls tableView:heightForRowAtIndexPath: for every cell, even those not being displayed. Is there any way to ask the table view to update just the height of a single cell, without doing all of this unnecessary work? Update I'm still looking for a solution to this. As suggested, I've tried using reloadRowsAtIndexPaths:, but it doesn't look like this will work. Calling reloadRowsAtIndexPaths: with even a single row will still cause heightForRowAtIndexPath: to be called for every row, even though cellForRowAtIndexPath: will only be called for the row you requested. In fact, it looks like any time a row is inserted, deleted, or reloaded, heightForRowAtIndexPath: is called for every row in the table cell. I've also tried putting code in willDisplayCell:forRowAtIndexPath: to calculate the height just before a cell is going to appear. In order for this to work, I would need to force the table view to re-request the height for the row after I do the calculation. Unfortunately, calling [self.tableView beginUpdates]; [self.tableView endUpdates]; from willDisplayCell:forRowAtIndexPath: causes an index out of bounds exception deep in UITableView's internal code. I guess they don't expect us to do this. I can't help but feel like it's a bug in the SDK that in response to [self.tableView endUpdates] it doesn't call estimatedHeightForRowAtIndexPath: for cells that aren't visible, but I'm still trying to find some kind of workaround. Any help is appreciated.
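
    One workaround that several people land on (offered here only as a sketch; heightCache and the two helper methods are hypothetical, not from the question) is to accept that begin/endUpdates re-asks for every height, but make those callbacks cheap: cache computed heights per index path and invalidate only the row being edited.

        - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            NSNumber *cached = self.heightCache[indexPath];   // NSMutableDictionary assumed
            if (cached) {
                return cached.floatValue;                     // cheap path for untouched rows
            }
            CGFloat height = [self expensiveHeightForRowAtIndexPath:indexPath];  // hypothetical helper
            self.heightCache[indexPath] = @(height);
            return height;
        }

        - (void)textViewDidChange:(UITextView *)textView
        {
            NSIndexPath *editedPath = [self indexPathForTextView:textView];      // hypothetical helper
            [self.heightCache removeObjectForKey:editedPath];
            [self.tableView beginUpdates];   // only the invalidated row is recomputed
            [self.tableView endUpdates];
        }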

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put EM12c R4) is the latest update to the product. As previous versions, this release provides tons of enhancements and bug fixes, attributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard) Additional Storage Options for Snap Clone (includes support for Database feature CloneDB) Improved Rapid Start Kits Extensible Metering and Chargeback Miscellaneous Enhancements 1. Comprehensive Database Service Catalog Before we get deep into implementation of a service catalog, lets first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers i recommend reading are: Service Catalogs: Defining Standardized Database Service High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA] EM12c comes with an out-of-the-box service catalog and self service portal since release 1. For the customers, it provides the following benefits: Present a collection of standardized database service definitions, Define standardized pools of hardware and software for provisioning, Role based access to cater to different class of users, Automated procedures to provision the predefined database definitions, Setup chargeback plans based on service tiers and database configuration sizes, etc Starting Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple data guard based standby databases. Some salient points of the data guard integration: Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites) The standby databases can be single instance, RAC, or RAC One Node databases Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software The standby databases can be in either mount or read only (requires active data guard option) mode All database versions 10g to 12c supported (as certified with EM 12c) All 3 protection modes can be used - Maximum availability, performance, security Log apply can be set to sync or async along with the required apply lag The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below:  Primary  Standby [1 or more]  EM 12cR4  SI  -  SI  SI  RAC -  RAC SI  RAC RAC  RON -  RON RON where RON = RAC One Node is supported via custom post-scripts in the service template A sample service catalog would look like the image below. 
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to setup this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone In my previous blog posts, i have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and Time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued with the dual solution approach of Hardware and Software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. While in the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus delivering the benefits of database thin cloning, without requiring you to drastically changing the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature, it was first introduced in 11.2.0.2 patchset, it has over the years become more stable and mature. CloneDB leverages a combination of Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on cloneDB, i highly recommend reading the following sources: Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2 Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB The advantages of the new CloneDB integration with EM12c Snap Clone are: Space and time savings Ease of setup - no additional software is required other than the Oracle database binary Works on all platforms Reduce the dependence on storage administrators Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal Uses dNFS to delivers better performance, availability, and scalability over kernel NFS Complete lifecycle of the clones managed by EM12c - performance, configuration, etc 3. Improved Rapid Start Kits DBaaS deployments tend to be complex and its setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to setup Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). 
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. Rapid start kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to setup PDBaaS end-to-end. This kit works for both Oracle's engineered systems like Exadata, SuperCluster, etc and also on commodity hardware. One can draw parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup It can be run from this default location or from any server which has emcli client installed For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py The database_cloud_setup.py script takes two inputs: Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well know via the Exadata system target available in EM. Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS: emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml          The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal. More information available in the Rapid Start Kit chapter in Cloud Administration Guide.  4. Extensible Metering and Chargeback  Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customer, partners, system integrators, etc to : Extend chargeback to any target type managed in EM Promote any metric in EM as a chargeback entity Extend list of charge items via metric or configuration extensions Model abstract entities like no. of backup requests, job executions, support requests, etc  A slew of emcli verbs have also been added that allows administrators to create, edit, delete, import/export charge plans, and assign cost centers all via the command line. More information available in the Chargeback API chapter in Cloud Administration Guide. 5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention. 
These mostly have been asked by customers like you. These are: Custom naming of DB Services Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces Every custom name is validated for uniqueness in EM 'Create like' of Service Templates Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels. Profile viewer View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template Cleanup automation - for failed and successful requests Single emcli command to cleanup all remnant artifacts of a failed request Cleanup can be performed on a per request bases or by the entire pool As an extension, you can also delete successful requests Improved delete user workflow Allows administrators to reassign cloud resources to another user or delete all of them Support for multiple tablespaces for schema as a service In addition to multiple schemas, user can also specify multiple tablespaces per request I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

  • SQL – Building a High Traffic, Profitable Blog – A Unique Gift on Author’s Birthday

    - by Pinal Dave
    Every July 30th, I like to do something new. It is my birthday and I like to give gifts to everyone on this day. Last year, at this time, I had written an article, A Year Older and 3 SQL Server Books and 3 Video Courses – 33. I had written a total of 3 books by that time and had published a total of 3 Pluralsight courses. When I look back on the year, I feel that I gave it my best. Since last July 30th, I have written 6 more books and 5 more video courses. The total is now 9 books and 8 video courses. It seems that I have been producing one new book or course every month since last July.

    Building a High Traffic, Profitable Blog

    Out of my 8 courses, my favorite is my latest course at Pluralsight. This course is about how to build a high traffic blog and monetize it. I have been blogging for over 7 years, and there have been many hurdles and roadblocks, but I have never stopped blogging for a single day. There have been many instances when I felt I should just hit delete and remove my entire blog from the web, but fortunately I had the courage to stand by my decisions. In the end, I kept fighting through the difficult times and kept on blogging. Every day there was a lesson to learn and an issue to resolve. I never gave up and kept on building new content. Today, after 7 years, when I look back there are many stories to tell. It was impossible to write them all down, so I decided to build a course based on my experience. In this course, I share all the best tricks to build a high traffic, profitable blog. When we talk about profit, people often talk about money, but profit is a much bigger word than money. There are many different ways one can profit from their own blog. In this course, I discuss all the different ways you can be profitable by building a high traffic blog. I believe this course is for everybody who aspires to build a website or blog which gives them a profit.

    Here are the major topics covered in this course: Introduction; Techniques to Engage Blog Readers; Social Media – Social Sharing and Social Networking; Search Engine Optimization (SEO); Monetizing a Blog; Frequently Asked Questions; Checklists.

    Personally, I believe this is the best gift I can give to all of you, my friends: build a successful high traffic blog and monetize it. Here is the list of all of my video courses and here is the list of all of my books.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Blogging

    Read the article

  • SqlBuildTask failed due to ArgumentNullException(searchingPaths)

At one of my customers, they have set up TFS 2010. They are using the UpgradeTemplate.xaml to build all their solutions, including GDR2 database projects. When building the project, I got the following error message:

DspBuild:
  Creating a model to represent the project...
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: The "SqlBuildTask" task failed unexpectedly. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: System.ArgumentNullException: Value cannot be null. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: Parameter name: searchingPaths [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionAssemblyResolver..ctor(List`1 searchingPaths) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionTypeLoader.LoadTypes() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionManager..ctor(String databaseSchemaProviderType) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.LoadImpl(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.Load(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.DBBuildTask.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]

Solution: To solve this error, set the MSBuild Platform in the Build Definition to X86.
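As background (my working assumption, not something the original error spells out): the GDR2 database tooling is 32-bit only, so when the build agent runs the project under a 64-bit MSBuild host the task cannot locate its extension assemblies and the searchingPaths list comes back null. Forcing the X86 MSBuild Platform makes the build use the 32-bit MSBuild.exe instead. A quick way to check the fix locally, outside TFS, is to call the 32-bit MSBuild directly; the paths below are illustrative for a default .NET 3.5 install:

    REM 32-bit MSBuild lives under Framework, the 64-bit one under Framework64
    "%windir%\Microsoft.NET\Framework\v3.5\MSBuild.exe" C:\Builds\9\62\Sources\MyDb\MyDb.dbproj /t:Build /p:Configuration=Debug

If that succeeds where a Framework64 invocation fails, the X86 setting in the build definition should clear the error.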

    Read the article

  • First impressions of Scala

    - by Scott Weinstein
I have an idea that it may be possible to predict build success/failure based on commit data. Why Scala? It's a JVM language, it has lots of powerful type features, and it has a linear algebra library which I'll need later.

Project definition and build

Neither Maven nor the Scala build tool (sbt) is completely satisfactory. This Maven archetype (what .NET folks would call a VS project template)

mvn archetype:generate
  -DarchetypeGroupId=org.scala-tools.archetypes
  -DarchetypeArtifactId=scala-archetype-simple
  -DremoteRepositories=http://scala-tools.org/repo-releases
  -DgroupId=org.SW -DartifactId=BuildBreakPredictor

gets you started right away with "hello world" code, unit tests demonstrating a number of different testing approaches, and even a ready-made `.gitignore` file - nice! But the Scala version is behind at v2.8, and more seriously, compiling and testing was painfully slow - so much so that a rapid edit – test – edit cycle was not practical. Lab49 colleague Steve Levine tells me that I can either adjust my pom to use fsc – the fast Scala compiler – or use sbt.

Sbt has some nice features:
It's fast – it uses fsc by default
It has a continuous mode, so `> ~test` will compile and run your unit tests each time you save a file
It can consume (and produce) Maven 2 dependencies
The build definition file can be much shorter than the equivalent pom (about 1/5 the size, as repos and dependencies can be declared on a single line)

And some real limitations:
Limited support for 3rd party integration – for instance, out of the box TeamCity doesn't speak sbt, nor does IntelliJ IDEA
Steeper learning curve for build steps outside the default

Side note: if a language has a fast compiler, why keep the slow compiler around? Even worse, why make it the default? I chose sbt, for the faster development speed it offers.

Syntax

Scala APIs really like to use punctuation – sometimes this works well, as in the following (see the short sketch at the end of this entry):

map1 |+| map2

The `|+|` defines a merge operator which does addition on the `values` of the maps. It's less useful here:

http(baseUrl / url >- parseJson[BuildStatus])

Sure, you can probably guess what `>-` does from the context, but how about `>~` or `>+`?

Language features

I'm still learning, so not much to say just yet. However, case classes are quite useful, implicits scare me, and type constructors have lots of power.

Community

A number of projects, such as https://github.com/scalala and https://github.com/scalaz/scalaz, are split between GitHub and Google Code – GitHub for the src, and Google Code for the docs. Not sure I understand the motivation here.
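To make the `|+|` map-merge example above concrete, here is a minimal sketch. It assumes the operator comes from the Scalaz library linked under Community; the map names and values are purely illustrative:

import scalaz._
import Scalaz._

object MergeDemo extends App {
  // |+| appends two values via their Semigroup instance; for maps this
  // merges the keys and combines colliding values (Ints are added)
  val map1 = Map("build" -> 1, "test" -> 2)
  val map2 = Map("test" -> 3, "deploy" -> 4)

  println(map1 |+| map2)  // Map(build -> 1, test -> 5, deploy -> 4)
}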

    Read the article

  • Why isn't this driver install working (sudo code)?

    - by Nick
    I have a soundcard that I'd like to use and I've been trying to install it and being a new Ubuntu user, I get about half way through this in the Terminal and it stops cooperating with me... See the link (soundcard hyperlink) but basically what I have here: I do the following and it works: sudo apt-get install subversion svn co https://line6linux.svn.sourceforge.net/svnroot/line6linux Change to the directory cd line6linux/driver/trunk Time to build from the source but first make sure you have the latest build and headers sudo apt-get install build-essential sudo apt-get install linux-headers Then after this point it says must specify file to install. Not sure how to do this or what it means. Then, running make gives the following output: ./set_revision.sh ./set_revision.sh: 9: test: https://line6linux.svn.sourceforge.net/svnroot/line6linux/driver/trunk: unexpected operator make -C /lib/modules/3.2.0-29-generic-pae/build CONFIG_LINE6_USB=m SUBDIRS=/home/nick/line6linux/driver/trunk modules make[1]: Entering directory /usr/src/linux-headers-3.2.0-29-generic-pae' CC [M] /home/nick/line6linux/driver/trunk/audio.o /home/nick/line6linux/driver/trunk/audio.c: In function ‘line6_init_audio’: /home/nick/line6linux/driver/trunk/audio.c:30:57: error: ‘THIS_MODULE’ undeclared (first use in this function) /home/nick/line6linux/driver/trunk/audio.c:30:57: note: each undeclared identifier is reported only once for each function it appears in make[2]: * [/home/nick/line6linux/driver/trunk/audio.o] Error 1 make[1]: * [module/home/nick/line6linux/driver/trunk] Error 2 make[1]: Leaving directory/usr/src/linux-headers-3.2.0-29-generic-pae' make: * [default] Error 2 This is in Ubuntu 12.04.1 LTS Another thing, semi related. Cut, copy, paste? Seems like it's different from program to program. I was in the terminal and hit Ctrl-C and then Ctrl-Shift-V in Firefox and it won't paste. But in terminal it will paste. I'm confused. Here is what it's giving me after I hit "Make": nick@NickUbuntu:~/line6linux/driver/trunk$ make ./set_revision.sh ./set_revision.sh: 9: test: https://line6linux.svn.sourceforge.net/svnroot/line6linux/driver/trunk: unexpected operator make -C /lib/modules/3.2.0-29-generic-pae/build CONFIG_LINE6_USB=m SUBDIRS=/home/nick/line6linux/driver/trunk modules make[1]: Entering directory /usr/src/linux-headers-3.2.0-29-generic-pae' CC [M] /home/nick/line6linux/driver/trunk/audio.o /home/nick/line6linux/driver/trunk/audio.c: In function ‘line6_init_audio’: /home/nick/line6linux/driver/trunk/audio.c:30:57: error: ‘THIS_MODULE’ undeclared (first use in this function) /home/nick/line6linux/driver/trunk/audio.c:30:57: note: each undeclared identifier is reported only once for each function it appears in make[2]: *** [/home/nick/line6linux/driver/trunk/audio.o] Error 1 make[1]: *** [_module_/home/nick/line6linux/driver/trunk] Error 2 make[1]: Leaving directory/usr/src/linux-headers-3.2.0-29-generic-pae' make: * [default] Error 2 Looks like these folks also had similar problems: http://ubuntuforums.org/showthread.php?t=1163608&page=3
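For what it's worth, the actual compile failure in that output ('THIS_MODULE' undeclared in audio.c) is a common symptom of building an older out-of-tree driver against a newer kernel, where the module macros are no longer pulled in implicitly. A frequently suggested fix - offered here only as a sketch, since I can't verify it against that exact line6linux revision - is to add the module header near the top of the offending file and rebuild:

    /* at the top of line6linux/driver/trunk/audio.c, next to the existing #includes */
    #include <linux/module.h>   /* declares THIS_MODULE on recent kernels */

After saving the change, re-run make from the same directory and see whether the build gets past audio.o.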

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offer specific advice, not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

% ld -Dargs -z guidance=foo,nodefs
debug:
debug: Solaris Linkers: 5.11-1.2275
debug:
debug: arg[1]  option=-D:  option-argument: args
debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
debug: warning: unrecognized -z guidance item: foo

The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

-z fatal-warnings | nofatal-warnings
--fatal-warnings | --no-fatal-warnings

The -z fatal-warnings and the --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior.

The -z guidance option is defined as follows:

-z guidance[=item1,item2,...]

Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

Specify Required Dependencies
Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

Do Not Specify Non-Required Dependencies
Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

Lazy Loading
Dependencies should be identified for lazy loading. Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

Direct Bindings
Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect.

Pure Text Segment
Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

Mapfile Syntax
All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

Library Search Path
Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

Example
The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

Does not specify all its dependencies
Specifies dependencies it does not use
Does not use direct bindings
Uses a version 1 mapfile
Contains relocations to the readonly allocable text (not PIC)

This scenario is sadly very common — many shared objects have one or more of these issues.

% cat hello.c
#include <stdio.h>
#include <unistd.h>

void hello(void)
{
        printf("hello user %d\n", getpid());
}

% cat mapfile.v1
# This version 1 mapfile will trigger a guidance message

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf

As you can see, the operation completes without error, resulting in a usable object. However, turning on guidance reveals a number of things that could be better:

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
ld: guidance: -z lazyload option recommended before first dependency
ld: guidance: -B direct or -z direct option recommended before first dependency
Undefined                       first referenced
 symbol                             in file
getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
ld: warning: symbol referencing errors
ld: guidance: -z defs option recommended for shared objects
ld: guidance: removal of unused dependency recommended: libelf.so.1
warning: Text relocation remains                referenced
    against symbol                  offset      in file
.rodata1 (section)                  0xa         hello.o
getpid                              0x4         hello.o
printf                              0xf         hello.o
ld: guidance: position independent (PIC) code recommended for shared objects
ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

% cat mapfile.v2
# This version 2 mapfile will not trigger a guidance message
$mapfile_version 2

% cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
ld: guidance: -B direct or -z direct option recommended before first dependency
ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions
The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Virtual Lab part 2&ndash;Templates, Patterns, Baselines

    - by Geoff N. Hiten
    Once you have a good virtualization platform chosen, whether it is a desktop, server or laptop environment, the temptation is to build “X”.  “X” may be a SharePoint lab, a Virtual Cluster, an AD test environment or some other cool project that you really need RIGHT NOW.  That would be doing it wrong. My grandfather taught woodworking and cabinetmaking for twenty-seven years at a trade school in Alabama.  He was the first instructor hired at that school and the only teacher for the first two years.  His students built tables, chairs, and workbenches so the school could start its HVAC courses.   Visiting as a child, I also noticed many extra “helper” stands, benches, holders, and gadgets all built from wood.  What does that have to do with a virtual lab, you ask?  Well, that is the same approach you should take.  Build stuff that you will use.  Not for solving a particular problem, but to let the Virtual Lab be part of your normal troubleshooting toolkit. Start with basic copies of various Operating Systems.  Load and patch server and desktop OS environments.  This also helps build your collection of ISO files, another essential element of a virtual Lab.  Once you have these “baseline” images, you can use your Virtualization software’s snapshot capability to freeze the image.  Clone the snapshot and you have a brand new fully patched machine in mere moments.  You may have to sysprep some of the Microsoft OS environments if you are going to create a domain environment or experiment with clustering.  That is still much faster than loading and patching from scratch. So once you have a stock of raw materials (baseline images in this case) where should you start.  Again, my grandfather’s workshop gives us the answer.  In the shop it was workbenches and tables to hold large workpieces that made the equipment more useful.  In a Windows environment the same role falls to the fundamental network services:  DHCP, DNS, Active Directory, Routing, File Services, and Storage services.  Plan your internal network setup.  Build out an AD controller with all the features listed.  Make the actual domain an isolated domain so it will not care about where you take it.  Add the Microsoft iSCSI target.  Once you have this single system, you can leverage it for almost any network environment beyond a simple stand-alone system. Having these templates and fundamental infrastructure elements ready to run means I can build a quick lab in minutes instead of hours.  My solutions are well-tested, my processes fully documented with screenshots, and my plans validated well before I have to make any changes to client systems.  the work I put in is easily returned in increased value and client satisfaction.

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM. North Carolina State University, Jane S. McKimmon Conference Center 1101 Gorman St Raleigh North Carolina 27606 United States From the Raleigh Event page: Event Overview Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect: Windows Development with Visual Studio 2010 Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers. Web and Cloud Development with Visual Studio 2010 If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010 both for in-house ASP.NET development and the new frontier of the Cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated JQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure. Windows Phone 7 Developer Tools and Platform Overview This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications. Have a day. :-|

    Read the article

  • Speed up your Silverlight Debugging for large projects

    - by Aligned
I'm working on a 5+ year old ASP.NET project that has 74+ projects, and we've been adding new Silverlight applications to run in the ASP.NET page islands. My machine at work isn't the most powerful, so I find myself waiting a lot for the whole thing to build. I'm using Visual Studio 2010, so that takes up a lot of resources as well. This causes me to get distracted and I start looking at the news... I need to combat that more :-). I can't get a new machine - that's up to someone else - so I've found a few tricks to help.

1. Only build the Silverlight project you're working with. This will build all referenced projects (you can see these by right-clicking and clicking Project Dependencies) and package a new XAP (you can see all the actions in your build output window). Then refresh your page with the Silverlight app and it's up-to-date.

2. I was working with a co-worker (thanks Jordan) who was using the Debug -> Attach to Process window. In the "Attach to:" row there is a "Select..." button. In the dialog, click "Debug these code types:" and select Silverlight. Hit OK. Then all you need to do is find your process (you might need to click the refresh button). I'm usually debugging in IE, so I select the first one and push "i" on the keyboard. That jumps me to the open IE windows. Find the one whose type is Silverlight, x86. It is usually directly above an entry of type x86 that shows the page title. Click Attach and watch your output window spit out messages about debug symbols loading and your breakpoints being enabled (if this doesn't happen, you chose the wrong process; hit stop and try again). Now you can debug the client code as normal; server code still requires a full F5 or attaching to the correct process.

To improve this even further, bind the menu item to a keystroke. I chose Ctrl + X, X (Tools -> Options -> Keyboard, search for Debug.AttachToProcess, set the shortcut keys globally, and assign). Most of the time I build the project, then hit Ctrl + X, X, then "i", then Enter, and I'm debugging. The process I want is usually the first IE in the list.

    Read the article

  • SQL Server 2000 and SSL Encryption

    - by Angry_IT_Guru
We are a datacenter that hosts a SQL Server 2000 environment which provides database services for a product we sell; the product is loaded as a rich-client application at each of our many clients and their workstations. Today, the application uses straight ODBC connections from the client site to our datacenter. We need to begin encrypting the credentials -- since everything is clear-text today and the authentication is weakly encrypted -- and I'm trying to determine the best way to implement SSL on the server while minimizing the impact on the client. A few things, however:

1) We have our own Windows domain and all our servers are joined to our private domain. Our clients know nothing of our domain.

2) Typically, our clients connect to our datacenter servers either by: a) using a TCP/IP address, or b) using a DNS name that we publish via the internet, via zone transfers from our DNS servers to our customers, or the client can add static HOSTS entries.

3) From what I understand about enabling encryption, I can go to the Network Utility and select the "encryption" option for the protocol that I wish to encrypt, such as TCP/IP.

4) When the encryption option is selected, I have a choice of installing a third-party certificate or a self-signed one. I have tested the self-signed, but do have potential issues; I'll explain in a bit. If I go with a third-party cert, such as Verisign or Network Solutions... what kind of certificate do I request? These aren't IIS certificates? When I go to create a self-signed certificate via Microsoft's certificate server, I have to select "Authentication certificate". What does this translate to in the third-party world?

5) If I create a self-signed certificate, I understand that the "issued to" name has to match the FQDN of the server that is running SQL. In my case, I have to use my private domain name. If I use this, what does this do to my clients when they try to connect to my SQL Server? Surely they cannot resolve my private DNS names on their network.... I've also verified that when the self-signed certificate is installed, it has to be in the local personal store for the user account that is running SQL Server. SQL Server will only start if the FQDN matches the "issued to" of the certificate and SQL is running under the account that has the certificate installed. If I use a self-signed certificate, does this mean every one of my clients has to install it in order to verify it?

6) If I use a third-party certificate, which sounds like the best option, do all my clients have to have internet access when accessing my private servers over their private WAN connection in order to verify the certificate? What do I do about the FQDN? It sounds like they have to use my private domain name -- which is not published -- and can no longer use the one that I set up for them to use?

7) I plan on upgrading to SQL 2005 soon. Is setup of SSL any easier/better with SQL 2005 than SQL 2000?

Any help or guidance would be appreciated.
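A note on question 4, offered as a sketch rather than an authoritative answer: the certificate SQL Server wants is an ordinary SSL/TLS "Server Authentication" certificate (enhanced key usage OID 1.3.6.1.5.5.7.3.1) whose subject CN matches the FQDN the service resolves for itself - essentially the same kind of certificate you would request for IIS, just installed into the personal store of the account running the SQL Server service. For lab testing, a self-signed certificate of that type can be generated with the makecert tool from the Platform SDK; the hostname below is obviously hypothetical:

    makecert -r -pe -n "CN=sqlbox.internal.example.com" ^
        -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange ^
        SqlSsl.cer

Clients will only trust that certificate if its issuer is imported into their Trusted Root store, which is exactly why a third-party certificate tends to be the easier option for a large client base.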

    Read the article

  • Flash Builder missing fundamental things

    - by Bart van Heukelom
    All of a sudden Flash Builder 4 is missing all kinds of fundamental things and is generating incorrect errors. I've had the same issue yesterday, where I fixed it by downloading a new Flex SDK and importing that into FB. I did this again, but this time it fixed nothing. I don't think it's something I did, like removing critical references from the build path. The errors also appeared on projects I was not working on at the time. It occurs for ActionScript, Flex and Flex Library projects alike. Update: I find this stack trace in the Flash Builder error log: !ENTRY com.adobe.flexbuilder.project 4 43 2010-05-11 11:55:47.495 !MESSAGE Uncaught exception in compiler !STACK 0 java.lang.NullPointerException at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2592) at macromedia.asc.parser.VariableBindingNode.evaluate(VariableBindingNode.java:64) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2233) at macromedia.asc.parser.ListNode.evaluate(ListNode.java:44) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2578) at macromedia.asc.parser.VariableDefinitionNode.evaluate(VariableDefinitionNode.java:48) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310) at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2503) at macromedia.asc.parser.WithStatementNode.evaluate(WithStatementNode.java:44) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310) at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2891) at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2905) at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3643) at macromedia.asc.parser.ClassDefinitionNode.evaluate(ClassDefinitionNode.java:106) at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3371) at macromedia.asc.parser.ProgramNode.evaluate(ProgramNode.java:80) at flex2.compiler.as3.As3Compiler.analyze4(As3Compiler.java:709) at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:3089) at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:2977) at flex2.compiler.CompilerAPI.batch2(CompilerAPI.java:528) at flex2.compiler.CompilerAPI.batch(CompilerAPI.java:1274) at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1496) at flex2.tools.oem.Application.compile(Application.java:1188) at flex2.tools.oem.Application.recompile(Application.java:1133) at flex2.tools.oem.Application.compile(Application.java:819) at flex2.tools.flexbuilder.BuilderApplication.compile(BuilderApplication.java:344) at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder$MyBuilder.mybuild(ASApplicationBuilder.java:276) at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder.build(ASApplicationBuilder.java:127) at com.adobe.flexbuilder.multisdk.compiler.internal.ASBuilder.build(ASBuilder.java:190) at com.adobe.flexbuilder.multisdk.compiler.internal.ASItemBuilder.build(ASItemBuilder.java:74) at com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.buildItem(FlexProjectBuilder.java:480) at 
com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.build(FlexProjectBuilder.java:306) at com.adobe.flexbuilder.project.compiler.internal.FlexIncrementalBuilder.build(FlexIncrementalBuilder.java:157) at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:627) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:170) at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:201) at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:253) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:256) at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:309) at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:341) at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:140) at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:238) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Read the article

  • CCNet web dashboard not showing anything when MSBuild fails

    - by cfdev9
    I have a simple project in ccnet using svn & msbuild only. There is a 30 second trigger for svn and the msbuild file compiles a web application then copies it to a numbered build folder. When an error occurs in the msbuild task I get a failed build. When I view a failed build in the web dashboard I can see the 'Modifications since last build' section in the dashboard, but nothing else. I have to click on the build log and read through all of the xml in the error log to see what the error was. Why won't the dashboard show the errors from the build log? I haven't changed anything in the dashboard.config since installing ccnet. Dashboard Version : 1.5.7256.1 <project name="SimpleWebapp1"> <artifactDirectory>C:\Program Files\CruiseControl.NET\server\SimpleWebapp1\Artifacts\</artifactDirectory> <triggers> <intervalTrigger name="continuous" seconds="30" buildCondition="IfModificationExists" initialSeconds="5" /> </triggers> <sourcecontrol type="svn"> <executable>C:\Program Files\CollabNet\Subversion Client\svn.exe</executable> <trunkUrl>https://server:8443/svn/SimpleWebapp1/trunk</trunkUrl> <workingDirectory>D:\CCNetSandbox\SimpleWebapp1</workingDirectory> <username>username</username> <password>password</password> </sourcecontrol> <tasks> <msbuild> <executable> C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe </executable> <workingDirectory> D:\CCNetSandbox\SimpleWebapp1 </workingDirectory> <projectFile>SimpleWebapp1.build</projectFile> <buildArgs>/p:Configuration=Debug /p:Platform="Any CPU"</buildArgs> <targets>CompileLatest</targets> <timeout>900</timeout> <logger>ThoughtWorks.CruiseControl.MsBuild.XMLLogger, C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger> </msbuild> </tasks> <publishers> <xmllogger /> <buildpublisher> <publishDir>C:\Program Files\CruiseControl.NET\server\SimpleWebapp1\Artifacts\</publishDir> <useLabelSubDirectory>true</useLabelSubDirectory> </buildpublisher> </publishers> </project>

    Read the article

  • Prevent command window showing when NAnt compiling windows forms application

    - by HollyStyles
I have been playing with creating little Windows Forms apps without Visual Studio. I create my source in Notepad++ and compile with a NAnt build file. When I run the application, a command window is displayed as well as the application window. How can I prevent the command window showing up?

using System;
using System.Drawing;
using System.Windows.Forms;
using System.Media;
using System.IO;

namespace mynamespace
{
    class MyForm : Form
    {
        public Button btnPlay;

        MyForm()
        {
            this.SuspendLayout();
            this.Text = "My Application";
            InitialiseForm();
            this.ResumeLayout(false);
        }

        private void InitialiseForm()
        {
            btnPlay = new Button();
            btnPlay.Location = new System.Drawing.Point(30,40);
            btnPlay.Text = "Play";
            btnPlay.Click += new System.EventHandler(btnPlay_Click);
            this.Controls.Add(btnPlay);
        }

        protected void btnPlay_Click(object sender, EventArgs e)
        {
            string wav = "testing123.wav";
            Stream resourceStream = System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceStream(wav);
            SoundPlayer player = new SoundPlayer(resourceStream);
            player.Play();
        }

        public static void Main()
        {
            Application.Run(new MyForm());
        }
    }
}

Build file:

<?xml version="1.0"?>
<project name="myform" default="build" basedir=".">
    <description>My Form app</description>
    <property name="debug" value="true" overwrite="false"/>
    <target name="clean" description="Remove all generated files">
        <delete dir="build"/>
    </target>
    <target name="build" description="Compile the source" depends="clean">
        <mkdir dir="build"/>
        <csc target="exe" output="build\MyForm.exe" debug="${debug}" verbose="true">
            <resources>
                <include name="app\resources\*.wav" />
            </resources>
            <sources>
                <include name="app\MyForm.cs"/>
            </sources>
        </csc>
    </target>
</project>
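One likely cause, offered as a sketch rather than a verified answer: the <csc> task above builds a console executable (target="exe"), and Windows always allocates a console window for console executables. Compiling as a Windows executable instead should make the extra window go away:

    <!-- same csc task, but emit a Windows (GUI) executable so no console window is created -->
    <csc target="winexe" output="build\MyForm.exe" debug="${debug}" verbose="true">
        <!-- resources and sources unchanged -->
    </csc>

This is the NAnt equivalent of passing /target:winexe to csc.exe.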

    Read the article

< Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >