Search Results

Search found 3634 results on 146 pages for 'commit charge'.


  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev, working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself as to how to use it. I'd like to use it for making and distributing builds. My ideal would be:

    - Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up; some of our apps are 500MB or more.)
    - Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team.
    - When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight.

    I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests though, not to do any of the code signing, IPA creation or TestFlight work. So my questions:

    1. I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"?
    2. How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project…
    3. Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?
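
    A reasonable mapping for the three build types is indeed three Jenkins jobs per app ("App X (continuous)" polling SCM, "App X (nightly)" on a cron schedule, "App X (client)" triggered manually), since triggers are configured per job. Below is a minimal sketch of what the nightly job's "Execute shell" step might look like; the target, signing identity, profile and TestFlight tokens are placeholders, and the curl call targets the testflightapp.com upload API of the era:

        # Nightly build sketch (names and tokens are placeholders)
        xcodebuild -target MyApp -configuration Release \
            CODE_SIGN_IDENTITY="iPhone Distribution: My Agency" clean build
        xcrun -sdk iphoneos PackageApplication \
            -v "build/Release-iphoneos/MyApp.app" \
            -o "$WORKSPACE/MyApp-$BUILD_NUMBER.ipa" \
            --embed internal.mobileprovision
        # Upload to TestFlight (testflightapp.com API of the time)
        curl http://testflightapp.com/api/builds.json \
            -F file=@"$WORKSPACE/MyApp-$BUILD_NUMBER.ipa" \
            -F api_token="$TF_API_TOKEN" \
            -F team_token="$TF_TEAM_TOKEN" \
            -F notes="Nightly build $BUILD_NUMBER" \
            -F distribution_lists=Internal

    The continuous job would stop after the IPA is copied to the web-served directory, and the client job would swap in the client's profile and take the version number as a build parameter.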

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books SQL Server Functions and SQL Server 2012 Queries. The books are available in the market in a limited edition, but you can get them for free on Wednesday, Nov 14, 2012. Not only are they free, but you can additionally learn SQL Server 2012 Error Handling as well. My book’s co-author Rick Morelan is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is the brief abstract of the webinar: People are often shocked when they see the demo in this talk where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN…COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with and without structured error handling. Register for the webinar now to learn: how to predict the Error Action and control it; nuances between successful and failing SQL statements; essential SQL Server 2012 configuration options. Register for the webinar and be present during the session. My co-author will announce a winner (maybe more than one winner) during the session. If you are present during the session, you are eligible to win the book. The webinar is scheduled for two different times to accommodate various time zones: 1) 10am ET/7am PT, 2) 1pm ET/11am PT. Each webinar will have its own winner, so you can increase your chances by attending both webinars. Do not miss this opportunity; register for the webinar right now. The recordings of the webinar may not be available. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
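
    For readers who have not seen the behavior the webinar demonstrates, here is a minimal illustration (the temp table is made up): by default a statement-level failure inside BEGIN TRAN…COMMIT TRAN aborts only that statement, and the rest of the batch still commits.

        CREATE TABLE #t (id INT PRIMARY KEY);
        BEGIN TRAN;
            INSERT INTO #t VALUES (1);
            INSERT INTO #t VALUES (1);  -- PK violation: only this statement fails
            INSERT INTO #t VALUES (2);  -- still executes
        COMMIT TRAN;                    -- commits; #t now holds rows 1 and 2
        -- With SET XACT_ABORT ON, the same violation would doom the whole
        -- transaction and nothing would be committed.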

    Read the article

  • Collaboration using github and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch into the master branch in order to publish new features to our production servers, some features that may not be ready will be applied to the production servers. So we're considering having each developer work on a feature/topic branch, where each of them works on their own features and, when a feature is ready, merges it into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features there. While this is not a huge issue, as they can test on their local machines, the one problem I foresee is if we want to test callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this problem? Note: we're not advanced git users. We use the GitHub app for Mac OS X and Windows to commit our work.
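
    For reference, the topic-branch flow being considered looks like this on the command line (branch names are invented):

        $ git checkout -b feature/sendgrid-callbacks development
        # ...commit work on the feature...
        $ git push origin feature/sendgrid-callbacks
        # when the feature is ready for the shared test server:
        $ git checkout development
        $ git merge --no-ff feature/sendgrid-callbacks
        $ git push origin development    # the test server pulls this branch

    For the callback problem, one option is to let the test server track whichever branch is under test (or spin up one test instance per feature branch) so that services like SendGrid have a public URL to call before the code reaches development.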

    Read the article

  • How do I stop my ethernet network connection from dropping?

    - by Sean Hill
    My ethernet-based network connection doesn't stay up consistently. I'm running a ping against the gateway and it will: work for a minute; freeze, time out, or give multi-second response times; repeat. If it's stuck and I disable/enable networking through the network manager applet, everything will work fine again for a minute. After 280 packets transmitted I'm getting 41% packet loss. I've tried a different cable and connection to the gateway, but this had no effect. The distance to the gateway is just about 3 feet. It seems to work fine if I switch over to Windows, but Ubuntu is my main OS and I can't even use it right now as I depend on the network.

    My setup:
    OS: Ubuntu 11.04, dual-booting Windows 7
    Mobo: Gigabyte Z68X-UD4-B3
    CPU: Intel Core i7 2600K

    Edit: A little clarification... Network Manager is still showing me as connected, but I am unable to reach the gateway or anything beyond. At no point does NM suggest the connection is lost, and calling ifconfig shows that I still have an IP address. I tried connecting to a different gateway with a different cable and the same problem arises. As requested:

        $ lspci | grep -i eth
        07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        $ dmesg | tail -f
        [ 14.024709] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0
        [ 14.026443] EXT4-fs (sda7): re-mounted. Opts: commit=0
        [ 14.176101] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj.
        [ 23.917731] eth0: no IPv6 routers present
        [ 726.109697] r8169 0000:07:00.0: eth0: link up
        [ 733.169494] r8169 0000:07:00.0: eth0: link up
        [ 753.930119] r8169 0000:07:00.0: eth0: link up
        [ 880.787332] r8169 0000:07:00.0: eth0: link up
        [ 1159.161283] r8169 0000:07:00.0: eth0: link up
        [ 1406.623550] r8169 0000:07:00.0: eth0: link up

    Edit @roland-taylor: The network is always available under Windows. Pings do not time out, applications do not complain of no network availability, and large downloads are not interrupted or slowed.
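
    The repeated "r8169 ... eth0: link up" lines in dmesg suggest the link itself keeps renegotiating. One low-risk experiment (an editor's suggestion, not a confirmed fix) is to pin the speed and duplex and watch whether the ping drops stop; eth0 is the interface name from the logs above:

        $ sudo ethtool eth0                                        # show current link state
        $ sudo ethtool -s eth0 speed 100 duplex full autoneg off   # pin 100Mbps full duplex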

    Read the article

  • Work Item Keyboard Shortcuts, Resolving Mercurial Work Items, WikiPlex 2.0

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed the latest version of the CodePlex software yesterday.

    Keyboard Shortcuts: With this release, we have added a set of keyboard shortcuts for common tasks in the Issue Tracker. This feature was a popular request in the CodePlex Issue Tracker. The CodePlex team visits the issue tracker frequently when researching and considering new features. If you haven’t visited it recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex. To view the available shortcuts, type ? from any page within the issue tracker to bring up the shortcut help dialog. Please give us feedback on this feature and let us know what additional shortcuts would be useful.

    Resolve Work Items When Pushing Mercurial Changes: Another feature we added is the ability to resolve work items when pushing changes to your Mercurial repository, which has been available to our TFS/SVN users for quite some time. The required format is identical to the SVN format listed here. When committing your changes locally, add "Work Items: Id, AnotherId" to your commit message. When you push, CodePlex will detect this message, add a comment to the work item, and resolve it (see the example after this post).

    WikiPlex Goes 2.0! CodePlex continues to improve WikiPlex, our open source wiki engine. WikiPlex hit another major milestone today with the release of version 2.0! We have added several new features, including: interleaving ordered and unordered lists, specifying the height and width for images, a multi-line indentation macro, and a restructuring of some of the API. Visit Matt's announcement for more information on the release, or grab the binaries via NuGet or CodePlex.
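
    For example, with a hypothetical work item id of 1234, the flow from a Mercurial clone is:

        $ hg commit -m "Fix crash in importer. Work Items: 1234"
        $ hg push    # CodePlex reads the message, comments on item 1234 and resolves it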

    Read the article

  • 10.10 boots to command line login prompt

    - by greggory.hz
    I recently installed Ubuntu 10.10 on a computer that was previously running 10.04 (which worked fine). Now, each time I boot up, it starts at a command-line login prompt. I can log in and it stays at the command line (as expected). I can then manually start gdm with sudo start gdm and it works fine. I can also enable compiz (using the proprietary nvidia drivers), so I'm reasonably confident that it's not a driver problem (at least not in the sense that the drivers just flat out aren't working). Interestingly, if I leave it at the command prompt without logging in, after about 5 or 10 minutes gnome starts up on its own. I'm not sure what is causing this. This is what dmesg | tail gives me after a manual start of gdm:

        [ 15.664166] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 270.18 Tue Jan 18 21:46:26 PST 2011
        [ 15.991304] type=1400 audit(1297543976.953:11): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=990 comm="apparmor_parser"
        [ 16.606986] eth0: link up, 100Mbps, full-duplex, lpa 0xCDE1
        [ 18.798506] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
        [ 26.740010] eth0: no IPv6 routers present
        [ 90.444593] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
        [ 189.252208] audit_printk_skb: 21 callbacks suppressed
        [ 189.252213] type=1400 audit(1297544150.218:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1876 comm="apparmor_parser"
        [ 189.252584] type=1400 audit(1297544150.218:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1876 comm="apparmor_parser"
        [ 351.159585] lo: Disabled Privacy Extensions
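
    A few diagnostic starting points (a sketch; the paths are the Ubuntu 10.10 defaults, and this is not a confirmed fix). The several-minute delay before GNOME appears hints that gdm's upstart start condition is only being satisfied late in boot:

        $ grep -A2 "start on" /etc/init/gdm.conf   # upstart conditions gdm waits for
        $ grep EE /var/log/Xorg.0.log              # X server errors, if any
        $ ls /var/log/gdm/                         # gdm's own logs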

    Read the article

  • CodePlex learns to talk to other services!

    CodePlex is now able to talk to other services! For example, if you want CodePlex to tell Trello to update cards on your Trello board, it can do it. Or if you want CodePlex to notify your Campfire chat room when updates are pushed, it can do that too. To start off, we are adding support for the following services:

    - Campfire – Notify a Campfire chat room when commits occur
    - HipChat – Notify a HipChat chat room when commits occur
    - Trello – Add commit summaries to Trello cards by referencing those cards in commit messages
    - Twitter – Notify your Twitter followers when updates are pushed to your project

    In addition, we will continue to support our existing integrations with:

    - Windows Azure – Continuously deploy to Windows Azure on pushes (for Git and Hg projects)
    - AppHarbor – Continuously deploy to AppHarbor on pushes

    To set up these integrations for your project, navigate to the project settings page as a project coordinator and click on the services section. While we are starting with these six services, the infrastructure is now in place to allow us to quickly roll out new integrations. We would love to hear which services and integrations you would like to see most on our suggestions page. We realize that there are some services and URLs that only make sense for your project to send notifications to. To support this scenario, we plan to add generic web hooks in the near future. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.

    Read the article

  • What is the basic loadout for an open source web developer?

    - by DeveloperDon
    Thus far, I have mainly been an embedded developer, but I am interested in having the flexibility to do mobile and web development as well. I think my tools should include the following, but probably a lot more:

    - LAMP stack
    - Java IDEs like Eclipse and IntelliJ
    - JS frameworks like Dojo, Node.js, AngularJS (is it better to mix or commit to one?)
    - Cloud solutions like EC2 and Azure (again, OK to mix, or better to commit to one?)
    - Google APIs
    - Continuous integration server
    - Source control tools, with Git for new work and SVN, CVS, and others for imports
    - FTP server
    - Unit test runners
    - Bug trackers
    - OOAD modeling tools or plug-ins?
    - Graphic design tools?
    - Hosting services
    - XML / JSON / other markup?
    - Content management, SEO?

    I am also interested to know if there are tools where it might be better to mix, match, or support all available (maybe for source control) and others where the full focus should be on one (maybe Java vs. C# or Windows vs. Linux vs. Mac OS). Perhaps some of these questions need the context of whether the projects will be greenfield (just pick a favorite) or maintenance (no choice; each project continues its legacy, sometimes with poor tools).

    Read the article

  • Parse git log by modified files

    - by MrUser
    I have been told to make git messages for each modified file all one line so I can use grep to find all changes to that file. For instance:

        $ git commit -a
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc......

        $ git log | grep path/to/file.cpp/.h
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc......
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc......
        modified: path/to/file.cpp/.h - 1) change1, 2) change2, etc......

    That's great, but then the actual line is harder to read because it either runs off the screen or wraps and wraps and wraps. If I want to make messages like this:

        $ git commit -a
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......

    is there a good way to then use grep or cut or some other tool to get a readout like

        $ git log | grep path/to/file.cpp/.h
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......
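
    Two stock options already cover this without custom tooling; git log --grep matches against the full commit message and prints whole commits, and grep's -A flag prints trailing context lines:

        $ git log --grep='path/to/file.cpp/.h'      # whole commits whose message matches
        $ git log | grep -A3 'path/to/file.cpp/.h'  # matched line plus the 3 lines after it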

    Read the article

  • How do you communicate improvements in tools and process to the development team?

    - by birryree
    Hi everyone, my team does a lot of internal tooling and infrastructure work - you can think of us as a small-scale version of the teams at Facebook, Etsy, Netflix, etc. who build all the infrastructure for scaling their services up to thousands/tens of thousands of servers and supporting millions of users. Lately, we've been running full steam ahead improving many of the tools we use internally, like tools for automatically creating new servers, setting up new application instances, etc. An end result of this has been decreased developer frustration, but increased 'ignorance' among most of the developer team about how to use our tools correctly and effectively. More often than not, my team will be asked by other teams to help them use the tools. Solutions we've thought up or things already in place:

    - All our code is relatively simple and self-explanatory, with good comments where necessary, so developers could read the scripts. Counterargument: you can guess this isn't a particularly good idea, having people read our tools' code to figure out how to use them.
    - All our code is committed to Subversion with very detailed commit messages about changes; developers could read the commit emails. Counterargument: expect the developers to read all our commits? Ludicrous.
    - Wiki - we have an internal company wiki that we try to maintain with up-to-date information, but as we are moving so fast, the wiki has to keep pace as well. Counterargument: as mentioned, we move fast, as more improvements to our tools are added daily. This still relies on people reading something that might change constantly.
    - Email the team? We could email the team when we have a glut of improvements to communicate.

    So as you can all see, we are trying to find new ideas and explore options we haven't thought of yet. Has anyone else ever been in a similar situation and have some guidance?

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)

    but:

    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
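
    The usual practice is indeed a single long-lived IndexWriter: call commit() whenever changes must become durable and visible to newly opened readers, and close() only at application shutdown. A minimal sketch against the Lucene 3.x API the question refers to (the index path and analyzer choice are placeholders):

        import java.io.File;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;
        import org.apache.lucene.util.Version;

        public class SingleWriter {
            public static void main(String[] args) throws Exception {
                Directory dir = FSDirectory.open(new File("/path/to/index"));
                IndexWriterConfig cfg = new IndexWriterConfig(
                        Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
                IndexWriter writer = new IndexWriter(dir, cfg);  // one shared, thread-safe instance

                writer.addDocument(new Document());  // add/update/delete as needed
                writer.commit();  // buffered changes become durable and reader-visible

                // ...the writer normally stays open for the life of the application...
                writer.close();   // only at shutdown
            }
        }

    The same holds with NRTManager: it provides near-real-time visibility between commits, but durability still comes from calling commit() on the underlying IndexWriter, so yes, committing is possible.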

    Read the article

  • Commenting on Commits

    The source code experience on CodePlex just got more social. Now, you can comment directly on a commit when viewing a project’s source code history. This feature acts as a nice complement to the existing ability to comment on a pull request's commits. To get started with this feature, navigate to the commit you are interested in from the source tab. As you hover over the line you would like to comment on, an icon to add a comment will appear to the left. Click on the icon, and comment away! Once you comment on a line, the person who committed the change and anyone involved in the conversation will be notified of the comment by email. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!

    Read the article

  • Objective-C NSMutableArray Count Causes EXC_BAD_ACCESS

    - by JoshEH
    I've been stuck on this for days, and each time I come back to it I keep making my code more and more confusing to myself, lol. Here's what I'm trying to do. I have a table list of charges; I tap on one and it brings up a modal view with charge details. When the modal is presented, an object is created to fetch an XML list of users; it parses the XML and returns an NSMutableArray via a custom delegate. I then have a button that presents a picker popover; when the popover view is created, the user array is passed in an initWithArray: call to the popover view. I know the data in the array is right, but when [pickerUsers count] is called I get an EXC_BAD_ACCESS. I assume it's a memory/ownership issue, but nothing seems to help. Any help would be appreciated. Relevant code snippets:

    Charge Popover (charge details modal view):

        @interface ChargePopoverViewController .....
            NSMutableArray *pickerUserList;
        @property (nonatomic, retain) NSMutableArray *pickerUserList;

        @implementation ChargePopoverViewController
        @synthesize whoOwesPickerButton, pickerUserList;

        - (void)viewDidLoad {
            JEHWebAPIPickerUsers *fetcher = [[JEHWebAPIPickerUsers alloc] init];
            fetcher.delegate = self;
            [fetcher fetchUsers];
        }

        - (void)JEHWebAPIFetchedUsers:(NSMutableArray *)theData {
            [pickerUserList release];
            pickerUserList = theData;
        }

        - (void)pickWhoPaid:(id)sender {
            UserPickerViewController *content = [[UserPickerViewController alloc] initWithArray:pickerUserList];
            UIPopoverController *popover = [[UIPopoverController alloc] initWithContentViewController:content];
            [popover presentPopoverFromRect:whoPaidPickerButton.frame inView:self.view permittedArrowDirections:UIPopoverArrowDirectionAny animated:YES];
            content.delegate = self;
        }

    User Picker View Controller:

        @interface UserPickerViewController .....
            NSMutableArray *pickerUsers;
        @property (nonatomic, retain) NSMutableArray *pickerUsers;

        @implementation UserPickerViewController
        @synthesize pickerUsers;

        - (UserPickerViewController *)initWithArray:(NSMutableArray *)theUsers {
            self = [super init];
            if (self) {
                self.pickerUsers = theUsers;
            }
            return self;
        }

        - (NSInteger)pickerView:(UIPickerView *)thePickerView numberOfRowsInComponent:(NSInteger)component {
            // Dies here with EXC_BAD_ACCESS, but NSLog(@"The content of array is %@", pickerUsers);
            // shows the correct array data
            return [pickerUsers count];
        }

    I can provide additional code if it might help. Thanks in advance.
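
    From the snippets shown, the most likely culprit is the delegate callback: it assigns the ivar directly, so (pre-ARC) the new array is never retained and has been deallocated by the time the picker asks for its count. NSLog can still appear to print the right contents if the freed memory has not yet been reused. A sketch of the fix, going through the retaining property setter:

        - (void)JEHWebAPIFetchedUsers:(NSMutableArray *)theData
        {
            // Before: [pickerUserList release];
            //         pickerUserList = theData;   // plain assignment, never retained
            self.pickerUserList = theData;         // (retain) setter keeps the array alive
        }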

    Read the article

  • Magento Checkout options

    - by graham barnes
    Hi, I want to add some options to my Magento store. Let's say I print on clothing: a customer buys some t-shirts, shirts and jackets from me, and it totals to £60 + VAT. In the checkout area (where I sign up), and not before, I need to add a text box and an upload option. Can I do this? Ideally I then want to add some pricing options if the user has chosen to add branding to a product or multiple products, e.g. if the branding is on the top right of the shirt it will cost £5.00, if on the back it costs £7.00, etc., all (if possible) done via the admin CP. I also want an option so that when they upload their logo for the first time they are charged a one-off charge, like a setup fee, but if the customer has already sent in their logo then no charge applies. Thanks, Graham

    Read the article

  • Vehicle 2 Vehicle Communication Questions

    - by pinnacler
    I have a rare opportunity to meet the man in charge of implementing vehicle-to-vehicle communication for the US Department of Transportation, along with 2 others, in a few hours. Do YOU have any questions for him? I know this is a little outside the normal, but this is a 'reverse' thread, and I felt he has some great knowledge on the subject that I want to share with this community. I'll post his answers to your questions later today. Ask about V2V implementation, privacy issues, use cases, or if you've thought of a great way to use V2V and want me to share it with him, he can at least think about it. He is in charge of the panel that creates the standard. Or anything else...

    Read the article

  • Salary of a junior freelancer programmer

    - by Frank
    Hi, I'm pursuing my PhD in CS and starting to freelance to pay the bills and get some experience. Since I'm new to the freelancing field, I was wondering how much a junior programmer should charge for this kind of work. Like many, I've started freelancing with websites, and I'm doing pretty much all the work (design, programming, finding hosting/domain). I would like to give details to my clients so they know how much each part of website development costs. How much should I charge? Should I charge an hourly rate or a price for the whole project? How did you do it, and why? Thanks

    Read the article

  • How can you toggle between two sets of values per data series in flot?

    - by Jedidja
    flot has built-in support for multiple data series (sample code) and also dual axes (sample code). Assuming multiple data series (water, electricity, etc.) that each have an amount (usage) and a dollar value (the charge for that usage), what would be the best way to use flot to display either the amounts or the dollar values for all the data series, while still supporting toggling the display of each individual series? The idea is to send down all the data in one GET request and then let the client take care of everything else in JavaScript. Ideally we could use triplets somehow {date, amount, charge}, and then possibly split that into two arrays for flot.
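
    One way to sketch this (element ids and sample data are invented): keep the triplets exactly as delivered, and derive whichever [date, value] pairs flot needs at render time, filtering on a per-series visibility flag.

        // data arrives once as [date, amount, charge] triplets per series
        var usage = {
          water:       [[1262304000000, 10, 2.5], [1264982400000, 12, 3.0]],
          electricity: [[1262304000000, 55, 8.1], [1264982400000, 60, 9.2]]
        };
        var visible = { water: true, electricity: true };  // toggled per series
        var showCharge = false;                            // amount vs. dollar toggle

        function render() {
          var idx = showCharge ? 2 : 1;  // index 1 = amount, 2 = charge
          var series = [];
          for (var name in usage) {
            if (!visible[name]) continue;
            series.push({
              label: name,
              data: usage[name].map(function (p) { return [p[0], p[idx]]; })
            });
          }
          $.plot($('#chart'), series);   // flot's standard entry point
        }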

    Read the article

  • Good SQL error handling in Stored Procedures

    - by developerit
    When writing SQL procedures, it is really important to handle errors cautiously. Having that in mind will probably save you effort, time and money. I have been working with MS-SQL 2000 and MS-SQL 2005 (I have not had the opportunity to work with MS-SQL 2008 yet) for many years now, and I want to share with you how I handle errors in a T-SQL stored procedure. This code has been working for many years now without a hitch. N.B.: As another best practice, I suggest using only ONE level of TRY … CATCH and only ONE level of TRANSACTION encapsulation, as doing otherwise may not be 100% safe.

        BEGIN TRANSACTION;
        BEGIN TRY
            -- Code in the transaction goes here
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- Roll back on error
            ROLLBACK TRANSACTION;
            -- Raise the error with the appropriate message and error severity
            DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int;
            SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY();
            RAISERROR(@ErrMsg, @ErrSeverity, 1);
        END CATCH;

    In conclusion, I will just mention that I have been using this code with .NET 2.0 and .NET 3.5 and it works like a charm. The .NET TDS parser throws back a SqlException, which is ideal to work with.

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters

    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research

    Research the tool(s) that you want to use. Some tools provide all of the features you would need while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes.

    Don't settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:

    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables
    - Add/drop procedures/functions/views
    - Alter tables (rename columns, add columns, remove columns)
    - Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.

    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL-based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren't cross-platform anyway. IMO, stick to SQL-based migrations.

    Reconciling Production

    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice.

    Add whatever schema-changes tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database; your tool should support doing this for you (a sketch of such a table follows this entry). You can add this table to production when you do your next release.

    Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.

    Say No to Shared Dev DBs

    Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes from those that others are working on.

    First prod release

    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation

    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations: we ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors! Another good example is generating change scripts. We had been manually making these for months; then I found an open source project called Open DB Diff and integrated it with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier!

    Enjoy -Wes
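
    For concreteness, the version-tracking table such tools add is typically just a log of applied scripts; a hypothetical minimal shape (all names invented):

        -- Hypothetical minimal schema-version table a migration tool might create
        CREATE TABLE SchemaChangeLog (
            Id          INT IDENTITY(1,1) PRIMARY KEY,
            ScriptName  NVARCHAR(255) NOT NULL,
            AppliedOn   DATETIME      NOT NULL DEFAULT GETDATE()
        );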

    Read the article

  • Google adds CoreOS to its cloud platform, "an ideal host for distributed systems" says Brandon Philips

    Google adds CoreOS to its cloud platform; "an ideal host for distributed systems," says Brandon Philips. Google is extending the list of Linux distributions supported by Google Compute Engine. CoreOS, which had already been available in preview since last December, is now a full member of the platform's offerings, alongside Debian, Red Hat and SUSE. Companies will be able to test it and use it easily for their clusters and applications...

    Read the article

  • Paying by Cash

    - by David Dorf
    I'll grant you that paying by cash in the context of stores isn't particularly interesting, but in my quest to try new payment methods I decided to pay by cash at an online store. Using a credit card means I have to hoist myself off the couch, find the card, and enter all those digits. Google Checkout certainly makes that task easier by storing my credit card information, but what happens to all those people that don't have a credit card? What about the ones that are afraid to use credit cards over the internet? There are three main options for cash payment, not all of which are accepted by every merchant.

    The most popular is PayPal. The issue I have with them is that returns and disputes have to be handled with PayPal, not the merchant. I once used PayPal at a shady online store and lost my money. Yeah, my bad, but they wouldn't help me at all. PayPal was purchased by eBay in 2002.

    BillMeLater is best for larger purchases, because at checkout they actually run a credit check to make sure you're credit-worthy. Assuming you are, they pay the merchant on your behalf and mail you a bill, which you had better pay quickly or interest will start to accrue. That's nice for the merchant because they get paid right away, and I presume there are no charge-backs. BillMeLater was purchased by eBay in 2008.

    Last night I tried eBillMe for the first time. After checkout, they send you a bill via email and expect you to pay either via online banking (they provide the instructions to set everything up) or at walk-in locations across the US (typically banks). The process was quick and easy. The merchant doesn't ship the product until the bill is paid, so there's a day or two of delay. For the merchant there are no charge-backs, and the fees are less than credit cards'. For the shopper, they provide buyer protection similar to that offered by credit cards, and 1% cashback on purchases. Once the online bill-pay is set up, it's easy to reuse in the future. Seems like a win-win for merchants and shoppers.

    Read the article

  • USA: Microsoft calls on the government to "reduce the trust deficit in technology" one year after Snowden's first revelation

    USA: Microsoft calls on the government to "reduce the trust deficit it has created in technology" one year after Snowden's first revelation. It has already been a year since Edward Snowden published his first revelation about the NSA's abuses and its international surveillance programs. To mark the occasion, Microsoft has addressed the US government through Brad Smith, its general counsel and also its Vice President in charge...

    Read the article
