Search Results

Search found 71854 results on 2875 pages for 'build time'.

  • Is there a detailed description of optimizations in the Android build process?

    - by Daniel Lew
    I've been curious about all the optimizations that go into the building of an .apk, because of two things I've tried in the past to bring down the size of mine: I have had a few large JSON assets in projects before, as well as a static SQLite database. I tried bringing down the size of the apk by gzipping them before the build process, but the resulting size was exactly the same. Just today I tried pngcrush on my /drawable/ folders; the resulting build was exactly the same size as before. I would think that perhaps #1 could be explained by the zip process, but simply zipping the /drawable/ folders in #2 results in different-sized files. Perhaps the build process runs something akin to pngcrush? Regardless, I was wondering if anyone knew where to find a detailed description of all the optimizations in the Android build process. I don't want to waste my time trying to optimize what is already automated, and I also think it would help my understanding of the resulting apk. Does anyone know if this is documented anywhere?

    Read the article

  • What's an effective way to build a csproj file in C#?

    - by jcollum
    I'd like to avoid a command line for this. I've been using the MSBuild API (Microsoft.Build.Framework and Microsoft.Build.BuildEngine) with code that looks like this: this.buildEngine = new Engine(); BuildPropertyGroup props = new BuildPropertyGroup(); props.SetProperty("Configuration", "Debug"); this.buildEngine.RegisterLogger(this.logger); Project proj = new Project(this.buildEngine); proj.LoadXml(this.projectFileAndPath, ProjectLoadSettings.None); this.buildEngine.BuildProject(proj, "Build"); However, I've run into enough problems that I can't find answers for that I'm really wondering if I'm doing this right. First, I can't find the output (there's no bin directory in any of the places where I figured the DLLs would end up). Second, I tried building a project that I had made in VS2008, and the proj.LoadXml(...) line fails with an invalid XML encoding error. But of course the XML file is valid, since VS2008 can build it (I checked). At this point I'm beginning to wonder if I've picked up some code that's way out of date, or a methodology that's been superseded by something else. Opinions?
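
    For reference, a minimal sketch (not from the question) of the same build done against the MSBuild 4.0 object model in Microsoft.Build.Evaluation, which superseded the deprecated Engine/BuildEngine classes. The project path is a placeholder, and note that this constructor loads from a file on disk, unlike LoadXml, which expects a raw XML string rather than a path:

        using System;
        using System.Collections.Generic;
        using Microsoft.Build.Evaluation;
        using Microsoft.Build.Framework;
        using Microsoft.Build.Logging;

        class Builder
        {
            static void Main()
            {
                // Global properties play the role of the old BuildPropertyGroup.
                var props = new Dictionary<string, string> { { "Configuration", "Debug" } };

                // Placeholder path: Project() loads and parses the file from disk.
                var project = new Project(@"C:\src\MyApp\MyApp.csproj", props, null);

                // Build the default target, logging to the console.
                bool ok = project.Build(new ILogger[] { new ConsoleLogger() });
                Console.WriteLine(ok ? "Build succeeded" : "Build failed");
            }
        }

    With the default OutputPath, the compiled assemblies should land in bin\Debug next to the project file, which may also explain the missing output in the snippet above.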

    Read the article

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are always meant to be entered on the client side in the local time, and displayed in the local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried to enforce this with a database constraint... CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC')) The -300 is to give 5 minutes' leeway in case of slightly desynchronised times between browser and server. The problem is, this constraint always fails when submitting the current time. I've done some testing and found the following in the PostgreSQL client: SELECT now() -- returns the correct local time. SELECT date_part('epoch', now()) -- returns a unix timestamp at UTC (tested by feeding the value into the date function in PHP, correcting for its compensation to my time zone). SELECT date_part('epoch', now() at time zone 'UTC') -- returns a unix timestamp two time zone offsets west, e.g. I am at GMT+2 and I get a GMT-2 timestamp. I've figured out that dropping the "at time zone 'UTC'" will solve my problem, but my question is: if 'epoch' is meant to return a unix timestamp, which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined/normal behaviour here?
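
    As an aside, a minimal sketch (not from the question) of the same five-minute-leeway check done in application code instead of a constraint, assuming timestamps are stored as UTC Unix seconds:

        using System;

        class TimestampCheck
        {
            // Current UTC time as Unix seconds, the representation stored in the DB.
            static long NowUtcEpoch()
            {
                var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
                return (long)(DateTime.UtcNow - epoch).TotalSeconds;
            }

            // Mirrors the not_future constraint: reject timestamps more than
            // five minutes in the future.
            static bool IsAcceptable(long submittedEpoch)
            {
                return submittedEpoch - 300 <= NowUtcEpoch();
            }

            static void Main()
            {
                Console.WriteLine(IsAcceptable(NowUtcEpoch()));       // True
                Console.WriteLine(IsAcceptable(NowUtcEpoch() + 600)); // False
            }
        }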

    Read the article

  • Build 2012, the first post

    - by Dennis Vroegop
    Yes, I was one of the lucky few who made it to Build. Build, formerly known as the Professional Developers Conference (or PDC), is the place to be if you are a developer on the Microsoft platform. Since I take my job seriously, I took some time out of my busy schedule, sighed at the thought of not seeing my family for another week, and signed up for it. Now, before I talk about the amazing Surface devices (yes, this posting is written on one of them), the great Lumia 920 we all got, the long-deserved love for touch, NUI and other things I have been talking about for years, I need to do some ranting. So if you are anxious to read about the technical goodies, you'll have to wait until the next post. Still here? Good. When I signed up for the Build conference during my holidays this summer, it was pretty obvious that demand would be high, so I made sure I was on time. But even though I registered only 7 minutes after the initial opening time, the Early Bird discount for the first 500 attendees was already sold out. I later learned that registration actually started 5 minutes before the scheduled time, but even so, it is still impressive how fast things went. The whole event sold out in 57 minutes. Or so they say… A lot of people got put on the waiting list. There was room for about 1500 attendees, and I heard that at least 1000 people were on that waiting list, including a lot of people I know. Strangely, all of them got tickets assigned after 2 weeks. Here at the conference I heard from a guy from Nokia that they had shipped 2500 Lumia 920 phones. That number matches the rumors that the organization added 1000 extra tickets. This, of course, is no problem. I am not an elitist, and I think large crowds have a special atmosphere that I quite like. But… the Microsoft campus is not equipped for that sheer volume of visitors. That was painfully obvious during on-site registration, where people had to stand in line for over 2 hours. The conference is spread out over 2 buildings, separated by a 15-minute bus ride (yes, the campus is that big). I have seen queues of over 200 people waiting for the bus, and when it arrived it had a capacity of 16. I can assure you: that doesn't fit. This of course means that travelling from one site to the other might take about 30 minutes, so you arrive at the session room just in time, only to find out it's full. Since you can't get into that session, you try to find another one, but now you're even more late, so you have no chance at all of entering. The doors are closed and you're told: "Well, you can watch the live stream online". Mmmm… So I spend thousands of dollars and a week away from home, family and work, to be told I can also watch the sessions online? Are you fricking kidding me? I could go on but I won't. You get the idea. It's just badly organized, something I am not really used to in my 20 years of experience at Microsoft events. Yes, I am disappointed. I hope a lot of people here in Redmond will also fill in the evals, and that the organization will do a better job next year. Really, Build deserves better. </rantmode>

    Read the article

  • Windows 7 "Change internet time settings" tells me I have no permission

    - by Matthias Vance
    Hi, while trying to fix my computer's clock always running ahead (even while the machine is on, not just at every boot), I apparently broke some security settings. All I did (as far as I can remember) was stop and start the w32time service. Now, whenever I go to the "Internet time" tab and click "Change settings...", Windows tells me I don't have permission to do so. Facts: I am a member of the Administrators group, and in gpedit.msc I checked that the Administrators group is allowed to change the system time. Kind regards, Matthias Vance

    Read the article

  • Deleted the .AppleDouble files inside my Time Machine backups - are they still OK?

    - by Jon M
    My Ubuntu server is set up to emulate a Time Capsule (after a very long weekend following the instructions here, here and here). My MacBook Pro has been backing up happily to it for a month or so now, and all seems well. The other day I was tidying up the extraneous files from my music collection on the server, got a bit loose with the find command... and ended up deleting all the .AppleDouble files underneath '/', which included the Time Machine folder. Now, Time Machine still appears to work fine: it backs up regularly, I can look through all the previous versions of my files, and they seem to restore without trouble. My question is: by deleting the .AppleDouble files, have I actually broken anything? Is the TM data still good, or should I trash it and start fresh (i.e. with a new 'day 0' full backup)?

    Read the article

  • TFS2010 Build Controllers

    - by Kabir Rao
    As far as I know, in TFS 2010 one build controller serves one project collection, and ideally one build server should have one build controller on it. However, as per the link below, we can install multiple build controllers on a single build box: http://marknic.com/2010/05/14/MultipleTFS2010BuildControllersOnASingleBuildBox.aspx Can two or more build controllers run at the same time? The article suggests that we need to switch between controllers... is it that we can only use one controller at a time?

    Read the article

  • Tradeoffs of iSCSI vs. AFP when using Time Machine with a NAS?

    - by ajit.george
    I'm setting up a home NAS device (Synology DS409) that I'm planning to use for Time Machine backups (amongst other things). What are the tradeoffs between using iSCSI or AFP to mount the backup volume? The Synology wiki suggests that iSCSI is better if the Mac will be frequently disconnected from the network or sleeping, from the point of view of the volume automatically remounting. What about filesystem consistency? Given that unplugging a USB drive without properly unmounting it often requires the Time Machine volume to be repaired, would iSCSI have the same issues? Thanks in advance.

    Read the article

  • Why is ntpd not updating the time on my server?

    - by John
    I have ntpd running on my server. It's all the default settings, except I commented out its ability to be a server to other machines:

        # restrict -4 default kod notrap nomodify nopeer noquery
        # restrict -6 default kod notrap nomodify nopeer noquery
        restrict default ignore

    If I run ntpdate -q ntp.ubuntu.com, I'm told that my machine's clock is off by 7 seconds. What's going on? How can I diagnose what's happening? Is there a log I can turn on?

    More info #1:

        # ntpq -np
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
         91.189.94.4     193.79.237.14    2 u   30   64    7  108.518   -0.136   0.361

    More info #2: Here's what this looked like when I asked the question:

        # ntpdate -q ntp.ubuntu.com
        server 91.189.94.4, stratum 2, offset 7.191308, delay 0.13310
        10 Jan 20:38:09 ntpdate[31055]: step time server 91.189.94.4 offset 7.191308 sec

    And here's what it looks like now, after restarting ntpd a couple of times (I'm assuming that's what fixed it):

        # ntpdate -q ntp.ubuntu.com
        server 91.189.94.4, stratum 2, offset 0.000112, delay 0.13164
        10 Jan 20:47:03 ntpdate[31419]: adjust time server 91.189.94.4 offset 0.000112 sec

    Read the article

  • How does Windows handle time? Updating the RTC, etc. (Active Directory and Novell eDirectory)

    - by bshacklett
    I'm troubleshooting some time issues in my domain, and before making any big changes I want to have a thorough understanding of what's going on. I've got a few lingering questions at the moment: What sources (RTC, NTP, etc.) are queried by Windows to keep time? How does this differ in a mixed Active Directory / Novell environment? What is the order in which each source is queried? How does Windows decide whether to act as an NTP client, peer or server? In what situations will Windows update the RTC, if ever?

    Read the article

  • CruiseControl.NET failing build with no errors

    - by John Hoge
    Hi, I've been using CCNet for some time now, but just upgraded to .NET 4. I've installed the new framework on my dev box along with VS2010, and on my CC.NET server as well. I've just installed CruiseControl.NET version 1.5.6804.1 and changed my MSBuild tasks to point to the new v4.0.30319 framework directory. I've got two projects on .NET 4 now that just don't build. They build perfectly well in VS and run well on both Cassini and IIS. I just don't get any error messages from CC.NET, just this:

        BUILD FAILED
        Project: GoodBay
        Date of build: 2010-05-11 10:21:59
        Running time: 00:00:02
        Integration Request: Build (ForceBuild) triggered from SVN
        Modifications since last build (0)

    Read the article

  • What does "Windows is not a real-time operating system" mean?

    - by hydroparadise
    I came across an application called LatencyMon that apparently does latency monitoring. I have always understood that the more of a load you put on the processor, the less responsive, or more latent, the system becomes. However, in the second section of the LatencyMon page, the first sentence says, "Windows is not a real-time operating system". That got me thinking: is this any different from any other operating system like Linux, Unix, or OS X? Are there any "real-time" operating systems? Or is this merely a marketing scheme to get you to buy their product? EDIT: Also, are there any examples of RTOSes out there?

    Read the article

  • Can one use AirPort (Time Capsule) with an external DHCP server?

    - by DNS
    I currently share my DSL connection using a wireless router with DHCP disabled, and dnsmasq running on a Mac mini serving DHCP & DNS. This setup is important because I have clients doing PXE boot, and I need the control over DHCP that dnsmasq provides. There is also a Time Capsule on the network that's used purely as a backup device; its wireless functions are disabled. The wireless router is starting to get a little flaky, and since it doesn't support 802.11n I'd like to replace it. Rather than buying a new router, I'd like to just use the Time Capsule. But I see no way to disable its DHCP server; when I set the connection type to PPPoE, it insists on serving DHCP. Is there any way to use AirPort PPPoE with a DHCP server elsewhere on the network?

    Read the article

  • How do I synchronize two folders in Windows 7 in real time?

    - by acme
    I want Windows 7 to synchronize two folders in real time (maybe by running a service that monitors a folder). Basically, I want to monitor a folder and synchronize each change (new files, changed files, deleted files) to another drive. It has to be in real time, so changes get synchronized instantly when they happen. A one-direction synchronisation is enough. I tried Microsoft's SyncToy, but it only syncs manually or on a schedule. Can this be achieved with Windows 7 itself, or does anyone know a freeware application for this?
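
    For what it's worth, a minimal one-way mirroring sketch (not from the question) using the .NET FileSystemWatcher; the folder paths are placeholders, and a production tool would also queue events, handle renames, and retry locked files:

        using System;
        using System.IO;

        class FolderMirror
        {
            const string Source = @"C:\watched";  // placeholder paths
            const string Target = @"D:\mirror";

            static void Main()
            {
                var watcher = new FileSystemWatcher(Source);
                watcher.IncludeSubdirectories = true;
                watcher.Created += (s, e) => Copy(e.FullPath);
                watcher.Changed += (s, e) => Copy(e.FullPath);
                watcher.Deleted += (s, e) => File.Delete(ToTarget(e.FullPath));
                watcher.EnableRaisingEvents = true;

                Console.WriteLine("Mirroring; press Enter to stop.");
                Console.ReadLine();
            }

            // Maps a source path to the corresponding path under Target.
            static string ToTarget(string sourcePath)
            {
                return Path.Combine(Target, sourcePath.Substring(Source.Length + 1));
            }

            static void Copy(string sourcePath)
            {
                if (File.Exists(sourcePath))  // skip directory events
                {
                    string dest = ToTarget(sourcePath);
                    Directory.CreateDirectory(Path.GetDirectoryName(dest));
                    File.Copy(sourcePath, dest, true);  // overwrite on change
                }
            }
        }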

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first. Right-Time Marketing: Real-time isn't just about executing faster; it extends to interactions with customers as well. As an industry, we've spent many years analyzing all the data that's been collected. Yes, that data has been invaluable in helping us make better decisions, like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn't take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn't translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua's roots have been in B2B, we've been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they've spent searching can help you understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn't so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle's CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top line higher and, by virtue of the cloud approach, keep costs at bay.
    My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That's when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as "Jane with the snotty kid so make sure we check her out fast"; you suddenly became "time-starved female age 20-30 with kids." I'm not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today's sophisticated consumer demands scale, experience, and personal attention. To some extent we've delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn't love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don't be surprised if it's looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we'll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer's purchase journey, and consumers get higher levels of service from the retailer. When I call my bank, a couple of things happen before the call is connected. A reverse lookup on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I'm calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that's the reason I'm calling. It puts all the relevant information on the customer service rep's screen as it connects the call. When I complain about the fee, the rep immediately sees I'm a great customer who travels a lot, so she suggests switching me to their traveler's card that doesn't have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let's combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that's the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he's heading to lunch. If he were near the mall location on a Saturday morning, that's a completely different context. But on his way to lunch, we'll let Thomas know that we've got a new shipment of ASICS running shoes on display, with a simple text message. We used the context to look up Thomas's past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
    He scans one of the new ASICS shoes using the convenient QR codes we provided on the shelf tags, but then he starts scanning low-end Nikes. Each scan is another opportunity both to learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he's likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time a personalized offer for 10% off ASICS, good for 24 hours. Offering him a discount on Nikes would be throwing margin away, since he buys those anyway. We are using our marketing dollars to change behavior in a way that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn't buy anything with the shoes, we'll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and increase the lifetime value of the customer. The key here is acting at the moment the customer shows interest, using the context of the situation. We aren't pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it's the technology that allows this to happen in near real-time. Conclusion: As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere… and commerce anytime as well.
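
    To make the decision flow concrete, a toy sketch of the kind of context-driven rule described above. This is invented for illustration only and is not Oracle RTD's actual API; the rules, thresholds, and messages are made up:

        using System;

        class OfferEngine
        {
            enum Goal { HigherMargin, VisitFrequency, BasketSize }

            // Context in, message out -- mirroring the geo-fence and in-store
            // scan scenarios from the post. A null result means no interaction.
            static string Decide(bool isLunchtime, bool scanningLowMargin, Goal goal)
            {
                if (isLunchtime)
                    return "Informational: new ASICS shipment on display";

                if (goal == Goal.HigherMargin && scanningLowMargin)
                    return "Offer: 10% off ASICS, good for 24 hours";

                return null;  // don't push random products at haphazard times
            }

            static void Main()
            {
                Console.WriteLine(Decide(true, false, Goal.HigherMargin));
                Console.WriteLine(Decide(false, true, Goal.HigherMargin));
            }
        }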

    Read the article

  • What's a standard productive vs total office hours ratio? [migrated]

    - by marianov
    So it goes like this: we are keeping track of tasks using Redmine. We log time spent on tasks, but at the end of the week, if we add up all the time spent on those tasks, there is no way a person has spent 40 hours working. I think that's correct, because offices have overhead (reading emails, politics, coffee, distractions). What would be a normal ratio of productive time to total time spent? Other areas in the organization just measure time spent in the office (with the RFID badges that open the door), but we don't like that approach and are trying to convince Auditing to measure us using Redmine instead.

    Read the article

  • How to compile and build C# projects with Ant and Mono?

    - by Wing C. Chen
    I am currently working on a project with both Java and C# code in it. Java plays the major role in this project, but C# still has a small part. I am using Ant to build the project and would very much like to use it to build the C# parts too. I have learned that it's possible to build C# under Ant with the help of Mono. Can anybody provide a link to a tutorial or other guidance here? I've tried googling for it but haven't found anything comprehensive.

    Read the article

  • Java code to convert a list of dependencies into a build order?

    - by Egon Willighagen
    Given I have a list of dependencies, like:

        module1
        module2 dependsOn module1
        module3 dependsOn module1
        module4 dependsOn module3

    I would like to create a build order where each build step is found on one line, and each line contains a list of one or more modules which can be compiled at the same time and which only depend on modules compiled earlier. So, for the above data set, create a list like:

        module1
        module2,module3
        module4

    Now, this is basically just a problem of creating a directed graph and analyzing it. I am using Ant and would very much like to use something off-the-shelf... what is the minimum amount of custom code I need to have it create such a dependency-aware build list starting from the given input? BTW, these modules are actually custom modules, so Maven will not work.
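
    Not an off-the-shelf Ant answer, but the underlying algorithm is a leveled topological sort; a minimal sketch, shown in C# to match the other snippets on this page (it translates almost line-for-line to Java):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class BuildLeveler
        {
            // Groups modules into build levels: each level depends only on
            // modules placed in earlier levels.
            static List<List<string>> Levels(Dictionary<string, List<string>> dependsOn)
            {
                var remaining = new HashSet<string>(dependsOn.Keys);
                var built = new HashSet<string>();
                var levels = new List<List<string>>();
                while (remaining.Count > 0)
                {
                    // A module is buildable once all its dependencies are built.
                    var level = remaining
                        .Where(m => dependsOn[m].All(d => built.Contains(d)))
                        .ToList();
                    if (level.Count == 0)
                        throw new InvalidOperationException("Cyclic dependency detected");
                    levels.Add(level);
                    foreach (var m in level) { remaining.Remove(m); built.Add(m); }
                }
                return levels;
            }

            static void Main()
            {
                var deps = new Dictionary<string, List<string>> {
                    { "module1", new List<string>() },
                    { "module2", new List<string> { "module1" } },
                    { "module3", new List<string> { "module1" } },
                    { "module4", new List<string> { "module3" } },
                };
                foreach (var level in Levels(deps))
                    Console.WriteLine(string.Join(",", level.ToArray()));
                // Prints: module1 / module2,module3 / module4
            }
        }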

    Read the article

  • How to automate the build process of multiple projects, including digitally signing the exe, in Delphi?

    - by user193655
    After building a project group of 2 projects with Delphi (2009), I digitally sign the 2 exes using InstallAware Code Signing, a tool that shipped with Delphi 2009. How can I automate the digital signature, so that when I build, the signature is attached as well? For signing I use a .pvk (private key) file and an .spc (software publisher certificate) file. Subquestion: I created a project group because I have 2 exes, but they are almost the same; the only things that change are the application icon and the application name (one is ProductOne.dpr, the other is ProductTwo.dpr). In practice I have 2 brands of the same product. I used to have a single build, with activation key details activating one brand or the other, but now I've been asked to change the icon and the filename, and for this I need to build 2 projects; the activation key is no longer enough to distinguish between the 2. If there is a way to do this from a single project, that would be even better.
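
    One possible direction (an assumption, not something from the question): drive the classic Authenticode tool that accepts .pvk/.spc pairs from a post-build step. A minimal C# sketch; the file names and timestamp URL are placeholders, and signcode.exe is assumed to be on the PATH:

        using System;
        using System.Diagnostics;

        class PostBuildSigner
        {
            // Shells out to signcode.exe for one freshly built executable.
            static void SignExecutable(string exePath)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "signcode.exe",
                    Arguments = "-spc publisher.spc -v private.pvk " +
                                "-t http://timestamp.verisign.com/scripts/timstamp.dll " +
                                "\"" + exePath + "\"",
                    UseShellExecute = false
                };
                using (var proc = Process.Start(psi))
                {
                    proc.WaitForExit();
                    if (proc.ExitCode != 0)
                        throw new Exception("Code signing failed for " + exePath);
                }
            }

            static void Main(string[] args)
            {
                foreach (var exe in args)  // e.g. ProductOne.exe ProductTwo.exe
                    SignExecutable(exe);
            }
        }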

    Read the article

  • Using TFS Team Build 2010 to deploy to Dev site and create packages for Staging and Production sites

    - by Kb
    I am trying to configure a TFS Team Build 2010 build to perform automatic deployment to the development environment and to create deployment packages for the staging and production environments. In the MSBuildArguments field of the build definition I have:

        /p:DeployOnBuild=True
        /p:DeployTarget=MsDeployPublish
        /p:MSDeployPublishMethod=RemoteAgent
        /p:CreatePackageOnPublish=True
        /p:DeployIISAppPath=devwebsitename
        /p:MsDeployServiceUrl=http://deployserver/MsDeployAgentService
        /p:UserName=username
        /p:Password=password

    Automatic deployment of the dev web site is OK, and I get a package generated for the web site. How can I get the same build to produce deployment packages for the other environments, Staging and Production? Or am I missing some basic concept here?

    Read the article

  • Should I auto-increment the assembly version when I build my software?

    - by rwmnau
    In Visual Studio 2003, you could easily set your project's assembly version to auto-increment every time you built it, but in Visual Studio 2005 this functionality was removed. You can still auto-increment your assembly version on every build, but it's a complicated custom build step instead of an integrated feature. I'm not sure why this was removed, but here's a question I should have asked a while ago: should I be using a workaround to continue to auto-increment when I build, or is there a good reason to stop doing this in favor of manually incrementing? Since Microsoft removed it from VS, perhaps there's a good reason, and I'm wondering if anybody knows it.
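
    For reference, the C# compiler itself can still auto-generate version numbers via a wildcard in AssemblyInfo.cs, which is one of the simpler workarounds (a minimal sketch):

        using System.Reflection;

        // "1.0.*" tells the compiler to fill in the build number (days since
        // 2000-01-01) and the revision (seconds since midnight, divided by 2)
        // at compile time.
        [assembly: AssemblyVersion("1.0.*")]

        // Note: AssemblyFileVersion does not accept wildcards, so omit it if
        // you want the file version to follow the auto-generated one.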

    Read the article
