Search Results

Search found 27011 results on 1081 pages for 'buy vs build'.


  • Android App Build system differences between Eclipse and Ant?

    - by Amy Winarske
    The Eclipse build for my 1.6 application project is succeeding and the Ant build is failing. I'm looking for help on why they aren't behaving the same way. We are developing on Mac OS X 10.5.8 with Eclipse 3.5 against SDK 1.6 + Google APIs. There are no setting changes in Eclipse, either at workspace or project level. Similarly, our Ant is a vanilla, unmodified installation of 1.7.1. The JDK is 1.5.0_22. The CLASSPATH environment variable is not set. JAVA_HOME is /Library/Java/Home. The application was initially created by a team member using the Eclipse plugins. The application references two jar files, one of which has a dependency on javax.xml.bind.annotation.XmlSeeAlso, which is not defined anywhere in our code or in android.jar. The other jar file has an explicit dependency on android.jar. I generated the Ant build file using android update. The Eclipse project builds an apk and runs the application in the emulator. I think this is incorrect behavior. The Android Ant project fails to build, which I think is correct behavior:

        MyClass.java:98: cannot access javax.xml.bind.annotation.XmlSeeAlso
        [javac] file javax/xml/bind/annotation/XmlSeeAlso.class not found

    Any ideas as to why the two build methods are behaving differently? I would expect them both to fail. Thanks! -Amy
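    A plausible explanation, offered as an assumption rather than a confirmed answer: Eclipse compiles with its own compiler (ecj), which is more tolerant of unresolvable transitive dependencies such as annotation types, while the generated Ant build uses javac, which insists on finding XmlSeeAlso.class. If you want the Ant build to succeed rather than fail, one hedged sketch is to give javac the jar that supplies the annotation; the Android Ant rules of that era picked up jars placed in the project's libs/ directory, and jaxb-api.jar below is a hypothetical name for that jar:

        # hypothetical: give the Ant build the dependency Eclipse silently tolerates
        cp /path/to/jaxb-api.jar libs/
        ant debug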

  • How do I create an EAR file with an ant build including certain files?

    - by user149100
    I'm using Eclipse to build an EAR file using Ant. I'm using OC4J, and I want to make sure that orion-application.xml is included in the build. What I'm currently using, which does not work, is:

        <target name="ear" depends="">
            <echo>Building the ear file</echo>
            <copy todir="${build.dir}/META-INF">
                <fileset dir="${conf.dir}" includes="orion-application.xml"/>
            </copy>
            <ear destfile="${dist.dir}/${ant.project.name}.ear" appxml="${conf.dir}/application.xml">
                <fileset dir="${dist.dir}" includes="*.jar,*.war"/>
            </ear>
        </target>

    What is the right way to add this to the ear?
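    A hedged suggestion (a sketch, not an answer from the original thread): Ant's built-in <ear> task accepts a nested <metainf> fileset that packages files directly into the archive's META-INF directory, so the separate <copy> into ${build.dir} may be unnecessary:

        <target name="ear">
            <ear destfile="${dist.dir}/${ant.project.name}.ear" appxml="${conf.dir}/application.xml">
                <!-- place orion-application.xml into META-INF inside the .ear itself -->
                <metainf dir="${conf.dir}" includes="orion-application.xml"/>
                <fileset dir="${dist.dir}" includes="*.jar,*.war"/>
            </ear>
        </target>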

  • Difference in DLL when compiling on Build Server instead of Dev Machine.

    - by Achilles
    I have an application that loads user controls into a .NET web application. When I compile and test the application locally on my dev machine, it works. The project builds successfully using MSBuild on our build server. However, when I deploy the DLL generated by MSBuild on the build server, I get the following error when the application loads the control: BC30456: 'CreateResourceBasedLiteralControl' is not a member of 'ASP.usercontrols_somecontrol_ascx'. I compared the DLL generated on my machine with the one created by the build server (by file size) and noticed a difference. This is confusing considering the code being built locally and on the build server is IDENTICAL. I manually compared each file by hand. So my question is: what is causing this error? What would be different between MSBuild's compilation of the code and what Visual Studio does when compiling the code?

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knew whether Visual Studio .NET has a parallel build process or not? I have a solution with lots of projects; every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700MB. But the build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single-threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't be building some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how its compilation works? And is it any different/better in 2008/2010?
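    For what it's worth, a hedged sketch of what the later toolchains offer: MSBuild 3.5 and later can build independent projects of a solution in parallel via the /m (maxcpucount) switch, and VS 2008/2010 expose a similar "maximum number of parallel project builds" option in the IDE; VS 2005's managed build is effectively serial per project. Assuming a solution named MySolution.sln:

        rem MSBuild 3.5+: build up to two projects in parallel
        msbuild MySolution.sln /m:2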

  • How can CruiseControl.Net fail a build based on changing metrics?

    - by skolima
    I would like CruiseControl.Net to fail a build when some code metrics change in a 'wrong' direction, i.e. code coverage decreases or Gendarme defect count increases. The Gendarme metrics are already tracked in report.xml file (because they are presented on web dashboard graphs), the code coverage is only reported on build status page (and saved in build report xml). How can I achieve this?
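    One hedged approach, assuming you are willing to script the comparison yourself: CruiseControl.Net fails a build when any task in its <tasks> block exits with a non-zero code, so an <exec> task that runs a small script comparing the current report.xml against the previous build's values can fail the build on a metric regression. check-metrics.cmd and the file paths here are hypothetical:

        <tasks>
          <!-- hypothetical script: exits 1 if coverage dropped or the Gendarme defect count rose -->
          <exec>
            <executable>tools\check-metrics.cmd</executable>
            <buildArgs>artifacts\report.xml artifacts\previous\report.xml</buildArgs>
          </exec>
        </tasks>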

  • How do I build a hello world class with maven?

    - by httpinterpret
    Now the source code is ready; how can I build it with Maven? Suppose the source file is hw.java. I've googled for some time, and all the solutions require me to lay out the directories in a fixed manner. But what I want to do is keep hw.java in the current directory (.), and then:

        vi pom.xml
        ...
        mkdir build
        cd build
        maven ...

    Can I have that kind of freedom with Maven?
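    A hedged sketch of how far Maven will bend: the standard layout (src/main/java, target/) is only a default, and the POM can override both the source directory and the output directory. Assuming Maven 2+ (where the command is mvn rather than maven) and hypothetical project coordinates:

        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>          <!-- hypothetical coordinates -->
          <artifactId>hw</artifactId>
          <version>1.0</version>
          <packaging>jar</packaging>
          <build>
            <sourceDirectory>.</sourceDirectory>  <!-- compile sources from the current directory -->
            <directory>build</directory>          <!-- write build output under build/ instead of target/ -->
          </build>
        </project>

    With that pom.xml next to hw.java, mvn compile should build in place.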

  • What build param(s) to use so VS 2010 can gen .obj & link .objs but NOT create an .exe?

    - by Csourcecode
    The question title pretty much asks it all. I know I could set the project to build a .lib and let it fail to link the .lib; the .objs tend to end up in the appropriate config dir. That seems like a backdoor way to get VS to generate objs. Is there a flag/param I can set somewhere in the property sheet properties/options for Visual Studio so that it generates the respective objs for each source file (linking in what it needs) but does NOT create an .exe? It's so freaking easy to just generate object files using gcc (and link in appropriate lib routines WITHOUT creating an executable). I'm sure I could also hack up a custom build rule, but that seems like overkill (and since I'm not up to speed on the build rules for whatever version of make VS 2010 is using, it's easier to ask someone else here for the simple solution).
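    A hedged pointer rather than an IDE setting: the compiler VS drives underneath (cl.exe) has the same compile-only switch as gcc, so from a Visual Studio command prompt you can produce .obj files without any link step and link later if desired:

        rem compile only, no link (the cl equivalent of gcc -c)
        cl /c foo.cpp bar.cpp

        rem link the resulting objects yourself when/if you want an exe
        link foo.obj bar.obj /OUT:app.exe

    Inside the IDE, compiling a single file (Ctrl+F7) also generates just that file's .obj without linking.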

  • Can I run C# built-in unit tests on a build machine?

    - by 5YrsLaterDBA
    Can I run C# built-in unit tests on a build machine which doesn't have Visual Studio installed? We are thinking of adding unit tests to our Visual Studio 2008 C# project. Our build machine doesn't have VS installed, and we want to integrate the new unit tests with our auto-build system. Is MSTest the executable to launch the Team Test unit tests?
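    For reference, a hedged sketch: MSTest.exe is indeed the command-line runner for Team Test unit tests, though it historically required Visual Studio (or the Team Agent components) on the machine rather than shipping standalone. Assuming a test assembly named MyTests.dll:

        rem run every test in the container and write a .trx results file
        MSTest.exe /testcontainer:MyTests.dll /resultsfile:results.trx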

  • How can I make a workspace-folder level build script visible in the Eclipse Project Explorer?

    - by Chris
    I have a number of interdependent projects in an Eclipse workspace. Eclipse manages dependencies between them within the IDE, but I'm starting work on a master build script that will sit in the folder above all the projects (the workspace folder). I haven't decided whether I will use Maven, Gradle or Ant/Ivy yet, but my question is: is there a way to make a build script in the workspace folder visible in the Project/Package Explorer? Currently it only shows me projects but, assuming I decide on an Ant build, I want to be able to see the main build.xml file in this window. I've played around with settings to no avail. Is it possible? If not, should I just set up an external run configuration instead?

  • Calling Tortoise from command line and build if new code (how to know if tortoise updated anything)?

    - by Iakob
    I am writing a batch file which is supposed to update the source files from Tortoise and, if anything new was fetched, build the solution. It should be a very simple task. My batch file looks like this (I've removed the non-essentials):

        set updatepath=%1
        set solution=%2
        set output=%3.txt
        call TortoiseProc.exe /command:update /path:%updatepath% /closeonend:2
        call %devenv% %solution% /Build Debug /Out %output%

    Now, I'd like to know whether Tortoise actually got new code for me, and then not build if it didn't. How do I do this? I am running Windows Vista. (The batch script is called from another batch script about 7 times, one for each project I need updated and, perhaps, built.)
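    One hedged approach: replace TortoiseProc with the Subversion command-line client for the update step, since svn update prints "At revision N." when nothing changed and "Updated to revision N." when something was fetched; the script can grep for the latter and only build when it appears. A sketch reusing the variables above:

        svn update %updatepath% > update.log
        findstr /C:"Updated to revision" update.log > nul
        if not errorlevel 1 (
            call %devenv% %solution% /Build Debug /Out %output%
        ) else (
            echo No new code - skipping build.
        )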

  • What's the best way to build software that doesn't require the newest glibc?

    - by ZorbaTHut
    I'm attempting to build a binary package that can be run on multiple Linux distributions. It's currently built on Ubuntu 10.04, but it fails on Ubuntu 8.04 with the following errors:

        ./test: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./test)
        ./test: /usr/lib/libc.so.6: version `GLIBC_2.11' not found (required by ./test)

    What's the preferred way to solve this problem? Is there a way to install an old glibc on a new box and build against it, or do I have to build on an old distribution? And if I build against an old glibc, will it work on a new glibc? Or, alternatively, are there just some handy compiler flags or packages I could install to solve the problem?
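    Two commonly cited options, offered as hedged sketches rather than a definitive answer: glibc is backward compatible but not forward compatible, so binaries built against the oldest glibc you must support (for example, in an Ubuntu 8.04 chroot or VM) generally run on newer systems; and the GLIBCXX half of the problem can be removed by statically linking the C++ runtime:

        # GCC 4.5+: link libgcc and libstdc++ statically; glibc itself stays dynamic
        g++ -static-libgcc -static-libstdc++ -o test test.cpp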

  • Can Eclipse not hard-code ECLIPSE_HOME when exporting build.xml?

    - by stevex
    I have an Eclipse project that I'm attempting to set up to build both with Eclipse and externally with Ant. It seems like a good way to do this is to have Eclipse generate a build.xml file that I can then use with ant. I'd like to set it up so the build.xml can be regenerated from Eclipse whenever the need arises, which means no hand-editing the build.xml file. But Eclipse writes one entry in there that has a hard-coded path to a directory on my computer, which makes it unsuitable for checking in to a source repository. Specifically it's this entry that's the trouble: <property name="ECLIPSE_HOME" value="D:/Eclipse/Eclipse Galileo (3.5) SR1"/> Is there some way to have Eclipse not output this line, or to make it a relative reference or something that makes sense to check in?
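    A hedged workaround that avoids hand-editing the generated file: Ant properties are immutable once set, and properties supplied on the command line take precedence over <property> elements inside the build file, so the hard-coded value can be overridden per machine at invocation time (the path below is a placeholder for wherever Eclipse lives on the building machine):

        ant -f build.xml -DECLIPSE_HOME=/usr/local/eclipse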

  • In the CCTray utility, how do I clear the "... is fixing the build" message if set by mistake, for example?

    - by Tor
    Does anyone know if there is a way to clear the message that has been set by the CCTray menu entry "Volunteer to fix build"? It can be set by mistake. Also, pretty often we in the CM team (and others) use this feature to show the rest of the developers and CM personnel that we will take a first look at the build problem; if we do not manage to solve it, we want to reset/clear this message so the next resource can take over.

  • Do you know any build systems with decent support for parallelization?

    - by dahpgjgamgan
    Hi, I am looking for a build system (working on MS Windows) that has good support for parallelization of tasks/targets (or whatever you call them). To be more specific: during the build (which is initiated on an MS Windows machine) I need to copy source files to a number of different machines (which are not necessarily running Windows) and start a remote job on each of them, and I would really like to do that on all machines at once. Does anyone know a build system that's capable of executing such a task in parallel? From what I googled, the options currently available are:

    - the -j switch in make, but I don't know if nmake supports this
    - some custom NAnt tasks
    - MSBuild has some form of support for parallelization; it seems similar to make's (meaning you don't specify what to do in parallel, you just specify that it would be nice to build things that way)
    - FAKE (F# Make) is written in a functional programming language, and those are known to have good parallelization support, but I'm not very skillful in the functional programming area

    Any other solutions I could explore?
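    For reference, hedged examples of the switches mentioned in the list above: GNU make's -j takes a job count, nmake has no parallel switch (though jom, an nmake-compatible clone from the Qt project, adds one), and MSBuild parallelizes across projects with /m:

        make -j8 all
        jom -j8
        msbuild Big.sln /m:4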

  • Cleaner HTML Markup with ASP.NET 4 Web Forms - Client IDs (VS 2010 and .NET 4.0 Series)

    - by ScottGu
    This is the sixteenth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's post is the first of a few blog posts I'll be doing that talk about some of the important changes we've made to make Web Forms in ASP.NET 4 generate clean, standards-compliant, CSS-friendly markup. Today I'll cover the work we are doing to provide better control over the "ID" attributes rendered by server controls to the client. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Clean, Standards-Based, CSS-Friendly Markup

    One of the common complaints developers have often had with ASP.NET Web Forms is that when using server controls they don't have the ability to easily generate clean, CSS-friendly output and markup. Some of the specific complaints with previous ASP.NET releases include:

    - Auto-generated ID attributes within HTML make it hard to write JavaScript and style with CSS
    - Use of tables instead of semantic markup for certain controls (in particular the asp:menu control) makes styling ugly
    - Some controls render inline style properties even if no style property on the control has been set
    - ViewState can often be bigger than ideal

    ASP.NET 4 provides better support for building standards-compliant pages out of the box. The built-in <asp:> server controls in ASP.NET 4 now generate cleaner markup and support CSS styling, and help address all of the above issues.

    Markup Compatibility When Upgrading Existing ASP.NET Web Forms Applications

    A common question people often ask when hearing about the cleaner markup coming with ASP.NET 4 is "Great - but what about my existing applications? Will these changes/improvements break things when I upgrade?" To help ensure that we don't break assumptions around markup and styling with existing ASP.NET Web Forms applications, we've enabled a configuration flag, controlRenderingCompatibilityVersion, within web.config that lets you decide whether you want to use the new cleaner markup approach that is the default with new ASP.NET 4 applications, or, for compatibility reasons, render the same markup that previous versions of ASP.NET used.

    When the controlRenderingCompatibilityVersion flag is set to "3.5", your application and server controls will by default render output using the same markup generation used with VS 2008 and .NET 3.5. When the flag is set to "4.0", your application and server controls will strictly adhere to the XHTML 1.1 specification, have cleaner client IDs, render with semantic correctness in mind, and have extraneous inline styles removed. This flag defaults to 4.0 for all new ASP.NET Web Forms applications built using ASP.NET 4. Any previous application that is upgraded using VS 2010 will have the controlRenderingCompatibilityVersion flag automatically set to 3.5 by the upgrade wizard to ensure backwards compatibility. You can then optionally change it (either at the application level, or scoped within the web.config file to a per-page or directory level) if you move your pages to use CSS and take advantage of the new markup rendering.

    Today's Cleaner Markup Topic: Client IDs

    The ability to have clean, predictable ID attributes on rendered HTML elements is something developers have long asked for with Web Forms (ID values like "ctl00_ContentPlaceholder1_ListView1_ctrl0_Label1" are not very popular). Having control over the ID values rendered helps make it much easier to write client-side JavaScript against the output, makes it easier to style elements using CSS, and on large pages can help reduce the overall size of the markup generated.

    New ClientIDMode Property on Controls

    ASP.NET 4 supports a new ClientIDMode property on the Control base class. The ClientIDMode property indicates how controls should generate client ID values when they render. It supports four possible values:

    - AutoID: renders the output as in .NET 3.5 (auto-generated IDs which will still render prefixes like ctl00 for compatibility)
    - Predictable (the default): trims any "ctl00" ID string and, in list/container controls, concatenates child IDs (example: id="ParentControl_ChildControl")
    - Static: hands full ID naming control to the developer; whatever they set as the ID of the control is what is rendered (example: id="JustMyId")
    - Inherit: tells the control to defer to the naming behavior mode of its parent container control

    The ClientIDMode property can be set directly on individual controls (or on container controls, in which case the controls within them will by default inherit the setting). It can also be specified at a page or usercontrol level (using the <%@ Page %> or <%@ Control %> directives), in which case controls within the pages/usercontrols inherit the setting (and can optionally override it). Or it can be set within the web.config file of an application, in which case pages within the application inherit the setting (and can optionally override it). This gives you the flexibility to customize/override the naming behavior however you want.

    Example: Using the ClientIDMode property to control the IDs of Non-List Controls

    Let's take a look at how we can use the new ClientIDMode property to control the rendering of "ID" elements within a page. To help illustrate this we can create a simple page called "SingleControlExample.aspx" that is based on a master page called "Site.Master", and which has a single <asp:label> control with an ID of "Message" that is contained within an <asp:content> container control called "MainContent". Within our code-behind we then add some simple code to dynamically populate the Label's Text property at runtime. If we were running this application using ASP.NET 3.5 (or had our ASP.NET 4 application configured to run using 3.5 rendering or ClientIDMode=AutoID), the ID generated and sent down to the client would be unique (which is good), but rather ugly because of the "ctl00" prefix (which is bad).

    Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to "Predictable"

    With ASP.NET 4, server controls by default now render their IDs using ClientIDMode="Predictable". This helps ensure that ID values are still unique and don't conflict on a page, but at the same time it makes the IDs less verbose and more predictable. This means the "ctl00" prefix is gone from the generated markup of our <asp:label> control. Because the "Message" control is embedded within a "MainContent" container control, by default its ID will be prefixed "MainContent_Message" to avoid potential collisions with other controls elsewhere within the page.

    Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to "Static"

    Sometimes you don't want your ID values to be nested hierarchically, though, and instead just want the ID rendered to be whatever value you set it as. To enable this you can now use ClientIDMode=Static, in which case the ID rendered will be exactly the same as what you set on the server side on your control. This option gives you the ability to completely control the client ID values sent down by controls.

    Example: Using the ClientIDMode property to control the IDs of Data-Bound List Controls

    Data-bound list/grid controls have historically been the hardest to use/style when it comes to working with Web Forms' automatically generated IDs. Consider a ListView control that displays the contents of a data-bound collection (in this case, airports), with code-behind that dynamically databinds a list of airports to the ListView. At runtime this will by default generate a <ul> list of airports. Note that because the <ul> and <li> elements in the ListView's template are not server controls, no IDs are rendered in the markup.

    Adding Client IDs to Each Row Item

    Now, let's say we wanted to add client IDs to the output so that we can programmatically access each <li> via JavaScript. We want these IDs to be unique, predictable, and identifiable. A first approach would be to mark each <li> element within the template as being a server control (by giving it a runat=server attribute) and giving each one an id of "airport". By default ASP.NET 4 will then render clean IDs (no ctl00-like IDs are rendered).

    Using the ClientIDRowSuffix Property

    Our template now generates unique IDs for each <li> element, but if we are going to access them programmatically on the client using JavaScript we might want the IDs to contain the airport code within them to make them easier to reference. The good news is that we can easily do this by taking advantage of the new ClientIDRowSuffix property on databound controls in ASP.NET 4 to better control the IDs of our individual row elements. To do this, we set the ClientIDRowSuffix property to "Code" on our ListView control. This tells the ListView to use the databound "Code" property from our Airport class when generating the ID. Now, instead of having row suffixes like "1", "2", and "3", we'll have the Airport.Code value embedded within the IDs (e.g. _CLE, _CAK, _PDX, etc). You can use this ClientIDRowSuffix approach with other databound controls like the GridView as well. It is useful anytime you want to program row elements on the client and use clean/identified IDs to easily reference them from JavaScript code.

    Summary

    ASP.NET 4 enables you to generate much cleaner HTML markup from server controls and from within your Web Forms applications. In today's post I covered how you can now easily control the client ID values that are rendered by server controls. In upcoming posts I'll cover some of the other markup improvements that are also coming with the ASP.NET 4 release. Hope this helps, Scott
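    Since the post's inline configuration and markup screenshots didn't survive here, a hedged reconstruction of what the settings look like (the attributes and properties are the real ASP.NET 4 API; the page and control names are illustrative):

        <!-- web.config: opt in to ASP.NET 4 rendering and set a default ClientIDMode -->
        <system.web>
          <pages controlRenderingCompatibilityVersion="4.0" clientIDMode="Predictable" />
        </system.web>

        <!-- per control: render exactly this ID -->
        <asp:Label ID="Message" runat="server" ClientIDMode="Static" />

        <!-- databound list: suffix each row's ID with the bound Code property -->
        <asp:ListView ID="Airports" runat="server" ClientIDRowSuffix="Code">
            ...
        </asp:ListView>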

  • How do I build and install the gspca webcam driver?

    - by sam
    I tried to install gspca to run an Orite webcam on Ubuntu 12.04 64-bit, but I failed. A lot of headers are missing; here are my instructions, which failed:

        wget http://mxhaard.free.fr/spca50x/Download/gspcav1-20071224.tar.gz
        tar zxvf gspcav1-20071224.tar.gz
        cd gspcav1-20071224/
        sudo ./gspca_build
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/linux/config.h
        sudo mkdir /usr/src/linux-headers-3.2.0-25-generic/include/asm
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/asm/semaphore.h
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/linux/videodev.h
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/linux/smp_lock.h

    How do I solve it? I move to /usr/src and make:

        sam@sam:/usr/src/gspcav1-20071224$ sudo make
        make -C /lib/modules/`uname -r`/build SUBDIRS=/usr/src/gspcav1-20071224 CC=cc modules
        make[1]: Entering directory `/usr/src/linux-headers-3.2.0-25-generic'
          CC [M]  /usr/src/gspcav1-20071224/gspca_core.o
        /usr/src/gspcav1-20071224/gspca_core.c:37:26: fatal error: linux/config.h: No such file or directory
        compilation terminated.
        make[2]: *** [/usr/src/gspcav1-20071224/gspca_core.o] Error 1
        make[1]: *** [_module_/usr/src/gspcav1-20071224] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-25-generic'
        make: *** [default] Error 2
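    A hedged observation rather than a patch: the standalone gspcav1-20071224 driver targets 2.6-era kernels (linux/config.h was removed from the kernel tree long before 3.2, which is why touching empty headers cannot fix the build), and the gspca framework has been part of the mainline kernel since 2.6.27, so on 12.04 the in-kernel driver may already support the camera:

        # check whether the in-kernel gspca modules exist and load one
        lsmod | grep gspca
        sudo modprobe gspca_main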

  • Jenkins Parameterized Trigger + Copy Artifact

    - by Josh Kelley
    I'm working on setting up Jenkins to handle our release builds. A release build consists of a Windows installer that includes some binaries that must be built on Linux. Here's what I have so far: The Windows portion and Linux portion are set up as separate Jenkins projects. The Windows project is parameterized, taking the Subversion tag to build and release. As part of its build, the Windows project triggers a build of that same Subversion tag for the Linux project (using the Parameterized Trigger plugin) then copies the artifacts from the Linux project (using the Copy Artifact plugin) to the Windows project's workspace so that they can be included in the Windows installer. Where I'm stuck: Right now, Copy Artifact is set up to copy the last successful build. It seems more robust to configure Copy Artifact to copy from the exact build that Parameterized Trigger triggered, but I'm having trouble figuring out how to make that work. There's an option for a "build selector" parameter that I think is intended to help with this, but I can't figure out how it's supposed to be set up (and blindly experimenting with different possibilities is somewhat painful when the build takes an hour or two to find success or failure). How should I set this up? How does build selector work?

  • Trial/Free & Full Version VS. Free App + In-app billing?

    - by SERPRO
    I'm just wondering what would be the best strategy to publish an application on the Android Market. If you have a free and a paid version, you have two codebases to update (I know they will be 99% the same, but still), and besides, all the popular paid apps are quite easy to find for "free" in "alternative" markets. Also, if you have any stored data in the trial/free version, you lose it when you buy the full version. On the other hand, if you publish a free application but allow the user to unlock options inside it (remove ads/more settings/etc...), you only have to worry about one codebase. I don't know the drawbacks of that strategy, or how easy/hard it is to hack it to get all the options for "free".

  • How to build a good service layer in ASP.NET?

    - by Swippen
    I have looked through some questions and technologies for building a good service layer, but I have some questions that I need help with. First, some information about our requirements. We currently have a number of web applications that talk to each other in a spiderweb-looking way (all talking to each other in a confusing way via web services and database data). We want to change this so that all applications go through a service layer, where we can work more with caching and encapsulate common functionality, and more. We want this layer to also have a Web API so that 3rd-party clients can consume information from the service. The problem I see is that if we build the service layer with, say, MVC4 Web API, don't the applications need to communicate with each other through the Web API, meaning we have to construct URLs and consume JSON/XML? That does not sound very efficient. I assume a better method would be working with entities and WCF to communicate between the applications, but then we might lose the Web API magic? So the question is whether there is a way to consume a service layer both as a Web API (JSON/XML) and as a more backend service layer working with entities. If we are forced to use two different service layers, we might have to duplicate some functionality and other bad things. I hope the question is clear enough; please ask if you need any more information.

  • Using the promote builds plugin to tag subversion repository in jenkins

    - by mark
    We have a task which builds based on data from 4 different SVN repositories. I want to allow QA to promote a build, so that the revisions participating in the build are tagged with the build number and an optional label. I have encountered the following problem: the promoted build may not be the most recent build. How do I know the SVN revision of each of the four repositories used during that build? I know that each build has this information in the revision.txt and build.xml files associated with the build, but how does it become available in the context of promotion? Thanks. P.S. Asked here before, but did not get a satisfying answer.
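    A hedged sketch of one approach: the Promoted Builds plugin runs promotion steps with environment variables such as PROMOTED_JOB_NAME, PROMOTED_NUMBER and PROMOTED_URL that point at the build being promoted (not the latest build), so a shell promotion step can fetch the archived revision.txt of exactly that build and tag from it. The artifact path and REPO_URL below are assumptions about your job's layout:

        # fetch the revision info archived with the promoted build
        wget -O revision.txt "${PROMOTED_URL}artifact/revision.txt"
        REV=$(cat revision.txt)
        # tag one repository at the recorded revision (repeat for each of the four)
        svn copy -r "$REV" "$REPO_URL/trunk" "$REPO_URL/tags/build-$PROMOTED_NUMBER" -m "Tag build $PROMOTED_NUMBER"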

  • forwarding my domain to ning site, vs paying for mapping. SEO value? [closed]

    - by myf
    Possible Duplicate: Could I buy a domain name to increase traffic to my site like this? hello, and thanks for your time to answer. really appreciate that! my domain url is keyword stuffed (homes for sale and the city name). does it make a difference if I foward that to my ning site, which is www.homesforsale(in city name).ning.com or is it just the same for SEO / pagerank value as paying ning for the proper url mapping. thanks so much!

  • Wireless card on HP laptop not working

    - by D. Strout
    I just bought an HP Envy m6-1125dx online from Best Buy. When I got it home and started it up, the wireless card did not work well - at all. I could connect, but any real usage would cause the connection to start dropping every 30 seconds or so, and it would be really slow. Taking another look at the reviews on the Best Buy site, it seems only a few others had this problem, so I took it to my local Best Buy and exchanged it for another unit. Got it home again and the card had the same issues. Which leads to my dilemma. First: does this model have several different cards that it could come with? Mine is a Ralink RT5390R (on both units I received). If it does, then I can keep exchanging until I get a unit with a different card. I wouldn't ask this, except it seems weird that only a few people mentioned this issue, so I thought that might be one possibility. I looked in to replacing the card with a different one myself, but it seems that HP blocks certain wireless cards. However, some people reported success in replacing the card, and this site said it was only an issue on "older HP computer[s]". Can anyone confirm this? Finally, if that fails/will not work, does anyone know what I can get through Best Buy? I am concerned that they will not put any different card than the Ralink, and after two of those, I don't want that. Can I ask Best Buy support to use a different card? Can they even get another card from HP? I guess the base question is: should I attempt to replace the card myself (two days via Amazon to get a new card), should I try to get the laptop repaired through Best Buy (two - four weeks), should I go for a different model laptop from Best Buy, or should I try a different unit of the same model (three's the charm?).
