Search Results

Search found 18014 results on 721 pages for 'build automation'.


  • Hidden divs for "lazy javascript" loading? Possible security/other issues?

    - by xyld
    I'm curious about people's opinions and thoughts on this situation. The reason I'd like to lazy-load JavaScript is performance: loading JavaScript at the end of the body reduces browser blocking and results in much faster page loads. But there is some automation I'm using to generate the HTML (Django, specifically). This automation has the convenience of allowing forms to be built with "widgets" that output whatever content they need to render themselves (extra JavaScript, CSS, ...). The problem is that the widget wants to output JavaScript immediately into the middle of the document, but I want to ensure all JavaScript loads at the end of the body. When the following widget is added to a form, you can see it renders some <script>...</script> tags:

    ```python
    class AutoCompleteTagInput(forms.TextInput):
        class Media:
            css = {'all': ('css/jquery.autocomplete.css',)}
            js = (
                'js/jquery.bgiframe.js',
                'js/jquery.ajaxQueue.js',
                'js/jquery.autocomplete.js',
            )

        def render(self, name, value, attrs=None):
            output = super(AutoCompleteTagInput, self).render(name, value, attrs)
            page_tags = Tag.objects.usage_for_model(DataSet)
            tag_list = simplejson.dumps([tag.name for tag in page_tags], ensure_ascii=False)
            return mark_safe(u'''<script type="text/javascript">
                jQuery("#id_%s").autocomplete(%s, {
                    width: 150, max: 10, highlight: false, scroll: true,
                    scrollHeight: 100, matchContains: true, autoFill: true
                });
            </script>''' % (name, tag_list,)) + output
    ```

    What I'm proposing is that if someone uses a <div class="lazy-js">...</div> with some CSS (.lazy-js { display: none; }) and some JavaScript (jQuery('.lazy-js').each(function(index) { eval(jQuery(this).text()); });), you can effectively force all JavaScript to run at the end of the page load:

    ```python
    class AutoCompleteTagInput(forms.TextInput):
        class Media:
            css = {'all': ('css/jquery.autocomplete.css',)}
            js = (
                'js/jquery.bgiframe.js',
                'js/jquery.ajaxQueue.js',
                'js/jquery.autocomplete.js',
            )

        def render(self, name, value, attrs=None):
            output = super(AutoCompleteTagInput, self).render(name, value, attrs)
            page_tags = Tag.objects.usage_for_model(DataSet)
            tag_list = simplejson.dumps([tag.name for tag in page_tags], ensure_ascii=False)
            return mark_safe(u'''<div class="lazy-js">
                jQuery("#id_%s").autocomplete(%s, {
                    width: 150, max: 10, highlight: false, scroll: true,
                    scrollHeight: 100, matchContains: true, autoFill: true
                });
            </div>''' % (name, tag_list,)) + output
    ```

    Never mind the details of my specific implementation (the specific media involved); I'm looking for a consensus on whether lazy-loading JavaScript through hidden tags like this can pose issues, security-related or otherwise. One of the most convenient parts is that it follows the DRY principle rather well, IMO, because you don't need to hack up a specific lazy-load for each instance on the page. It just "works". UPDATE: I'm not sure if Django has the ability to queue things (via fancy template inheritance or something?) to be output just before the closing </body>?
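    A hedged alternative sketch, not from the question, that avoids both the hidden div and the eval: have the widget emit only data, and let a single script loaded at the end of the body initialise everything it finds. The class below is a hypothetical variation on the AutoCompleteTagInput above; Tag, DataSet and the jQuery autocomplete plugin are assumed from the question.

    ```python
    import json

    from django import forms
    from django.utils.html import escape
    from django.utils.safestring import mark_safe


    class DataOnlyAutoCompleteTagInput(forms.TextInput):
        """Hypothetical variant: render the autocomplete options into a data
        attribute instead of an inline <script>, so nothing executes until a
        single initialiser at the end of <body> picks these elements up."""

        def render(self, name, value, attrs=None):
            output = super(DataOnlyAutoCompleteTagInput, self).render(name, value, attrs)
            page_tags = Tag.objects.usage_for_model(DataSet)  # assumed from the question
            tag_list = json.dumps([tag.name for tag in page_tags], ensure_ascii=False)
            config = (u'<span class="autocomplete-config" data-field="id_%s" '
                      u'data-tags="%s"></span>' % (name, escape(tag_list)))
            return mark_safe(config) + output
    ```

    A single deferred script can then select the .autocomplete-config elements, read data-field and data-tags, and call .autocomplete() once the page has loaded, which sidesteps the eval question entirely.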

    Read the article

  • How to check if new version of Chrome is available?

    - by serg
    I am trying to build an extension that would notify the user when a new version of Chrome is available. I tried to inspect the network traffic while Chrome checks for an update, and it sends a request to http://74.125.95.113/service/update2?w=3:{long_encoded_string}, which returns XML with the information I need:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <gupdate xmlns="http://www.google.com/update2/response" protocol="2.0" server="prod">
      <daystart elapsed_seconds="31272"/>
      <app appid="{8A69D345-D564-463C-AFF1-A69D9E530F96}" status="ok">
        <updatecheck status="noupdate"/>
        <ping status="ok"/>
      </app>
    </gupdate>
    ```

    Besides sending {long_encoded_string} as a URL parameter, it also sends an encoded cookie. Maybe someone familiar with the Chrome build process can shed some light on those encoded strings and how to build them? Or maybe there is another, easier way (I have a feeling that string encoding is a dead end for me)?

    Read the article

  • Can't unwrap Optional.None tableviewcell

    - by Mathew Padley
    I've a table view that has a custom table view cell in it. My problem is that when I try to assign a value to a variable in the custom table view cell, I get the stated error. Now, I think it's because the said variable is not initialised, but it has me completely stumped. This is the custom table cell:

    ```swift
    import Foundation
    import UIKit

    class LocationGeographyTableViewCell: UITableViewCell {
        //@IBOutlet var Map : MKMapView;
        @IBOutlet var AddressLine1 : UILabel;
        @IBOutlet var AddressLine2 : UILabel;
        @IBOutlet var County : UILabel;
        @IBOutlet var Postcode : UILabel;
        @IBOutlet var Telephone : UILabel;

        var location = VisitedLocation();

        func Build(location:VisitedLocation) -> Void {
            self.location = location;
            AddressLine1.text = "test";
        }
    }
    ```

    My cellForRowAtIndexPath is:

    ```swift
    override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! {
        var addressCell = tableView.dequeueReusableCellWithIdentifier("ContactDetail") as? LocationGeographyTableViewCell;
        if !addressCell {
            addressCell = LocationGeographyTableViewCell(style: UITableViewCellStyle.Value1, reuseIdentifier: "ContactDetail");
        }
        addressCell!.Build(Location);
        return addressCell;
    }
    ```

    As I say, I'm completely baffled; the Build call does reach the correct function in the table view cell. Any help will be gratefully appreciated. Ta

    Read the article

  • Set up Qt and PyQt on Mac OS X so my app can also be deployed on Windows

    - by hk_programmer
    Hi, I've been coding with Python and C++ and now need to work on building a GUI for data visualization purposes. I work on Mac OS X Snow Leopard (Intel), Python 3.1, using gcc 4.2.1 (from Xcode 3.1). I want to first install Qt and then PyQt. My goals are to be able to:

    - quickly prototype the GUI, and the logic that drives it, using PyQt and Python
    - if I decide I need the speed, or if it's fairly easy to translate my GUI into C++ using the Qt tools, have the option to translate my app into C++
    - deploy my application onto Windows (both the Python and the C++ versions of my app)

    Given the goals above, what are the correct steps I should take, and what issues should I be aware of when setting up Qt and PyQt? Which other deployment tools do I need? From my reading so far, here's what I have:

    - download the Qt source for Mac and configure it with -platform macx-g++42 -arch x86_64 -no-framework (I've read somewhere that building it as a framework causes some trouble in deployment and/or debugging, but I can't find the article anymore)
    - download the latest SIP source and build it
    - download the latest PyQt source and build it (any special options I should pay attention to?)

    For deployment, I've read that I would need to use py2exe/cx_freeze for Windows and py2app for Mac: http://arstechnica.com/open-source/guides/2009/03/how-to-deploying-pyqt-applications-on-windows-and-mac-os-x.ars. But it seems that what the article describes is deploying an app you build on Windows on the Windows platform, and vice versa. How do you deploy to Windows (is it even possible?) if you are writing your Qt app on a Mac? Really appreciate the help.
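    Not an answer to the deployment part, but once Qt, SIP and PyQt are built, a small smoke test like the one below (a sketch, assuming the PyQt4 bindings are what gets installed) is a quick way to confirm the whole tool-chain works before worrying about py2exe/py2app:

    ```python
    import sys

    from PyQt4 import QtGui  # assumes PyQt4 built from source as described above


    def main():
        # If a window appears, Qt, SIP and PyQt are wired together correctly.
        app = QtGui.QApplication(sys.argv)
        label = QtGui.QLabel("Qt + PyQt build OK")
        label.resize(240, 80)
        label.show()
        sys.exit(app.exec_())


    if __name__ == "__main__":
        main()
    ```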

    Read the article

  • SQL Server 2005 multiple insert with C#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

    ```csharp
    class Entry {
        string Id { get; set; }
        string Name { get; set; }
    }
    ```

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

    ```csharp
    static void InsertEntries(IEnumerable<Entry> entries) {
        // build a SqlCommand object
        using (SqlCommand cmd = new SqlCommand()) {
            ...
            const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0},@name{0});";
            int count = 0;
            string query = string.Empty;
            // build a large query
            foreach (var entry in entries) {
                query += string.Format(refcmdText, count);
                cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                count++;
            }
            cmd.CommandText = query;
            // and then execute the command
            ...
        }
    }
    ```

    My question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

    ```csharp
    using (SqlCommand cmd = new SqlCommand()) {
        using (SqlConnection conn = new SqlConnection()) {
            // assign connection string and open connection
            ...
            cmd.Connection = conn;
            foreach (var entry in entries) {
                cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id,@name);";
                cmd.Parameters.AddWithValue("@id", entry.Id);
                cmd.Parameters.AddWithValue("@name", entry.Name);
                cmd.ExecuteNonQuery();
            }
        }
    }
    ```

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!

    Read the article

  • Pythagoras tree with g2d

    - by owca
    I'm trying to build my first fractal (a Pythagoras tree) in Java using Graphics2D. Here's what I have now:

    ```java
    import java.awt.*;
    import java.awt.geom.*;
    import javax.swing.*;
    import java.util.Scanner;

    public class Main {
        public static void main(String[] args) {
            int i = 0;
            Scanner scanner = new Scanner(System.in);
            System.out.println("Give amount of steps: ");
            i = scanner.nextInt();
            new Pitagoras(i);
        }
    }

    class Pitagoras extends JFrame {
        private int powt, counter;

        public Pitagoras(int i) {
            super("Pythagoras Tree.");
            setSize(1000, 1000);
            setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            setVisible(true);
            powt = i;
        }

        private void paintIt(Graphics2D g) {
            double p1 = 450, p2 = 800, size = 200;
            for (int i = 0; i < powt; i++) {
                if (i == 0) {
                    g.drawRect((int) p1, (int) p2, (int) size, (int) size);
                    counter++;
                } else {
                    if (i % 2 == 0) {
                        // here I must draw two squares
                    } else {
                        // here I must draw a right triangle
                    }
                }
            }
        }

        @Override
        public void paint(Graphics graph) {
            Graphics2D g = (Graphics2D) graph;
            paintIt(g);
        }
    }
    ```

    So basically I set the number of steps and then draw the first square (p1, p2 and size). Then, if the step is odd, I need to build a right triangle on top of the square. If the step is even, I need to build two squares on the free sides of the triangle. What method should I choose for drawing both the triangle and the squares? I was thinking about drawing the triangle with simple lines and transforming them with AffineTransform, but I'm not sure if that's doable, and it doesn't solve drawing the squares.

    Read the article

  • Compiling + testing an Android library with the JDK?

    - by Jarle Hansen
    Hi all, I am creating a library for Android that others can include in their own projects. So far I have been working on it as a normal Java project with JDK 1.6 set up as the system library. This works just fine in Eclipse when I add android.jar. The issue comes when I try to run my build script. I am running Gradle and doing a normal compile-and-test build cycle. My thinking was that it doesn't matter if I compile it with a normal JDK, since this is not a standalone application. The benefit of creating a normal Java project is that Gradle supports it much better. My project also does not contain any UI at all. However, the problem is that android.jar and the JDK of course contain lots of the same classes, and I think that this is what messes up my build script. Everything crashes when running the tests (the tests are in the same project under src/test/java). My question is: how should I create this project that is meant to be included in Android projects as a third-party library? Should I create it as an Android project in Eclipse even though I am only creating a library that does not use any of the UI features? Also, should the tests be in a separate project? Thanks for all responses!

    Read the article

  • Dealing with Expression Blend's lack of support for C++/CLI projects

    - by Brian Ensink
    I have a WPF C# project that references a C++/CLI mixed-mode project. I'm having trouble using the WPF project in Expression Blend 3. I'm new to Blend, so perhaps this is obvious, but it won't display the XAML designer properly until it builds the project. In my case it complains that my custom commands are not "recognized or accessible" and that the solution is to build the project in Blend. But I can't build the project, because it references a C++/CLI mixed-mode project, which Blend won't load. The WPF project is pure C#; it just happens to reference a C++/CLI mixed-mode project, and I'm not asking Blend to do anything with the mixed-mode assembly. How can I work around this problem?

    Edit: I was able to get it to build by removing the reference to the C++/CLI mixed-mode project and replacing it with a reference to the actual assembly. However, this is not ideal because, in my past experience, Visual Studio will not always be able to resolve the reference when switching between release and debug configurations.

    Read the article

  • .bat file to update loopback controller to external ip

    - by cable729
    Okay, so I've figured out how to get my external IP using wget:

    ```
    wget -q -O - http://whatismyip.com/automation/n09230945.asp
    ```

    That outputs the IP to the command console; adding a redirect to currentip.txt at the end will write it to a text file instead. But what I want to do is use:

    ```
    netsh interface ip set address name="Local Area Connection 2" source=static addr=[WHAT DO I PUT HERE]
    ```

    Also, a way to keep the command prompt window from flashing would be nice too :)
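    If a small Python helper is acceptable instead of pure batch, a hedged sketch along these lines does the same lookup and hands the result straight to netsh. The mask value is an assumption (netsh needs one for a static address), the command still has to run with administrative rights, and the URL and interface name are taken from the question as-is.

    ```python
    import subprocess
    from urllib.request import urlopen

    # Same lookup the .bat used, minus wget.
    ip = urlopen("http://whatismyip.com/automation/n09230945.asp").read().decode().strip()

    # Reuse the exact netsh syntax from the question; the mask is an assumed placeholder.
    cmd = ('netsh interface ip set address name="Local Area Connection 2" '
           'source=static addr=%s mask=255.255.255.0' % ip)
    subprocess.check_call(cmd)
    ```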

    Read the article

  • What /else/ causes this?

    - by Mordachai
    ```
    MFC Toolbox Library.lib(SimpleFileIO.obj) : error LNK2005: _wcsnlen already defined in libcmtd.lib(wcslen_s.obj)
    fatal error LNK1169: one or more multiply defined symbols found
    ```

    This is driving me nuts. Normally one gets this if the various projects that are part of the solution do not agree on which CRT to use (single-threaded, multi-threaded, release or debug). However, I have been over this thing about 500 times now, and they all agree.

    Background: this is a VS 2010 project just converted from VS 2008. MFC Toolbox Library.lib is set to compile as a static library, using /MTd, as is the target .exe I am trying to compile in this solution. Further, the VS 2008 solution this was converted from already compiles and links properly! So it's not that there is a disagreement between the two .vcproj files - or at least there wasn't before the conversion. Furthermore, the MFC Toolbox Library is used by about 25 other projects in another solution, and in that solution (Master Build English) it compiles and links against those other projects without complaint in both debug and release targets.

    I have just spent the last hour going over every single project property of this target project (Cimex Header Viewer) against several different target .exe projects in the Master Build English solution, and I cannot find a difference. They appear to be identical, except that they have different names. I've tried doing a clean & build all. I'm simply out of ideas. Does anyone have a thought on what else I might investigate? I think I'm ready to start chewing glass. :(

    Read the article

  • Dependency injection in C++

    - by Yorgos Pagles
    This is also a question that I asked in a comment on one of Miško Hevery's Google talks dealing with dependency injection, but it got buried in the comments. I wonder how the factory/builder step of wiring the dependencies together can work in C++. I.e. we have a class A that depends on B. The builder will allocate B on the heap, pass a pointer to B into A's constructor while also allocating A on the heap, and return a pointer to A. Who cleans up afterwards? Is it good to let the builder clean up after it's done? It seems to be the correct method, since the talk says the builder should set up objects that are expected to have the same lifetime, or at least dependencies with a longer lifetime (I also have a question on that). What I mean in code:

    ```cpp
    class builder {
    public:
        builder() : m_ClassA(NULL), m_ClassB(NULL) {
        }
        ~builder() {
            if (m_ClassB) {
                delete m_ClassB;
            }
            if (m_ClassA) {
                delete m_ClassA;
            }
        }
        ClassA *build() {
            m_ClassB = new ClassB;
            m_ClassA = new ClassA(m_ClassB);
            return m_ClassA;
        }
    private:
        ClassA *m_ClassA;
        ClassB *m_ClassB;
    };
    ```

    Now if there is a dependency that is expected to last longer than the lifetime of the object we are injecting it into (say ClassC is that dependency), I understand that we should change the build method to something like:

    ```cpp
    ClassA *builder::build(ClassC *classC) {
        m_ClassB = new ClassB;
        m_ClassA = new ClassA(m_ClassB, classC);
        return m_ClassA;
    }
    ```

    What is your preferred approach?

    Read the article

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application

    We have a library of images that we need to have access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates from this vendor frequently enough to say that these images change "randomly" and without our (programmers') knowledge.

    The problem: I don't want 30K images in SVN. Heck, I don't even want to imagine them in my solution. However, our application requires them in order to run properly. So our build/staging servers need access to these images (we have two build servers).

    The question: how would you handle it when your application will not work as specified without access to each of 30k images and you don't control when those images change? I do not want to have a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely do not want a large solution, either). I also don't want a bunch of manual steps to do every time these images change. Our mantra, up to this point, has always been: any developer can download from SVN, compile and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. This way all dev boxes will return a dummy image, and our build/staging/production boxes will return real images (the ones that actually have the vendor's image installer installed). This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.

    Read the article

  • Best way to parse this particular string using awk / sed?

    - by Jack
    Hi, I need to get a particular version string from a file (call it version.lst) and use it to compare against another version in a shell script. For example's sake, the file contains lines that look like this:

    ```
    V1.000 -- build date and other info here -- APP1
    V1.000 -- build date and other info here -- APP2
    V1.500 -- build date and other info here -- APP3
    ```

    ... and so on. Let's say I am trying to grab the first version (in this case, V1.000), the one for APP1. Obviously, the versions can change and I want this to be dynamic. What I have right now works:

    ```sh
    var=`cat version.lst | grep " -- APP1" | grep -Eo V[0-9].[0-9]{3}`
    ```

    The pipe to grep gets the line containing APP1, and the second grep extracts the version string. However, I hear grep is not the way to do this, so I'd like to learn the best way using awk or sed. Any ideas? I am new to both and haven't found a tutorial easy enough to learn the syntax from. Do they support egrep-style regular expressions? Thanks!
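    In awk, the whole thing can collapse to a single pass that matches the APP1 line and prints its first field, e.g. awk '/ -- APP1$/ { print $1 }' version.lst. For comparison, here is a hedged Python sketch of the same extraction (the file name and the " -- APP1" marker are taken from the example above), in case a small helper script is ever more convenient than a one-liner:

    ```python
    import re


    def version_for(app, path="version.lst"):
        """Return the leading Vx.xxx token from the line tagged with `app`, or None."""
        with open(path) as fh:
            for line in fh:
                if line.rstrip().endswith("-- " + app):
                    match = re.match(r"V\d\.\d{3}", line)
                    if match:
                        return match.group(0)
        return None


    print(version_for("APP1"))  # expected: V1.000
    ```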

    Read the article

  • Eclipse Ant Builder problem

    - by styx777
    I made a custom Ant script to automatically create a jar file each time I do a build. This is what it looks like:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <project name="TestProj" basedir="." default="jar">
        <property name="dist" value="dist" />
        <property name="build" value="bin/test/testproj" />
        <target name="jar">
            <jar destfile="${dist}/TestProj.jar">
                <manifest>
                    <attribute name="Main-Class" value="test.testproj.TestProj" />
                </manifest>
                <fileset dir="${build}" />
            </jar>
        </target>
    </project>
    ```

    I added it by right-clicking my project > Properties > Builders, clicking New > Ant Builder, and then specifying the location of the above XML file. However, when I run the result with:

    ```sh
    java -jar TestProj.jar
    ```

    I get a NoClassDefFoundError for test/testproj/TestProj. I'm using Eclipse on Ubuntu. TestProj is the name of the class and it's in package test.testproj. I'm pretty sure there's something wrong with the manifest, and probably with the location of the XML file as well, but I'm not sure how to fix this. Any ideas?

    Read the article

  • Rails 3 HABTM Strange Association: Project and Employee in a tree.

    - by Mauricio
    Hi guys, I have to adapt an existing model to a new relation. I have this: a Project has many Employees, and the Employees of a Project are organized in some kind of hierarchy (nothing fancy; I resolved this by adding a parent_id to each employee to build the 'tree'):

    ```ruby
    class Employee < AR::Base
      belongs_to :project
      belongs_to :parent, :class_name => 'Employee'
      has_many :childs, :class_name => 'Employee', :foreign_key => 'parent_id'
    end

    class Project < AR::Base
      has_many :employees
    end
    ```

    That worked like a charm. Now the new requirement is: Employees can belong to many Projects at the same time, and the hierarchy will be different according to the project. So I figured I would need a new join table for the HABTM, and a new class to access the parent_id and build the tree. Something like:

    ```ruby
    class ProjectEmployee < AR::Base
      belongs_to :project
      belongs_to :employee
      belongs_to :parent, :class_name => 'Employee' # <--- ??????
    end

    class Project < AR::Base
      has_many :project_employee
      has_many :employees, :through => :project_employee
    end

    class Employee < AR::Base
      has_many :project_employee
      has_many :projects, :through => :project_employee
    end
    ```

    How can I access the parent and the children of an employee for a given project? I need to be able to add and remove children from an employee within a project at will. Thank you!

    Read the article

  • Iterate Through JSON Data for Specific Element - Similar to XPath

    - by Highroller
    I am working on an embedded system and my theory of the overall process follows this methodology:

    1. Send a command to the back end asking for account information in JSON format.
    2. The back end writes a JSON file with all accounts and associated information (there could be 0 to 16 accounts).
    3. Here's where I get stuck: use JavaScript (we are using the jQuery library) to iterate through the returned information for a specific element (similar to XPath), build an array based on the number of elements found, populate a drop-down box to select the account you want to view, and then do stuff with the account info.

    So my code looks like this:

    ```javascript
    loadAccounts = function() {
        $.getJSON('/info?q=voip.accounts[]', function(result) {
            var sipAcnts = $("#sipacnts");
            $(sipAcnts).empty(); // empty the dropdown (if necessary)
            // Get the 'label' element and stick it in an array
            // build the array and append it to the sipAcnts dropdown
            // use array index to ref the accounts info and do stuff with it
        });
    }
    ```

    So what I need is the JSON version of XPath to build the array of voip.accounts.label. The first account's info looks something like this:

    ```json
    {
        "result_set": {
            "voip.accounts[0]": {
                "label": "Dispatch1",
                "enabled": true,
                "user": "1234",
                "name": "Jane Doe",
                "type": "sip",
                "sip": { "lots and lots of stuff": }
            }
        }
    }
    ```

    Am I overcomplicating the issue? Any wisdom anyone could throw down would be greatly appreciated.

    Read the article

  • How to determine if a target will be executed?

    - by Scott Langham
    Hi, I'm writing an MSBuild file and have something like this:

    ```xml
    <ValidateDependsOn>$(ValidateDependsOn);ValidateA</ValidateDependsOn>
    <ValidateDependsOn>$(ValidateDependsOn);ValidateB</ValidateDependsOn>

    <Target Name="BuildA">
      <!-- stuff -->
    </Target>
    <Target Name="BuildB">
      <!-- stuff -->
    </Target>
    <Target Name="ValidateA">
      <Error /> <!-- check properties and machine environment are suitable to run BuildA -->
    </Target>
    <Target Name="ValidateB">
      <Error /> <!-- check properties and machine environment are suitable to run BuildB -->
    </Target>
    ```

    Builds can take a while. Originally we had the Build targets depending on the Validate targets, but sometimes a validate step wouldn't run until the middle of the build, and by then you would have wasted time getting there. So we moved the validate steps to the start by using the ValidateDependsOn pattern to insert the targets to run up front. The problem now is that sometimes during a build BuildB may not actually run, and in that case I don't need, and in fact don't want, ValidateB to run. Is there any way I can selectively update ValidateDependsOn by conditionally knowing which targets will actually be run? I'm looking for something equivalent to:

    ```xml
    <ValidateDependsOn Condition="TargetWillRun(BuildB)">$(ValidateDependsOn);ValidateB</ValidateDependsOn>
    ```

    Read the article

  • How can I install an application on iPhone automatically?

    - by D33pN16h7
    Hello, I need a way to install a distributable application without user intervention. I currently have a distribution profile installed on my device (I can install or uninstall the application by means of iTunes or iPCU); the problem remains on the automation side ("no user intervention is required"). Basically I need to develop a piece of software (maybe hacking iTunesMobileDevice.dll) that installs the application when a valid device (one with a valid distribution profile) is connected to a machine (the application server). So, any ideas? Thanks in advance!

    Read the article

  • How can I change the default startup directory for cmd.exe?

    - by Nano HE
    Hi. My procedure yesterday was as follows:

    1. Click Start, Run and type Regedit.exe.
    2. Navigate to the following branch: HKEY_CURRENT_USER \ Software \ Microsoft \ Command Processor.
    3. In the right pane, double-click Autorun and set the startup folder path as its data, preceded by "CD /d". If the Autorun value is missing, you need to create one, of type REG_EXPAND_SZ or REG_SZ, in the above location. Example: to set the startup directory to D:\learning\perl, set the Autorun value data to CD /d D:\learning\perl.

    Then I clicked Start, Run and typed cmd. It worked, and I could do my Perl practice more conveniently. But today I find that when I try to build my Visual Studio 2005 solution, which includes some pre-build event commands like:

    ```
    perl.exe MyAppVersion.pl
    perl.exe AttrScan.pl
    ```

    it doesn't work. It shows an error: can't find the path. I checked the environment variable settings, and the path variable still contains its value, c:\perl\bin\. Finally, I removed the "Autorun" value from the registry and tested again, and the issue was fixed. I only changed the default startup directory for cmd.exe. Why were the pre-build event perl commands affected? (I am using Windows XP and ActivePerl 5.8.)

    Read the article

  • How best to organize projects folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS '05 web site project to a VS '08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this:

    ```
    c:\root
    c:\root\projectA
    c:\root\projectB
    c:\root\projectC
    ```

    where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this:

    ```
    c:\root\projectA                 (parent folder)
    c:\root\projectA\projectA        (the production code project)
    c:\root\projectA\projectA_Test   (the unit test project)
    c:\root\projectA\TestResults
    c:\root\projectA\projectA.sln
    ```

    How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place, then where do I keep my unit test projects, and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • Source code dependency manager for C++

    - by 7vies
    There are already some questions about dependency managers here, but it seems to me that they are mostly about build systems, while I am looking for something targeted purely at making dependency tracking and resolution simpler (and I'm not necessarily interested in learning a new build system).

    So, typically we have a project and some common code shared with another project. This common code is organized as a library, so when I want to get the latest code version for a project, I should also go get all the libraries from source control. To do this, I need a list of dependencies. Then, to build the project, I can reuse this list too. I've looked at Maven and Ivy, but I'm not sure if they would be appropriate for C++, as they look quite heavily Java-targeted (even though there might be plugins for C++, I haven't found people recommending them). I see it as a GUI tool producing some standardized dependency list which can then be parsed by different scripts etc. It would be nice if it could integrate with source control (tag, get a tagged version with dependencies etc.), but that's optional. Would you have any suggestions? Maybe I'm just missing something, and usually it's done some other way with no need for such a tool? Thanks.

    Read the article

  • Create word document and add image from .NET app

    - by fearofawhackplanet
    I need a way of generating a Word document (from a template or something) and inserting an image at a specific place. Does anyone have any pointers on the best way to do this? I worked on a project that used Office Automation in .NET 1.1 a few years ago, and it was really unspeakably poor. I'm assuming OA has either been improved or been superseded by a better solution, but I'm not finding much advice on Google.

    Read the article

  • how to trigger a script located on a machine in one domain from a machine on another domain

    - by user326814
    Hi, I am basically from QA. What we testers do each day is:

    1. Open a web browser and type in http://11.12.13.27:8080/cruisecontrol (since we are on a particular network, only we can access this).
    2. Check if the latest nightly build has been successful. If it is, deploy it to a test environment by clicking on the 'Deploy this build' link. This deployment takes around 1-1.5 hours, during which we cannot use our machines to work on anything else. Only after this deployment can we begin to test.

    Now, I wanted to know whether the following is possible: while at home in the morning, I use something that triggers a script (which will be on my machine at the workplace). This script will in turn automatically deploy the build. I already have such a script. What I want to know is how to trigger this script from my home machine. Is it even possible? For example, the external trigger would say "Deploy xxx branch on yyy test environment", and the script on my workplace machine would be invoked and deploy the build before I actually get to my desk. Please help. I am from QA and have no idea about all this.
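    One common pattern is to leave a tiny HTTP listener running on the workplace machine that kicks off the existing deploy script when a URL is hit. Below is a minimal Python sketch of that idea, not the poster's actual setup: the script path and port are hypothetical, and the office machine would still have to be reachable from home (for example over VPN).

    ```python
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DEPLOY_SCRIPT = r"C:\scripts\deploy_build.bat"  # hypothetical path to the existing deploy script


    class TriggerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/deploy":
                # Fire and forget: the deployment itself takes 1-1.5 hours.
                subprocess.Popen(["cmd", "/c", DEPLOY_SCRIPT])
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"Deployment started\n")
            else:
                self.send_response(404)
                self.end_headers()


    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), TriggerHandler).serve_forever()
    ```

    Hitting http://office-machine:8000/deploy from a browser at home (or from a scheduled task) would then start the deployment; in practice you would want at least a shared-secret token on the URL before exposing anything like this.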

    Read the article
