Search Results

Search found 17285 results on 692 pages for 'incremental build'.

Page 496/692

  • Unit testing custom controls in Silverlight

    - by Hrvoje
    I have several custom controls (frames for content and layout management, like a wrap panel) and would like to write unit tests for them. It's hard to find good examples apart from the Silverlight Control Toolkit, which has some helper classes for unit testing but is quite complicated. For MVVM classes it's easy to write tests, because they don't use the Silverlight dependency system and infrastructure. My questions: how do I unit test a DependencyProperty, and what do I need to test? How do I test an attached property? Do I test bindings with a theme or a UserControl (like a simple TextBlock content binding), or command/event binding in MVVM with a UserControl? What else should I test in my custom controls, besides my business logic? Is there any good tutorial for achieving tests like those in the control toolkit? How do I start? Is the Silverlight Control Toolkit the only option for learning? For the testing framework I'm using the one from the control toolkit, and for continuous integration on a TFS build server I planned to use StatLight (from CodePlex). Any advice on that?
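
    A minimal sketch of what the DependencyProperty question could look like as a test with the toolkit's test framework (MyWrapPanel and its SpacingProperty are hypothetical stand-ins for one of your controls):

        [TestClass]
        public class WrapPanelTests : SilverlightTest
        {
            [TestMethod]
            public void Spacing_RoundTripsThroughDependencyProperty()
            {
                // Set through the DP system, then read back through both
                // the DP system and the CLR wrapper to prove they agree.
                var panel = new MyWrapPanel();
                panel.SetValue(MyWrapPanel.SpacingProperty, 8.0);
                Assert.AreEqual(8.0, (double)panel.GetValue(MyWrapPanel.SpacingProperty));
                Assert.AreEqual(8.0, panel.Spacing);
            }
        }

    The same shape works for attached properties (call the static Get/Set helpers instead); bindings are usually exercised by loading a small piece of XAML and asserting on the rendered values.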

  • Best way to style GWT widgets in a library

    - by helios
    I'm developing some widgets into a library for internal use at the company I work for, and I don't know the recommended way to style them. There are at least these ways: (1) use Widget.setPrimaryStyleName and let the user provide an external CSS file; we use Maven archetypes to build applications, so we can provide default styles, but I don't like this very much. (2) Use the GWT 2.0 ClientBundle/CssResource mechanism, so we can compile the CSS into the module and it will be optimized (and it can be browser-dependent too). (3) Provide a module with the styling, something like the default GWT themes, but I don't know exactly how that works. I want to make the components as cohesive as I can (not dependent on externally included CSS) yet leave the door open to modifying styles (if I want to change the way some widget looks in a concrete application). What's your experience in this area?
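
    For the second option, a minimal sketch of the GWT 2.0 ClientBundle/CssResource approach (the interface and file names are made up):

        public interface WidgetResources extends ClientBundle {
            WidgetResources INSTANCE = GWT.create(WidgetResources.class);

            @Source("widget.css")
            WidgetCss css();
        }

        public interface WidgetCss extends CssResource {
            String frame(); // maps to the .frame rule in widget.css
        }

        // In a widget: inject the compiled, optimized CSS once, then use it.
        WidgetResources.INSTANCE.css().ensureInjected();
        setStyleName(WidgetResources.INSTANCE.css().frame());

    This keeps the widgets self-contained; the trade-off is that class names are obfuscated, so per-application overrides have to go through the bundle rather than a plain stylesheet.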

  • Remote Development Workflow with Tomcat and Eclipse

    - by Smithers
    Currently, I have Tomcat installed on the production server to serve my Java webapps. I develop in Eclipse at my personal workstation and then use an Ant script to build the project into a war file and deploy that on the server. This setup works well when I am on the same network as the server, because deploying is almost instantaneous. However, now that I am working remotely, uploading the war file to the server is slow and in most cases very redundant (about 0.5 GB of static media is included in the war file). Is there a better way to update my webapp on Tomcat from Eclipse, and if so, what are the best options for implementing such a solution with minimal effort?
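
    One hedged approach to the redundancy: deploy the webapp as an exploded directory rather than a packaged war, and let rsync ship only the files that actually changed (host and paths are made up):

        # build the exploded webapp locally, then sync only the deltas
        rsync -avz --delete build/exploded-war/ \
            user@server:/opt/tomcat/webapps/myapp/

    The 0.5 GB of static media then crosses the wire once; later deploys transfer only changed class files and resources.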

  • Compass accuracy dilemma

    - by mob1lejunkie
    I need to build a compass for my application. From reading the documentation, it seems there are two reasonable ways of doing this: (1) the Sensor.TYPE_ORIENTATION method. This is the easy way. The problem is that it is not accurate: when I compare my reading with the Snaptic Compass it is about 10-15 degrees off, which for my purposes is unacceptable. (2) Sensor.TYPE_ACCELEROMETER and Sensor.TYPE_MAGNETIC_FIELD with getRotationMatrix(), in conjunction with remapCoordinateSystem() and getOrientation(). The documentation says this "is usually more accurate". The problem is that regardless of the delay I register with the listener, the compass goes crazy, even when the device is stationary on a flat surface. Any suggestions for solving this problem will be greatly appreciated.
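
    A minimal sketch of the second method with a simple low-pass filter on the raw sensor values, which is the usual fix for the jitter (ALPHA is a tuning constant, not an official value):

        private final float[] gravity = new float[3];
        private final float[] geomagnetic = new float[3];
        private static final float ALPHA = 0.1f; // smoothing factor; tune by experiment

        public void onSensorChanged(SensorEvent event) {
            float[] target = (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER)
                    ? gravity : geomagnetic;
            for (int i = 0; i < 3; i++) {
                // low-pass: keep most of the old value, mix in a little of the new
                target[i] += ALPHA * (event.values[i] - target[i]);
            }
            float[] R = new float[9];
            float[] orientation = new float[3];
            if (SensorManager.getRotationMatrix(R, null, gravity, geomagnetic)) {
                SensorManager.getOrientation(R, orientation);
                float azimuth = (float) Math.toDegrees(orientation[0]);
                // ... rotate the compass needle to azimuth ...
            }
        }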

  • Best practices for handling strings in VC++?

    - by Hiren Gujarati
    As I am new to Visual C++, I'm confused by the many types available for handling strings. When I pick one type and start coding, the next step inevitably involves built-in functions that use other types, so I constantly have to convert one kind of string to another. I found many blog posts, but was only more confused by the number of different answers; when I try them, some work and some don't. Please give an answer, or links, that provide a definitive approach to handling the different string types in Visual C++.
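
    There is no single answer, but a common rule of thumb is to pick one type for your own code (std::wstring in Unicode builds, or CString if you live mostly in MFC/ATL) and convert only at API boundaries. A sketch of the usual conversions:

        #include <windows.h>
        #include <string>
        #include <atlstr.h> // CString

        std::wstring ws = L"hello";
        CString cs(ws.c_str());            // std::wstring -> CString
        std::wstring back(cs.GetString()); // CString -> std::wstring

        // narrow -> wide via the Win32 API (here assuming UTF-8 input)
        std::string narrow = "hello";
        int len = MultiByteToWideChar(CP_UTF8, 0, narrow.c_str(), -1, NULL, 0);
        std::wstring wide(len, L'\0');     // len includes the terminating null
        MultiByteToWideChar(CP_UTF8, 0, narrow.c_str(), -1, &wide[0], len);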

  • rgl.snapshot() No Longer Works

    - by bill_080
    I just upgraded R and rgl to the following versions. Now, rgl.snapshot() no longer works. It worked in previous versions. Is there a way around this? R version 2.12.1 (2010-12-16) rgl version 0.92.798 > library(rgl) > x<-rnorm(100) > y<-rnorm(100) > z<-rnorm(100) > r<-0.2 > p <- plot3d(x, y, z, axes=FALSE, box=FALSE, radius=r, type='s', + xlab="", ylab="", zlab="", col=rainbow(100)) > rgl.snapshot("C:\\Temp\\pic.png", fmt="png", top=TRUE ) Error in rgl.snapshot("C:\\Temp\\pic.png") : pixmap save format not supported in this build

  • Inline Conditional Statement in Stored Procedure

    - by Jason
    Here is the pseudo-code for my inline query in my code: select columnOne from myTable where columnOne = '#variableOne#' if len(variableTwo) gt 0 and columnTwo = '#variableTwo#' end I would like to move this into a stored procedure but am having trouble building the query correctly. I assume it would be something like select columnOne from myTable where columnOne = @variableOne CASE WHEN len(@variableTwo) <> 0 THEN and columnTwo = @variableTwo END This is giving me a syntax error. Could someone tell me what I've got wrong? Also, I would like to keep it to a single query rather than branching with an IF statement, and I do not want to build the SQL as a string in the stored procedure and run Exec() on it.
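
    For reference, the usual way to express an optional filter without CASE (which in T-SQL is an expression, not a statement, so it cannot splice a predicate into the WHERE clause) is plain boolean logic; a sketch against the names above:

        SELECT columnOne
        FROM myTable
        WHERE columnOne = @variableOne
          AND (LEN(ISNULL(@variableTwo, '')) = 0 OR columnTwo = @variableTwo)

    When @variableTwo is empty (or NULL), the second condition is satisfied for every row, so the filter effectively disappears: one query, no IF, no dynamic SQL.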

  • Dynamic Navigation

    - by Dooie
    I am building a project in ASP.NET 4.0. My navigation will be database driven: I return a DataTable from the db containing all the pages of my site; some are top level, while others are children, and sometimes children of children, n levels deep. I'm thinking of going down the nested-repeater route and databinding from code-behind, dynamically generating repeaters for children, but I have read that this is not a best practice and that I should consider the ListView control. I want to build a list of links as an unordered list. I cannot find a solid example and was hoping for some pointers/ideas. Thanks Doo
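
    One hedged alternative: since the nesting depth is unknown, skip repeater/ListView nesting entirely and build the nested unordered list recursively in code-behind. A sketch (the PageId/ParentId/Title/Url column names are assumptions):

        // Recursively turn the pages DataTable into nested <ul><li> markup.
        private HtmlGenericControl BuildList(DataTable pages, string parentFilter)
        {
            var ul = new HtmlGenericControl("ul");
            foreach (DataRow row in pages.Select(parentFilter))
            {
                var li = new HtmlGenericControl("li");
                li.Controls.Add(new HyperLink
                {
                    Text = row["Title"].ToString(),
                    NavigateUrl = row["Url"].ToString()
                });
                // recurse for this page's children
                li.Controls.Add(BuildList(pages, "ParentId = " + row["PageId"]));
                ul.Controls.Add(li);
            }
            return ul;
        }

        // Page_Load, with a PlaceHolder control (navPlaceHolder) on the page:
        navPlaceHolder.Controls.Add(BuildList(GetPagesTable(), "ParentId IS NULL"));

    Leaf pages end up with an empty nested <ul>, which is easy to suppress by checking whether the recursive call produced any children before adding it.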

  • java jdbc connection to mysql problem

    - by fatnjazzy
    Hi, I am trying to connect to MySQL from a Java web application in Eclipse. Connection con = null; try { //DriverManager.registerDriver(new com.mysql.jdbc.Driver()); Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost/db_name","root" ,""); if(!con.isClosed()) System.out.println("Successfully connected to " + "MySQL server using TCP/IP..."); } catch(Exception e) { System.err.println("Exception: " + e.getMessage()); } finally { try { if(con != null) con.close(); } catch(SQLException e) { System.out.println(e.toString()); } } I always get the exception: com.mysql.jdbc.Driver. I downloaded the jar from http://forums.mysql.com/read.php?39,218287,220327 and imported it into the Java build path libraries. The MySQL version is 5.1.3. Environment: MySQL 5.1.3 (the db is up and running queries from PHP), Windows XP, Java EE. Thanks

  • Is there a free code coverage tool suitable for use with .NET 4 and NUnit?

    - by Damian Powell
    Is there a free code coverage tool suitable for use with .NET 4 and NUnit that runs from the command line (and is thus suitable for use on a build server)? Please note that any tools requiring editions of Visual Studio higher than Professional are not appropriate in this case. I am asking because I can't get NCover 1.5.8 to work with NUnit 2.5.5 on a .NET 4 C# app. I can run the unit tests, and I can generate a Coverage.Xml file, but it is empty: it contains no sequence points. After a lot of research, I have concluded that this is because NCover 1.5.8 simply doesn't work with .NET 4. However, if you know better, please feel free to say so in your answer.

  • What is my problem with ASP.NET publishing?

    - by Shankarooni
    I am done testing my site and I want to upload it to a university server at an address like http://www.university.edu/mydepartment/myname. The admin told me the server runs .NET 3.5, so I used LINQ. Now, I have tried to upload the site in two ways. When I just copy everything (with modification of the web.config database settings) I get an error: CS0246: The type or namespace name 'DataClassesDataContext' could not be found (are you missing a using directive or an assembly reference?) Version Information: Microsoft .NET Framework Version:2.0.50727.3082; ASP.NET Version:2.0.50727.3082. Note that it says version 2.0 here. Did he just lie to me, or is it my configuration mistake? Anyway, I added the reference and nothing changes. I also tried publishing (Build, Publish) with the option to keep the pre-compiled site updatable, and I get one line saying: this is a marker file generated by the precompilation tool and should not be deleted! What is going on?

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a “Hello World” example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to load a table in Oracle with the same SQL access:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
            SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at the level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively: /*+ parallel(my_oracle_table,8) */ and /*+ parallel(my_hdfs_external_table,8) */. Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead, and the "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop first and afterwards figure out how to tune the OSCH external table definition, but you should work backwards: focus first on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can consume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for running OSCH with the maximum DOP are those where you have exclusive access to all the resources on an Oracle system: when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let’s assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and of the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let’s start with the next rule and then explain it.

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be of relatively similar size.

    In our running example the DOP is 8, which means the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool’s ability to balance is reduced if the HDFS file sizes are grossly out of balance or there are too few of them. For example, suppose my DOP is 8, the number of location files is 8, and I have only 8 HDFS files, where one file is 900GB and the others are 100GB each. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and each of the 100GB files into the 7 remaining location files. The load balance skew is 9 to 1: one PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each location file’s workload is relatively the same. Applying Rule 4 to our DOP of 8, we could divide the workload into 160 files of approximately 10 GB each. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it can create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB; I then did the same load with 12800 files, where each HDFS file was about 80MB. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling load operations on a Hadoop cluster and an Oracle Database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH, and will also outline the pros and cons of the various load methods. This will be followed by a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle’s engineered systems: specifically Exadata and the BDA.

  • Possible to set filter on subform from parent form before subform data loads

    - by tbone
    I have frmParentForm with multiple controls used to build a filter for frmSubForm. On frmParentForm_Load, I am doing (simplified example): Me.sbfInvoice_List.Form.filter = "[created_on] >= #" & Me.RecentOrderDateCutoff & "#" Me.sbfInvoice_List.Form.FilterOn = True The problem is, on initial load, it seems the subform load is occurring first, so the entire table is loaded. Is there a way (in a different event perhaps) to properly set the subform filter from the parent form so it is applied before the subform does its initial data load? (The subform can exist on its own, or as a child of many different parent forms (sometimes filtered, sometimes not), so I'd rather not put some complicated hack in the subform itself to accomplish this.)

  • Has anyone managed to get Visual Studio 2003 running on Windows 7?

    - by Jeremy White
    Yes, I know... I could set up a virtual machine running XP. Unfortunately our build environment is such that we need to be running VC2003, 2005 and 2008 concurrently, and it would be much more convenient if I could run 2003 natively on Windows 7 for the few projects we have that require it. I realize some things may not be available in the IDE, but I was able to run 2003 under Windows Vista, and if I could get the same base level of functionality under Windows 7 I would be extremely happy. Right now I get an error opening the *.pdb file when I compile, after switching VC2003 to run as Administrator in compatibility mode for XP SP2. Thanks!

  • Rails: form input type and getting the filename

    - by Shyam
    Hi, as I am using Ruby on Rails to build an application which only runs locally, I am lost in the woods (a nuby without a compass). I have a simple MVC application, and my view is missing one thing I could really use. I want to select a local file just to retrieve its filename. I know it's relatively easy to use the form tag helpers for uploading: <%= file_field 'upload', 'datafile' %> I wonder how I could get the filename from the selected file without uploading the file.
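
    Browsers won't expose the full local path, but the file name itself can be read client-side from the input's value without any upload; a plain JavaScript sketch (Rails generates the id upload_datafile for that helper):

        var input = document.getElementById('upload_datafile');
        input.onchange = function () {
            // the value may look like "C:\fakepath\photo.png"; keep the last segment
            var filename = this.value.split(/[\\\/]/).pop();
            alert(filename);
        };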

  • Test for external undefined references in Linux

    - by Charles
    Is there a built-in Linux utility that I can use to test a newly compiled shared library for external undefined references? gcc seems to be intelligent enough to check for undefined symbols in my own binary, but if the symbol is a reference to another library, gcc does not check at link time. Instead, I only get the message when I try to link to my new library from another program. It seems a little silly to get undefined-reference messages for a library while I am compiling a different project, so I want to know if I can check all references, internal and external, when I build the library rather than when I link to it. Example error: make -C UnitTests debug make[1]: Entering directory `~/projects/Foo/UnitTests` g++ [ tons of objects ] -L../libbar/bin -lbar -o UnitTests libbar.so: undefined reference to `DoSomethingFromAnotherLibrary` collect2: ld returned 1 exit status make[1]: *** [~/projects/Foo/UnitTests] Error 1
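
    With the GNU linker there is a flag for exactly this: --no-undefined (equivalently -z defs) makes unresolved symbols an error at the moment the shared library itself is linked. A sketch of the library's link line:

        # fail when libbar.so is linked, not when UnitTests links against it;
        # the library must then name all of the libraries it depends on
        g++ -shared -o libbar.so $(OBJS) -Wl,--no-undefined \
            -L../otherlib -lotherlibrary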

  • How to add multiple files to py2app?

    - by Niek de Klein
    I have a Python script which makes a GUI. When the 'Run' button is pressed in this GUI, it runs a function from an imported package (which I made), like this: from predictmiP import predictor class MiPFrame(wx.Frame): [...] def runmiP(self, event): predictor.runPrediction(self.uploadProtInterestField.GetValue(), self.uploadAllProteinsField.GetValue(), self.uploadPfamTextField.GetValue(), \ self.edit_eval_all.Value, self.edit_eval_small.Value, self.saveOutputField) When I run the GUI directly from Python it all works well and the program writes an output file. However, when I make it into an app, the GUI starts, but when I press the button nothing happens. predictmiP does get included in build/bdist.macosx-10.3-fat/python2.7-standalone/app/collect/, like all the other imports I'm using (although it is empty, but that's the same for all the other imports I have). How can I get multiple Python files, or an imported package, to work with py2app?
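
    A hedged guess: py2app's dependency scanner is missing the package's contents (which would explain the empty collect/ entry); forcing it in with the packages option often helps. A setup.py sketch (the entry-script name is made up):

        # setup.py
        from setuptools import setup

        setup(
            app=['mip_gui.py'],  # hypothetical GUI entry script
            options={
                'py2app': {
                    # copy the whole package, submodules included, into the bundle
                    'packages': ['predictmiP'],
                }
            },
            setup_requires=['py2app'],
        )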

  • jQuery: Animated header plugin

    - by Fverswijver
    I'm looking for a jQuery plugin that can help me with the following. I have a list of images I want to use for my header, but they are pretty big (the height especially) and I don't want to resize them to fit my small header div. What I want is a plugin that lets each image start with its top at the top of the div and move upwards until the whole image has been seen; once the bottom of the image reaches the bottom of the div, it should "blend" (an opacity toggle or something alike) into the next image, creating a continuous loop over all the images. I've looked through several plugins but have never found one that achieves what I'm looking for (maybe I'm asking for a tad too much), and my JS isn't good enough to build it myself. Thanks!
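
    A rough sketch of the loop without a plugin, assuming the images sit absolutely positioned inside an overflow:hidden div with id header:

        function cycle($imgs, i) {
            var $img = $imgs.eq(i % $imgs.length).css({ top: 0, opacity: 1 }).show();
            // scroll up until the image's bottom meets the div's bottom...
            $img.animate({ top: $('#header').height() - $img.height() }, 4000)
                // ...then fade out and hand over to the next image
                .fadeTo(1000, 0, function () { cycle($imgs, i + 1); });
        }
        cycle($('#header img').hide(), 0);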

  • BASH: How to remove all files except those named in a manifest?

    - by brice
    I have a manifest file which is just a list of newline separated filenames. How can I remove all files that are not named in the manifest from a folder? I've tried to build a find ./ ! -name "filename" command dynamically: command="find ./ ! -name \"MANIFEST\" " for line in `cat MANIFEST`; do command=${command}"! -name \"${line}\" " done command=${command} -exec echo {} \; $command But the files remain. [Note:] I know this uses echo. I want to check what my command does before using it.
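
    The usual fix for this pitfall: quotes inside a variable are never re-parsed, so the command built above falls apart on the embedded quoting. Building the arguments in a bash array keeps each filename as a single word; a sketch:

        # build the find arguments in an array; "$name" survives as one word
        args=( ./ -type f ! -name MANIFEST )
        while IFS= read -r name; do
            args+=( ! -name "$name" )
        done < MANIFEST

        find "${args[@]}" -exec echo {} \;   # inspect first, then swap echo for rm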

  • how to import other schema jars when using the scomp tool

    - by MikeJiang
    There is a huge number of XML schemas for the business. Some of them are common types like Money.xsd, Address.xsd, etc., while others are business specific like Customer.xsd, ShippingOrder.xsd, etc. So I decided to compile these schemas into 2 jars: one is commonbeans.jar, the other is businessbeans.jar. I've separated them into different folders. Building commonbeans.jar is simple: just run "scomp -out commonbeans.jar ....\common*.xsd". Running "scomp -out businessbeans.jar ....\business*.xsd" is a different story: there are errors saying it can't find those common types, while running "scomp -out businessbeans.jar ....\business*.xsd ....\common*.xsd" will blindly duplicate all the common types into businessbeans.jar. So is there any way to link commonbeans.jar when compiling the business schemas, maybe something like "scomp -out businessbeans.jar ....\business*.xsd commonbeans.jar"? I hope my poor English has expressed my issue!

  • how to modify a json array with jQuery

    - by Emin
    I have the following JSON array of objects in my code: var groups = [ { "gid": 28, "name": "Group 1", "ishidden": false, "isprivate": false }, { "gid": 16, "name": "Group 2", "ishidden": true, "isprivate": false }, { "gid": 31, "name": "Group 3", "ishidden": true, "isprivate": false }, { "gid": 11, "name": "Group 4", "ishidden": false, "isprivate": false }, { "gid": 23, "name": "Group 5", "ishidden": false, "isprivate": false } ]; I can access and iterate through this with no problem using jQuery. However, a situation arose where I need to change the value of one of the items (e.g. change the ishidden property to true for gid 28) and then run some other jQuery function against it. Is this possible, or do I have to re-build the whole object? If possible, how can I achieve this? Any help would be appreciated!
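
    No rebuild is needed; the array holds plain object references, so mutating one entry in place is enough before re-running whatever consumes groups. A sketch:

        // flip ishidden to true for gid 28
        $.each(groups, function (i, group) {
            if (group.gid === 28) {
                group.ishidden = true;
            }
        });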

  • Client-side session timeout redirect in ASP.Net

    - by Mercury821
    I want to build a way to automatically redirect users to Timeout.aspx when their session expires due to inactivity. My application uses forms authentication and relies heavily on update panels within the same aspx page for user interaction, so I don't want to simply redirect after a page-level timer expires. For the same reason, I can't use '<meta http-equiv="refresh"/>'. What I want to do is create a simple AJAX web service with a method called IsSessionTimedOut() that simply returns a boolean. I will use a JavaScript timer to periodically call the method and, if it returns true, redirect to Timeout.aspx. However, I don't want calling this method to reset the session timeout timer, or the session would never time out because of the service call. Is there a clean way to avoid this catch-22? Hopefully there is an easy solution that has so far eluded me.
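
    Client side, the polling half might look like the sketch below (a PageMethods-style proxy is assumed; the catch-22 itself still has to be solved server side by making the check avoid touching session state):

        // poll once a minute; IsSessionTimedOut is the method from the question
        setInterval(function () {
            PageMethods.IsSessionTimedOut(function (timedOut) {
                if (timedOut) {
                    window.location.href = 'Timeout.aspx';
                }
            });
        }, 60 * 1000);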

  • How do you create multiple versions of an ActiveX control?

    - by Peter Ruderman
    Hopefully this is a straightforward question, but googling has proved fruitless (and frustrating, to say the least). Links to good documentation would be greatly appreciated. Here's the problem. We have a web application with an associated ActiveX control. (The control wraps a crufty old MFC application if it matters.) Moving forward, we expect to release multiple versions of this application, and each will have a corresponding version of the control. If someone accesses two versions of the web application, then that user should end up with two different versions of the control on his system. (The controls should play nice and not clobber each other.) In addition, I want to automate this process. Our system has a global version number that applies to all components. If we change the version number, the next build should produce a new version of the control. What's the best way to do this?

  • Google Visualization Annotated Time Line, removing data points.

    - by Vitaly Babiy
    I am trying to build a graph that changes resolution depending on how far you are zoomed in. It looks right when completely zoomed out, and when I zoom in I get higher-resolution data, so the graph looks right there too. The problem is that when I zoom back out, the higher-resolution data does not get cleared from the graph. The tables below the graphs display what is in the DataTable. This is what the drawing code looks like: var g_graph = new google.visualization.AnnotatedTimeLine(document.getElementById('graph_div_json')); var table = new google.visualization.Table(document.getElementById('table_div_json')); function handleQueryResponse(response){ log("Drawing graph") var data = response.getDataTable() g_graph.draw(data, {allowRedraw:true, thickness:2, fill:50, scaleType:'maximized'}) table.draw(data, {allowRedraw:true}) } I am trying to find a way for it to display only the data that is in the DataTable. I have tried removing the allowRedraw flag, but then it breaks the zooming operation. Any help would be greatly appreciated. Thanks

  • How do I use dependencies in a makefile without calling a target?

    - by rassie
    I'm using makefiles to convert an internal file format to an XML file which is sent to other colleagues. They would make changes to the XML file and send it back to us (don't ask, this needs to be this way ;)). I'd like to use my makefile to update the internal files when this XML changes. So I have these rules: %.internal: $(DATAFILES) # Read changes from XML if any # Create internal representation here %.xml: %.internal # Convert to XML here Now the XML could change because of the workflow described above. But since no data files have changed, make would tell me that file.internal is up-to-date. I would like to avoid making the %.internal target phony, and a circular dependency on %.xml obviously doesn't work. Is there any other way I could force make to check for changes in the XML file and rebuild %.internal?
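
    One hedged workaround: add a force target so make always visits the rule, and let the recipe itself compare timestamps, re-importing only when the XML is actually newer (import-tool is a placeholder; recipe lines must start with a tab):

        %.internal: $(DATAFILES) FORCE
        	@if [ -e $*.xml ] && [ $*.xml -nt $@ ]; then \
        	    import-tool $*.xml -o $@; \
        	fi
        	# ... create/refresh the internal representation from $(DATAFILES) here ...

        FORCE:

    The cost is that the rule runs on every invocation, so the timestamp guard inside the recipe is what keeps the expensive work incremental.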
