Search Results

Search found 7061 results on 283 pages for 'target'.

  • JavaCC: Please help me understand token ambiguity.

    - by java.is.for.desktop
    Hello, everyone! I have had many problems understanding how ambiguous tokens can be handled elegantly (or at all) in JavaCC. Let's take this example: I want to parse an XML processing instruction. The format is "<?" <target> <data> "?>": target is an XML name, and data can be anything except ?>, because that is the closing tag. So, let's define this in JavaCC (I use lexical states, in this case DEFAULT and PROC_INST):

        TOKEN : <#NAME : (very-long-definition-from-xml-1.1-goes-here) >
        TOKEN : <WSS : (" " | "\t")+ >  // WSS = whitespaces
        <DEFAULT> TOKEN : {<PI_START : "<?" > : PROC_INST}
        <PROC_INST> TOKEN : {<PI_TARGET : <NAME> >}
        <PROC_INST> TOKEN : {<PI_DATA : ~[] >}  // accept everything
        <PROC_INST> TOKEN : {<PI_END : "?>" > : DEFAULT}

    Now the part which recognizes processing instructions:

        void PROC_INSTR() : {} {
          (
            <PI_START>
            (t=<PI_TARGET>) {System.out.println("target: " + t.image);}
            <WSS>
            (t=<PI_DATA>) {System.out.println("data: " + t.image);}
            <PI_END>
          ) {}
        }

    The problem is (I guess): since <PI_DATA> recognizes "everything", my definition is wrong. Let's test it with <?mytarget here-goes-some-data?>. The target is recognized: "target: mytarget". But now I get my favorite JavaCC parsing error:

        !! procinstparser.ParseException: Encountered "" at line 1, column 15.
        !! Was expecting one of:

    Encountered nothing? Was expecting nothing? Or what? Thank you, JavaCC! I know that I could use the MORE keyword of JavaCC, but that would give me the whole processing instruction as one token, so I'd have to parse/tokenize it further myself. Why should I do that? Am I writing a parser that does not parse? What I need is to tell JavaCC to recognize "everything until ?>" as processing instruction data. How can that be done? Thank you.
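
    As an aside on the "everything until ?>" requirement: outside the tokenizer, that boundary is just a substring search. A minimal plain-Java sketch of the same extraction (not JavaCC code; purely illustrative, with no error recovery):

        // Sketch: split one PI of the shape "<?" target WS data "?>".
        static String[] parsePI(String input) {
            if (!input.startsWith("<?") || !input.endsWith("?>"))
                throw new IllegalArgumentException("not a processing instruction");
            String body = input.substring(2, input.length() - 2); // target WS data
            int ws = body.indexOf(' ');                           // first separator
            String target = (ws < 0) ? body : body.substring(0, ws);
            String data = (ws < 0) ? "" : body.substring(ws + 1);
            return new String[] { target, data };
        }

    For <?mytarget here-goes-some-data?> this yields "mytarget" and "here-goes-some-data"; the grammar-level question is how to express that same "stop at ?>" rule as a token.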

  • VS2008: File creation fails randomly in unit testing?

    - by Tim
    I'm working on implementing a reasonably simple XML serializer/deserializer (log file parser) application in C# .NET with VS 2008. I have about 50 unit tests right now for various parts of the code (mostly for the various serialization operations), and some of them seem to fail mostly at random when they deal with file I/O. The tests are structured so that in the test setup method I create a new empty file at a predetermined location and close the stream I get back. Then I run some basic tests on the file (varying by what exactly is under test). In the cleanup method, I delete the file again.

    A large portion (usually 30 or more, though the number varies run to run) of my unit tests fail in the initialize method, claiming they can't access the file I'm trying to create. I can't pin down the exact reason, since a test that works one run fails the next; they all succeed when run individually. What's the problem here? Why can't I access this file across multiple unit tests?

    Relevant methods for a unit test that fails some of the time:

        [TestInitialize()]
        public void LogFileTestInitialize() {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";
            System.IO.File.Create(this.testPath);
        }

        [TestMethod()]
        public void LogFileConstructorTest() {
            string filePath = this.testPath;
            LogFile target = new LogFile(filePath);
            Assert.AreNotEqual(null, target);
            Assert.AreEqual(this.testPath, target.filePath);
            Assert.AreEqual("empty.lfp", target.fileName);
            Assert.AreEqual(this.testFolder + "\\empty.lfp.lfpdat", target.metaPath);
        }

        [TestCleanup()]
        public void LogFileTestCleanup() {
            System.IO.File.Delete(this.testPath);
        }

    And the LogFile() constructor:

        public LogFile(String filePath) {
            this.entries = new List<Entry>();
            this.filePath = filePath;
            this.metaPath = filePath + ".lfpdat";
            this.fileName = filePath.Substring(filePath.LastIndexOf("\\") + 1);
        }

    The precise error message:

        Initialization method LogFileParserTester.LogFileTest.LogFileTestInitialize threw exception.
        System.IO.IOException: System.IO.IOException: The process cannot access the file
        'C:\Users\<user>\AppData\Local\empty.lfp' because it is being used by another process..

  • java.lang.reflect.Proxy returning another proxy from invocation results in ClassCastException on assignment

    - by matao
    So I'm playing with GeoTools and I thought I'd proxy one of their data-access classes and trace how it is used in their code. I coded up a dynamic proxy, wrapped a FeatureSource (an interface) in it, and off it went happily. Then I wanted to look at some of the transitive objects returned by the FeatureSource as well, since the main thing a FeatureSource does is return a FeatureCollection (FeatureSource is analogous to an SQL DataSource and FeatureCollection to an SQL statement).

    In my InvocationHandler I just passed the call through to the underlying object, printing out the target class/method/args and result as I went. But for calls that return a FeatureCollection (another interface), I wrapped the result in my proxy (the same class but a new instance; that shouldn't matter, should it?) and returned it. BAM! ClassCastException:

        java.lang.ClassCastException: $Proxy5 cannot be cast to org.geotools.feature.FeatureCollection
            at $Proxy4.getFeatures(Unknown Source)
            at MyClass.myTestMethod(MyClass.java:295)

    The calling code:

        FeatureSource<SimpleFeatureType, SimpleFeature> featureSource = ... // create the FS
        featureSource = (FeatureSource<SimpleFeatureType, SimpleFeature>)
                FeatureSourceProxy.newInstance(featureSource, features);
        featureSource.getBounds();         // ok
        featureSource.getSupportedHints(); // ok
        DefaultQuery query1 = new DefaultQuery(DefaultQuery.ALL);
        FeatureCollection<SimpleFeatureType, SimpleFeature> results =
                featureSource.getFeatures(query1); // <- explosion here

    The proxy:

        public class FeatureSourceProxy implements java.lang.reflect.InvocationHandler {
            private Object target;
            private List<SimpleFeature> features;

            public static Object newInstance(Object obj, List<SimpleFeature> features) {
                return java.lang.reflect.Proxy.newProxyInstance(
                    obj.getClass().getClassLoader(),
                    obj.getClass().getInterfaces(),
                    new FeatureSourceProxy(obj, features)
                );
            }

            private FeatureSourceProxy(Object obj, List<SimpleFeature> features) {
                this.target = obj;
                this.features = features;
            }

            public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                Object result = null;
                try {
                    if ("getFeatures".equals(m.getName())) {
                        result = interceptGetFeatures(m, args);
                    } else {
                        result = m.invoke(target, args);
                    }
                } catch (Exception e) {
                    throw new RuntimeException("unexpected invocation exception: " + e.getMessage(), e);
                }
                return result;
            }

            private Object interceptGetFeatures(Method m, Object[] args) throws Exception {
                return newInstance(m.invoke(target, args), features);
            }
        }

    Is it possible to dynamically return proxies of interfaces from a proxied interface, or am I doing something wrong? Cheers!
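
    A hedged observation on why this particular cast can fail (an assumption about this case, not a confirmed diagnosis): getClass().getInterfaces() returns only the interfaces a concrete class implements directly, not those contributed by its superclasses, so a proxy built from a FeatureCollection implementation's direct interfaces may not actually expose FeatureCollection. A sketch of a more defensive interface collection:

        import java.util.Arrays;
        import java.util.LinkedHashSet;
        import java.util.Set;

        // Sketch: collect interfaces from the whole superclass chain before
        // building the proxy (still ignores superinterfaces of interfaces).
        static Class<?>[] allInterfaces(Class<?> c) {
            Set<Class<?>> found = new LinkedHashSet<Class<?>>();
            for (Class<?> k = c; k != null; k = k.getSuperclass()) {
                found.addAll(Arrays.asList(k.getInterfaces()));
            }
            return found.toArray(new Class<?>[0]);
        }

    Passing allInterfaces(obj.getClass()) instead of obj.getClass().getInterfaces() to Proxy.newProxyInstance would then let the returned proxy be cast to any interface visible anywhere in the hierarchy.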

  • Custom event dispatch location

    - by Martino Wullems
    Hello, I've been looking into custom events (and listeners) for quite some time, but I have never succeeded in making one. There are so many different methods: extending the Event class, but also extending the EventDispatcher class. Very confusing! I want to settle this once and for all and learn the appropriate technique.

        package {
            import flash.events.Event;

            public class CustomEvent extends Event {
                public static const TEST:String = 'test'; // what exactly is the purpose of the value in the string?
                public var data:Object;

                public function CustomEvent(type:String, bubbles:Boolean = false,
                                            cancelable:Boolean = false, data:Object = null):void {
                    this.data = data;
                    super();
                }
            }
        }

    As far as I know, a custom class has to be made in which you set the requirements for the event to be dispatched:

        package {
            import flash.display.MovieClip;

            public class TestClass extends MovieClip {
                public function TestClass():void {
                    if (ConditionForHoldToComplete == true) {
                        dispatchEvent(new Event(CustomEvent.TEST));
                    }
                }
            }
        }

    I'm not sure if this is correct, but it should be something along these lines. Now what I want is something like a MouseEvent, which can be applied to a target and does not require a specific class. It would have to work something like this:

        package com.op_pad._events {
            import flash.events.MouseEvent;
            import flash.utils.Timer;
            import flash.events.TimerEvent;
            import flash.events.EventDispatcher;
            import flash.events.Event;

            public class HoldEvent extends Event {
                public static const HOLD_COMPLETE:String = "hold completed";

                var timer:Timer;

                public function SpriteEvent(type:String, bubbles:Boolean = true, cancelable:Boolean = false) {
                    super(type, bubbles, cancelable);
                    timer = new Timer(1000, 1);
                    // somehow find the target this event is placed upon -> target.addEventListener
                    target.addEventListener(MouseEvent.MOUSE_DOWN, startTimer);
                    target.addEventListener(MouseEvent.MOUSE_UP, stopTimer);
                }

                public override function clone():Event {
                    return new SpriteEvent(type, bubbles, cancelable);
                }

                public override function toString():String {
                    return formatToString("MovieEvent", "type", "bubbles", "cancelable", "eventPhase");
                }

                //////////////////////////////////
                /////  c o n d i t i o n s  /////
                //////////////////////////////////

                private function startTimer(e:MouseEvent):void {
                    timer.start();
                    timer.addEventListener(TimerEvent.TIMER_COMPLETE, complete);
                }

                private function stopTimer(e:MouseEvent):void {
                    timer.stop();
                }

                public function complete(e:TimerEvent):void {
                    dispatchEvent(new HoldEvent(HoldEvent.HOLD_COMPLETE));
                }
            }
        }

    This obviously won't work, but it should give you an idea of what I want to achieve. This should be possible, because MouseEvent can be applied to almost everything. The main problem is that I don't know where I should set the requirements for the event to be executed so that I can apply it to movie clips and sprites. Thanks in advance.
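
    The design question here (where the "hold detection" lives) is language-independent; the usual answer is that it belongs in a dispatcher or helper attached to the target, not in the Event subclass itself. A hedged Java/Swing analogy of that shape, with illustrative names only:

        import java.awt.event.MouseAdapter;
        import java.awt.event.MouseEvent;
        import javax.swing.JComponent;
        import javax.swing.Timer;

        // Sketch: attach "hold" detection to any component; the event stays
        // dumb, the helper owns the timer and the dispatch condition.
        final class HoldSupport {
            static void install(final JComponent target, final Runnable onHoldComplete) {
                final Timer timer = new Timer(1000, e -> onHoldComplete.run());
                timer.setRepeats(false);
                target.addMouseListener(new MouseAdapter() {
                    @Override public void mousePressed(MouseEvent e)  { timer.restart(); }
                    @Override public void mouseReleased(MouseEvent e) { timer.stop(); }
                });
            }
        }

    The same shape in ActionScript would be a helper that takes any DisplayObject, wires the MOUSE_DOWN/MOUSE_UP listeners and the Timer, and dispatches the custom event from the target when the hold completes.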

  • JavaCC: How can one exclude a string from a token? (A.k.a. understanding token ambiguity.)

    - by java.is.for.desktop
    Hello, everyone! I have had many problems understanding how ambiguous tokens can be handled elegantly (or at all) in JavaCC. Let's take this example: I want to parse an XML processing instruction. The format is "<?" <target> <data> "?>": target is an XML name, and data can be anything except ?>, because that is the closing tag. So, let's define this in JavaCC (I use lexical states, in this case DEFAULT and PROC_INST):

        TOKEN : <#NAME : (very-long-definition-from-xml-1.1-goes-here) >
        TOKEN : <WSS : (" " | "\t")+ >  // WSS = whitespaces
        <DEFAULT> TOKEN : {<PI_START : "<?" > : PROC_INST}
        <PROC_INST> TOKEN : {<PI_TARGET : <NAME> >}
        <PROC_INST> TOKEN : {<PI_DATA : ~[] >}  // accept everything
        <PROC_INST> TOKEN : {<PI_END : "?>" > : DEFAULT}

    Now the part which recognizes processing instructions:

        void PROC_INSTR() : {} {
          (
            <PI_START>
            (t=<PI_TARGET>) {System.out.println("target: " + t.image);}
            <WSS>
            (t=<PI_DATA>) {System.out.println("data: " + t.image);}
            <PI_END>
          ) {}
        }

    Let's test it with <?mytarget here-goes-some-data?>. The target is recognized: "target: mytarget". But now I get my favorite JavaCC parsing error:

        !! procinstparser.ParseException: Encountered "" at line 1, column 15.
        !! Was expecting one of:

    Encountered nothing? Was expecting nothing? Or what? Thank you, JavaCC! I know that I could use the MORE keyword of JavaCC, but that would give me the whole processing instruction as one token, so I'd have to parse/tokenize it further myself. Why should I do that? Am I writing a parser that does not parse?

    The problem is (I guess): since <PI_DATA> recognizes "everything", my definition is wrong. I should tell JavaCC to recognize "everything except ?>" as processing instruction data. But how can that be done?

    NOTE: I can only exclude single characters using ~["a"|"b"|"c"]; I can't exclude strings such as ~["abc"] or ~["?>"]. Another great anti-feature of JavaCC. Thank you.
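
    For contrast with the token-level approach: "everything until the first ?>" is exactly what a reluctant quantifier expresses in plain Java regular expressions. A small sketch (illustrative only; it sidesteps JavaCC rather than answering the grammar question):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class PiScan {
            // ".*?" is reluctant: it stops at the first "?>", which is the
            // "anything except the closing tag" rule the grammar wants.
            private static final Pattern PI =
                Pattern.compile("<\\?(\\S+)\\s+(.*?)\\?>", Pattern.DOTALL);

            public static void main(String[] args) {
                Matcher m = PI.matcher("<?mytarget here-goes-some-data?>");
                if (m.matches()) {
                    System.out.println("target: " + m.group(1));
                    System.out.println("data: " + m.group(2));
                }
            }
        }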

  • Generate MetaData with Ant

    - by Neil Foley
    I have a folder structure that contains multiple JavaScript files; each of these files needs a standard piece of text at the top:

        //@include "includes.js"

    Each folder needs to contain a file named includes.js that has an include entry for each file in its directory and an entry for the include file in its parent directory. I'm trying to achieve this using Ant and it's not going too well. So far I have the following, which does the job of inserting the header, but not without actually moving or copying the file. I have heard people mention the <replace> task for this, but am a bit stumped.

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="JavaContentAssist" default="start" basedir=".">
            <taskdef resource="net/sf/antcontrib/antcontrib.properties">
                <classpath>
                    <pathelement location="C:/dr_workspaces/Maven Repository/.m2/repository/ant-contrib/ant-contrib/20020829/ant-contrib-20020829.jar"/>
                </classpath>
            </taskdef>

            <target name="start">
                <foreach target="strip" param="file">
                    <fileset dir="${basedir}">
                        <include name="**/*.js"/>
                        <exclude name="**/includes.js"/>
                    </fileset>
                </foreach>
            </target>

            <target name="strip">
                <move file="${file}" tofile="${a_location}" overwrite="true">
                    <filterchain>
                        <striplinecomments>
                            <comment value="//@" />
                        </striplinecomments>
                        <concatfilter prepend="${basedir}/header.txt">
                        </concatfilter>
                    </filterchain>
                </move>
            </target>
        </project>

    As for the generation of the include files in each directory, I'm not sure where to start at all. I'd appreciate it if somebody could point me in the right direction.
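
    On the second half of the question: generating the includes.js files is also a small job in plain Java, if the Ant route stays awkward. A hedged sketch (the relative-path convention for the parent entry is my assumption):

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        public class GenIncludes {
            // Sketch: write an includes.js in each directory, listing one
            // @include per .js file there plus the parent's includes.js.
            static void gen(File dir, boolean hasParent) throws IOException {
                PrintWriter out = new PrintWriter(new FileWriter(new File(dir, "includes.js")));
                if (hasParent) out.println("//@include \"../includes.js\"");
                for (File f : dir.listFiles()) {
                    if (f.isDirectory()) {
                        gen(f, true);
                    } else if (f.getName().endsWith(".js") && !f.getName().equals("includes.js")) {
                        out.println("//@include \"" + f.getName() + "\"");
                    }
                }
                out.close();
            }

            public static void main(String[] args) throws IOException {
                gen(new File(args[0]), false); // root has no parent entry
            }
        }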

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native Winsock and I use the glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom.

    My problem is: this setup eats 25 percent CPU time on a 3 GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25 percent.

    Here are some code excerpts (sorry for the printf's ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if (WSAStartup(0x0202, &wsaData))
            {
                printf("WSAStartup failed!\n");
                exit(0); // :)
                WSACleanup();
            }

            sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock == INVALID_SOCKET)
            {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock, (struct sockaddr*) &target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
            {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode, and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()).

    I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.

  • Execute a jar file using Ant

    - by geetha
    I am trying to create a runnable jar file from Java classes using Ant. The Java classes use external jars. When I execute the build.xml, it throws a "class not found" exception while running the Java program; it compiles fine. Part of my build file:

        <path id="project-libpath">
            <fileset dir="${lib.dir}">
                <include name="*.jar"/>
            </fileset>
        </path>

        <path id="project-classpath">
            <fileset dir="C:/xmldecode/lib">
                <include name="*.jar"/>
            </fileset>
        </path>

        <target name="compile" depends="prepare">
            <javac srcdir="${src.dir}" destdir="${classes.dir}">
                <classpath refid="project-classpath"/>
            </javac>
        </target>

        <target name="jar" depends="compile">
            <copy todir="${classes.dir}">
                <fileset dir="C:/xmldecode/lib"/>
            </copy>
            <pathconvert property="mf.classpath" pathsep=";">
                <path refid="project-classpath" />
                <flattenmapper />
            </pathconvert>
            <jar destfile="${jar.dir}/${ant.project.name}.jar" basedir="${classes.dir}">
                <manifest>
                    <attribute name="Main-Class" value="${main-class}"/>
                    <attribute name="Class-Path" value="${mf.classpath}"/>
                </manifest>
            </jar>
        </target>

        <target name="run" depends="jar">
            <java jar="${jar.dir}/${ant.project.name}.jar" fork="true">
            </java>

  • Making an efficient algorithm

    - by James P.
    Here's my recent submission for the FB programming contest (the qualifying round only requires you to upload program output, so source code doesn't matter). The objective is to find two squares that add up to a given value. I've left it as it is, as an example. It does the job but is too slow for my liking. Here are the points that are obviously eating up time:

    - The list of squares is being recalculated for each call of getNumOfDoubleSquares(). This could be precalculated, or extended when needed.
    - Both squares are being checked for, when it is only necessary to check for one (complements).
    - There might be a more efficient way than a double-nested loop to find pairs.

    Other suggestions? Besides this particular problem, what do you look for when optimizing an algorithm?

        public static int getNumOfDoubleSquares(Integer target) {
            int num = 0;
            ArrayList<Integer> squares = new ArrayList<Integer>();
            ArrayList<Integer> found = new ArrayList<Integer>();
            int squareValue = 0;
            for (int j = 0; squareValue <= target; j++) {
                squares.add(j, squareValue);
                squareValue = (int) Math.pow(j + 1, 2);
            }
            int squareSum = 0;
            System.out.println("Target=" + target);
            for (int i = 0; i < squares.size(); i++) {
                int square1 = squares.get(i);
                for (int j = 0; j < squares.size(); j++) {
                    int square2 = squares.get(j);
                    squareSum = square1 + square2;
                    if (squareSum == target && !found.contains(square1) && !found.contains(square2)) {
                        found.add(square1);
                        found.add(square2);
                        System.out.println("Found !" + square1 + "+" + square2 + "=" + squareSum);
                        num++;
                    }
                }
            }
            return num;
        }
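
    To make the complement point concrete, here is a hedged sketch of the same count using one pass and a set lookup. It assumes the contest counts unordered pairs a² + b² with a <= b, and that target is a modest positive int; adjust if the scoring rules differ:

        import java.util.HashSet;
        import java.util.Set;

        // Sketch: count ways target = a*a + b*b with a <= b.
        static int countDoubleSquares(int target) {
            Set<Integer> squares = new HashSet<Integer>();
            for (int i = 0; i * i <= target; i++) {
                squares.add(i * i); // precompute once per target
            }
            int num = 0;
            for (int a = 0; 2 * a * a <= target; a++) { // a <= b avoids double counting
                if (squares.contains(target - a * a)) {
                    num++;
                }
            }
            return num;
        }

    The nested loop and the found list disappear; only the complement target - a*a is looked up.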

  • jQuery .load(), don't show new content until images loaded

    - by Jarred
    Hi. I have been working on a jQuery photo slideshow. It scales the images to the browser size and slides them left and right. There is no predetermined size or aspect ratio; the script does everything on the fly. It requires that all images be fully loaded so it can resize each individual image based on its own aspect ratio (width():height(), etc.), calculate the width of the containing div, and calculate the slide distance from one image to another.

    As a stand-alone it works pretty well (despite my lack of skills)! I simply hide the slideshow's containing div at (document).ready, allow the images to load, then run the slideshow prep scripts at (window).load. Once this is done, and only then, the slideshow divs, images, etc. appear: properly sized, positioned, and ready to roll.

    The ultimate goal is to be able to load in any number of slideshows without refreshing the page. The point of this is to be able to play uninterrupted background music. I know music on websites is annoying, but the target market likes it, a lot! I am using

        (target).load(page.php .element, function prepInsertNewShow() {
            // Prepare slideshow
            resizeImages();
            slideArray();
            // Show slideshow
            (target).fadeIn();
        });

    and it definitely works! The problem is that I cannot find a way to hold off on preparing and showing the new content until the images have finished loading. It is running the slideshow prep scripts (which are totally dependent on the images being fully loaded) before the images are loaded. This results in a completely jacked-up show! What I want to do is this:

        (target).load(page.php .element, function prepInsertNewShow() {
            // Wait until images are loaded
            $('img').load(function() {
                // Prepare slideshow
                resizeImages();
                slideArray();
                // Show slideshow
                (target).fadeIn();
            });
        });

    But this doesn't seem to work; the new content is never shown. You can see a live version here. The initial gallery loads correctly and everything looks good. The only nav link that works is Galleries Engagement, which will load a new show (a containing div with multiple <img> tags). You will see that the images are not centered, and that the containing div and slide distances are much too small, as they were calculated using images that were not actually loaded.

    Is there any way I can delay handling and showing new content until it is fully loaded? Any suggestions would be most appreciated, thanks for your time!

    PS - It just occurred to me while typing this that a decent solution may be to insert width="x" height="x" into the <img> tags, so the script can work from those values even if the images have not loaded... hmm...

  • How to configure multiple iSCSI Portal Groups on an EqualLogic PS6100?

    - by kce
    I am working on a migration from a VMware vSphere environment to a Hyper-V cluster utilizing Windows Server 2012 R2. The setup is pretty small: an EqualLogic PS6100e, two Dell PowerConnect 5424 switches, and a handful of R710s and R620s. The SAN was configured as a non-RFC1918 network that is not assigned to our organization, and since I am working on building a new virtualization environment, I figured this would be an appropriate time to do a subnet migration.

    I configured a separate VLAN and subnet on the switches and on the two previously unused NICs on the PS6100's controllers. At this time I only have a single Hyper-V host cabled in, but I can successfully ping the PS6100 from the host. From the PS6100 I can ping each of the four NICs that are currently on the storage network.

    I cannot connect the Microsoft iSCSI Initiator to the Target. I have successfully added the Target Portals (the IP addresses of the PS6100 NICs), and the Targets are discovered but listed as inactive. If I try to connect to them I get the error "Log onto Target - Connection Failed", and iScsiPrt 1 and 70 events are recorded in the Event Log. I have verified that access control to the volume is not the problem by temporarily disabling it.

    I suspect the problem is with the Portal Group IP address, which is still listed as the Group Address on the old subnet (I know, I know, I might be committing the sin of the X/Y problem, but everything else looks good). RFC 3720 has this to say about Network Portals and Portal Groups:

        Network Portal: The Network Portal is a component of a Network Entity that has a TCP/IP network address and that may be used by an iSCSI Node within that Network Entity for the connection(s) within one of its iSCSI sessions. A Network Portal in an initiator is identified by its IP address. A Network Portal in a target is identified by its IP address and its listening TCP port.

        Portal Groups: iSCSI supports multiple connections within the same session; some implementations will have the ability to combine connections in a session across multiple Network Portals. A Portal Group defines a set of Network Portals within an iSCSI Network Entity that collectively supports the capability of coordinating a session with connections spanning these portals. Not all Network Portals within a Portal Group need participate in every session connected through that Portal Group. One or more Portal Groups may provide access to an iSCSI Node. Each Network Portal, as utilized by a given iSCSI Node, belongs to exactly one portal group within that node.

    The EqualLogic Group Manager documentation has this to say about the Group IP Address:

        You use the group IP address as the iSCSI discovery address when connecting initiators to iSCSI targets in the group. If you modify the group IP address, you might need to change your initiator configuration to use the new discovery address. Changing the group IP address disconnects any iSCSI connections to the group and any administrators logged in to the group through the group IP address.

    Which sounds equivalent to me (I am following up with support to confirm). I think a reasonable explanation at this point is that the Initiator can't complete the connection to the Target because the Group IP Address / Network Portal is on a different subnet. I really want to avoid a cutover and would prefer to run both subnets side by side until I can install and configure each Hyper-V host.

    Questions: Is my assessment at all reasonable? Is it possible to configure multiple Group IP Addresses on the EqualLogic PS6100? I don't want to just change it, as that will disconnect the remaining ESXi hosts. Am I just Doing It Wrong(TM)?

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell).

    Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us answer these questions with a certain degree of confidence; that concerns how we collect data. In data mining, we focus on the use of data that has already been collected. In some cases, we may have all the data (all purchases made by all customers); in others, the data may have been collected using sampling (voters, their demographics, and candidate choice).

    Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than that of models built on a relatively small sample. Determining how much data is a reasonable amount involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1,000 - 10,000 cases (records), depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with the choice of algorithm, algorithm settings, data quality, and the need for further data preparation. Instead of waiting for a model on a large dataset to build, only to find that the results don't meet expectations, once you are satisfied with the results on the initial sample you can take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size.

    Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error, using data the model has not seen before. This sampling transformation is often called a split, because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing.

    Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples of a particular target value can never predict that target value! For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets.

    In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
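
    As a toy illustration of the build/test split described above, here is a hedged sketch in Java (the post itself does this with Oracle Database features; this generic version just shuffles and cuts):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.Random;

        // Sketch: random 60/40 split into build and test sets.
        static <T> List<List<T>> split(List<T> records, double buildFraction, long seed) {
            List<T> shuffled = new ArrayList<T>(records);
            Collections.shuffle(shuffled, new Random(seed)); // seed makes the split repeatable
            int cut = (int) Math.round(shuffled.size() * buildFraction);
            List<List<T>> parts = new ArrayList<List<T>>();
            parts.add(new ArrayList<T>(shuffled.subList(0, cut)));               // build (e.g., 60%)
            parts.add(new ArrayList<T>(shuffled.subList(cut, shuffled.size()))); // test (40%)
            return parts;
        }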

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration often becomes a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them - Skyvia. Skyvia is a cloud data integration service which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except for a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs.

    You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM, and such databases as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers, which allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without having to add any additional columns or change the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

    With Skyvia it's not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting - loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations; it builds the corresponding relations between the target data automatically.

    When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to an SQL Server database and apply the corresponding SQL Server tools to the data. In this case you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

    Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can be either downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox.

    Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization - just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently registration and use of Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts toward smart metering and energy conservation, in an article entitled "UK Blazing A Trail With Smart Metering At Home?" The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020.

    In fact, the UK is not the only country striving to achieve carbon reduction targets; many of its European counterparts have begun to take positive steps toward tackling the issue of energy conservation by implementing innovative new metering and billing technologies, as well as promoting alternative energy solutions such as wind and solar power.

    Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy.

    Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets.

    "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing."

    Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result.

    The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. However, the developing markets are lagging behind. One of the primary reasons for this is the lack of infrastructure to serve as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. On the other hand, these countries benefit from having less outdated infrastructure and fewer legacy systems, which are often cited by others as a difficult barrier to deploying new solutions. As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we've been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn't complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we thought it would be useful for people to see the changes and have an opportunity to comment on them.

    Before going any further, we should tell you what the EAP contains that's different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn't had a whole lot of testing, and there may have been new bugs introduced during the development work we've been doing.

    The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change call sites to display calls using named parameters; this looks like hard work to get right.

    The forthcoming version of C# introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#:

        dynamic target = MyTestObject();
        target.Hello("Mum");

    We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP.

    To support the dynamic features, we've added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without "Dynamic" in the name, reflect the fact that we don't have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change - in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile-time types. There's also the interface IDynamicVariableDeclaration, which can be used to determine if a particular variable is used as a target at dynamic call sites.

    A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level.

    Reflector Pro now has 4.0 as an optional compilation target, and we have done some work to get the pdb generation right for these new features.

    The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed.

    Please try it out and send your feedback to the EAP forum.

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called master-master or bidirectional) replication captures data changes from two or more systems and replicates the changes to synchronize the data. Active-active replication is often needed for high availability, load balancing, and scale-out purposes. Oracle GoldenGate is known as one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information):

    - Robust loop-back prevention
    - Comprehensive conflict detection and resolution support
    - Heterogeneous support across different database versions and operating systems. Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata.

    However, the setup differs from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances. The example below sets up active-active replication between a MySQL 5.5 and a MySQL 5.6 instance, as follows:

    MySQL 5.5 (Manager Port: 15105)

    Extract:

        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3305
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test
        USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index"
        FILTERTABLE test.checkpoint_tbl
        REPORTROLLOVER AT 05:30 ON saturday
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump:

        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Replicat:

        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3305
        targetdb test, userid root, password mysql
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        REPERROR (DEFAULT, ABEND)
        REPERROR (1062, IGNORE)
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, region_code="region code");
        map test.TCUSTORD, target test.TCUSTORD;

    MySQL 5.6 (Manager Port: 15106)

    Replicat:

        replicat demorp01
        setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        dboptions host oraclelinux6.localdomain, connectionport 3306
        targetdb test, userid root, password mysql
        --assumetargetdefs
        sourcedefs ./dirdat/replicat/democust.def
        discardfile ./dirrpt/demprp01.dsc, purge
        map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code);
        map test.TCUSTORD, target test.TCUSTORD;

    Extract:

        EXTRACT demoex01
        SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
        DBOPTIONS CONNECTIONPORT 3306
        DBOPTIONS HOST oraclelinux6.localdomain
        SOURCEDB test
        USERID root, PASSWORD mysql
        EXTTRAIL ./dirdat/extract/de
        TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index"
        FILTERTABLE test.checkpoint_tbl
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    Pump:

        EXTRACT demopm01
        RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30
        RMTTRAIL ./dirdat/replicat/ps
        PASSTHRU
        TABLE test.TCUSTMER;
        TABLE test.TCUSTORD;

    The setup parameters are quite self-explanatory. The key is to avoid replication data looping. Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions applied by Replicats, and thus avoids extracting those transactions with Oracle GoldenGate Extracts. The relevant setup in the Extract on the MySQL 5.5 instance is:

        TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index"
        FILTERTABLE test.checkpoint_tbl

    Setting up active-active replication is often more complicated than this and requires additional considerations, which I will elaborate on in follow-up discussions.

  • TI LaunchPad

    - by raysmithequip
    Just thought I would get a couple of notes up here for reference for anyone that is interested... it is now Feb 2011 and I have not been posting here enough to remember this blog. Back in Nov 2010 I ordered the TI LaunchPad MSP430. It is a little target board kit replete with a mini USB cable, two very inexpensive programmable MCUs, a couple of pin headers with a couple of LEDs on board, an SPI connector, some on-board jumpers, and two programmable micro switches... all for less than $5.00... INCLUDING SHIPPING!! Not bad when the Arduinos are running around $20.00 for the target board, ATmega328, and cable off of eBay... I won't even mention the Microchip PIC right now. Naw, for $5.00 the TI LaunchPad kit is about the cheapest fun around... if-uns you're a geek, that is...

    Well, the LaunchPad was back-ordered for almost two months; it came on Xmas eve, in fact... I had almost forgotten it!! And really, it was way late and not my idea of an Xmas present for myself. That would have been the Expression Web 4 I bought a few weeks back. With all the holidays, I did not even look at it till last week; in fact I passed the wrapped board around at my local ham club meeting during points of personal privilege... some oohs and ahhs but mostly duhs... I actually ordered it to avoid downloading the huge Code Composer Studio 4 (CCS) that was supposed to be included on the CD. No CD. I had already downloaded IAR, another programming IDE for these little micro bugs.

    In my spare time I toyed with IAR and the LaunchPad board, but after about two days of playing delete-the-driver with Windows, I decided to just download CCS 4, the code-limited version, and give that a shot... CCS 4 is a good rewrite from the earlier versions. It is based on Eclipse as an IDE and includes the drivers for the MSP430 target board I received in the kit. Once installed, I quickly configured the debugger for the target chip, which was already plugged into the DIP socket at the factory - an MSP430G2131, from the drop-down list - and clicked OK... I was in!!

    CCS 4 is full of bells and whistles compared to the IAR, which I would have preferred for its simplicity. But Code Composer Studio really does have it all!! The code-limited version is free, and of all things will give you a JavaScript editor box. The whole layout in debugger mode reminds me of any modern programmer IDE... I mean, sure, give me TeX anytime, but you simply must admire all the boxes and options included in the GUI. It was a simple matter to check the assembly code in the flash and RAM memory that came preloaded on the LaunchPad kit.

    Assembly. I am right now looking for my old assembly textbooks... sure, I remember how to use mov and add etc., but a couple of the commands are a little more than vague anymore. Still, these little MCUs are about 50 cents each and might just work in a couple of projects I have lined up for the near future. I may document the code here. Luckily, I plan to write the code in C++ for the main project, but if it has to be assembly, no prob. For reference, the program that came already loaded on the 2131 in the kit was a temperature indicator that alternately flashed red and green LEDs and changed the intensity of either depending on whether the temp was rising or falling... neat. Neat enough that it might be worthwhile banging out a little GUI in Windows 7 to test the new user device system calls, maybe put a temp gauge widget up on the desktop... just to keep from getting bored.

    If you see some assembly code on this blog, you know I was doing something with one of the many MCUs out there... that's all for now, more to follow... a bit later, of course.

  • Session Sharing with another User on *NIX and Windows

    - by Giri Mandalika
    Oracle Solaris

    Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example:

        % finger
        Login       Name        TTY      Idle    When       Where
        root        Super-User  pts/1            Sat 16:57  dhcp-amer-vpn-rmdc-a
        sunperf     ???         pts/2    4       Sat 16:41  pitcher.sfbay.sun.com

    In this example, two users, root and sunperf, are connected to the same system from two different terminals, pts/1 and pts/2 respectively. If the root user wants to show something to the sunperf user - what s/he is doing in her/his terminal, for example - it can be accomplished with the following command:

        script -a /dev/null | tee -a <target_terminal>

    e.g.,

        # script -a /dev/null | tee -a /dev/pts/2
        Script started, file is /dev/null
        #
        # uptime
          5:04pm  up 1 day(s), 2:56,  2 users,  load average: 0.81, 0.81, 0.81
        #
        # isainfo -v
        64-bit sparcv9 applications
                crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
                kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
        32-bit sparc applications
                crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
                kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
                v8plus div32 mul32
        #
        # exit
        Script done, file is /dev/null

    After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also, it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges.

    The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture onto the standard output of the target terminal. The tee utility does not buffer the output, so the screen capture from the initiator's terminal appears almost right away in the target terminal. Though I never tested it, this technique may work on all *NIX and Linux flavors with little or no change. There might also be other ways to accomplish this. [Thanks to Sujeet for sharing this tip]

    Microsoft Windows

    Most Windows users may rely on VNC services to share a desktop session. Another way to share the desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps:

    1. Connect to the target Windows system using the Remote Desktop Connection client.
    2. Launch Windows Task Manager.
    3. Navigate to the "Users" tab.
    4. Find the user session that you want to connect to and have full control over as the other user who is currently holding that session.
    5. Select the user name in Windows Task Manager, right-click, and choose the option "Remote Control".
    6. A window pops up on the other user's session with the message "<user> is requesting to control your session remotely. Do you accept the request?"

    Once the other user says "Yes", you will be granted access to that session. From then on, both users will see the same screen and can even control the session from their respective workstations.

  • Oracle GoldenGate 11g Release 2 Launch Webcast Replay Available

    - by Irem Radzik
    For those of you who missed the Oracle GoldenGate 11g Release 2 launch webcasts last week, the replay is now available from the following link:

        Harnessing the Power of the New Release of Oracle GoldenGate 11g

    I would highly recommend watching the webcast to see the many new features of the new release and hear the product management team respond to questions from the audience in a nice, long Q&A section. In my blog last week I listed the media coverage for this new release. There is a new article published by ITJungle talking about Oracle GoldenGate's heterogeneity and support for DB2 for iSeries: "Oracle Completes DB2/400 Support in Data Replication Tool".

    As mentioned in last week's blog, we received over 150 questions from the audience, and in this blog I'd like to continue to post some of the frequently asked questions and their answers:

    Question: What are the fundamental differences between classic data capture and integrated data capture? Do both use the redo logs in the source database?
    Answer: Yes, they both use redo logs. Classic capture parses the redo log data directly, whereas Integrated Capture lets the Oracle database parse the redo log record using an internal API.

    Question: Does the GoldenGate version need to match the Oracle Database version?
    Answer: No, they are not directly linked. Oracle GoldenGate 11g Release 2 supports Oracle Database version 10gR2 as well. For Oracle Database version 10gR1 and Oracle Database version 9i you will need GoldenGate 11g Release 1 or lower. And for Oracle Database 8i you need Oracle GoldenGate 10 or earlier versions.

    Question: If I already use Data Guard, do I need GoldenGate?
    Answer: Data Guard is designed as the best disaster recovery solution for Oracle Database. If you would like to implement a bidirectional active-active replication solution or need to move data between heterogeneous systems, you will need GoldenGate.

    Question: On compression and GoldenGate: if the source uses compression, is it required that the target also use compression?
    Answer: No, the source and target do not need to have the same compression settings.

    Question: Does GoldenGate support the Advanced Security Option on the source database?
    Answer: Yes, it does.

    Question: Can I use GoldenGate to upgrade the Oracle Database to 11g and do an OS migration at the same time?
    Answer: Yes, this is a very common project where GoldenGate can eliminate downtime, give flexibility to test the target as needed, and minimize risks with a fail-back option to the old environment. For more information on database upgrades please check out the following white papers: "Best Practices for Migrating/Upgrading Oracle Database Using Oracle GoldenGate 11g" and "Zero-Downtime Database Upgrades Using Oracle GoldenGate".

    Question: Does GoldenGate create any triggers in the source database, at table level or row level, for real-time data integration?
    Answer: No, GoldenGate does not create triggers.

    Question: Can transformation be done after insert to the destination table, or does it need to be done before?
    Answer: It can happen in the Capture (Extract) process, in the Delivery (Replicat) process, or in the target database.

    For more resources on Oracle GoldenGate 11gR2 please check out our Oracle GoldenGate 11gR2 resource kit as well.

  • Writing a method to 'transform' an immutable object: how should I approach this?

    - by Prog
    (While this question has to do with a concrete coding dilemma, it's mostly about the best way to design a function.)

    I'm writing a method that should take two Color objects and gradually transform the first Color into the second one, creating an animation. The method will live in a utility class. My problem is that Color is an immutable object: I can't do color.setRGB or color.setBlue inside a loop in the method. What I can do is instantiate a new Color and return it from the method, but then I won't be able to gradually change the color. So I thought of three possible solutions:

    1 - The client code includes the method call inside a loop. For example:

        int duration = 1500; // duration of the animation in milliseconds
        int steps = 20;      // how many 'cycles' the animation will take
        for (int i = 0; i < steps; i++)
            color = transformColor(color, targetColor, duration, steps);

    And the method would look like this:

        Color transformColor(Color original, Color target, int duration, int steps) {
            int redDiff = target.getRed() - original.getRed();
            int redAddition = redDiff / steps;
            int newRed = original.getRed() + redAddition;
            // same for green and blue ..
            Thread.sleep(duration / steps); // exception handling omitted
            return new Color(newRed, newGreen, newBlue);
        }

    The disadvantage of this approach is that the client code has to "do part of the method's job" and include a for loop. The method doesn't do its work entirely on its own, which I don't like.

    2 - Make a mutable Color subclass with methods such as setRed, and pass objects of this class into transformColor. Then it could look something like this:

        void transformColor(MutableColor original, Color target, int duration) {
            final int STEPS = 20;
            int redDiff = target.getRed() - original.getRed();
            int redAddition = redDiff / STEPS;
            int newRed = original.getRed() + redAddition;
            // same for green and blue ..
            for (int i = 0; i < STEPS; i++) {
                original.setRed(original.getRed() + redAddition);
                // same for green and blue ..
                Thread.sleep(duration / STEPS); // exception handling omitted
            }
        }

    Then the calling code would usually look something like this:

        // The method will usually transform colors of JComponents
        JComponent someComponent = ... ;

        // setting the Color in JComponent to be a MutableColor
        Color mutableColor = new MutableColor(someComponent.getForeground());
        someComponent.setForeground(mutableColor);

        // later, transforming the Color in the JComponent
        transformColor((MutableColor) someComponent.getForeground(), new Color(200, 100, 150), 2000);

    The disadvantages are the need to create a new class, MutableColor, and the need to do casting.

    3 - Pass into the method the actual mutable object that holds the color. Then the method could do object.setColor or similar on every iteration of the loop. Two disadvantages: A - Not so elegant; passing in the object that holds the color just to transform the color feels unnatural. B - While most of the time this method will be used to transform colors inside JComponent objects, other kinds of objects may have colors too. So the method would need to be overloaded to receive other types, or receive Objects and have instanceof checks inside... not optimal.

    Right now I like solution #2 the most, then solution #1, and solution #3 the least. However, I'd like to hear your opinions and suggestions regarding this.
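
    For what it's worth, there is a fourth shape that keeps Color immutable: compute the whole sequence of intermediate colors up front and let the caller step through it. A sketch (method and variable names are illustrative):

        import java.awt.Color;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch: pure function returning the animation frames; no mutation,
        // no sleeping inside the utility method.
        static List<Color> colorSteps(Color from, Color to, int steps) {
            List<Color> frames = new ArrayList<Color>(steps);
            for (int i = 1; i <= steps; i++) {
                float t = i / (float) steps; // interpolation fraction, 0..1
                frames.add(new Color(
                    Math.round(from.getRed()   + t * (to.getRed()   - from.getRed())),
                    Math.round(from.getGreen() + t * (to.getGreen() - from.getGreen())),
                    Math.round(from.getBlue()  + t * (to.getBlue()  - from.getBlue()))));
            }
            return frames;
        }

    A javax.swing.Timer could then apply one frame per tick with setForeground, which also keeps the Thread.sleep off the event dispatch thread.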

    Read the article

  • SCons does not clean all files

    - by meowsqueak
    I have a file system containing directories of "builds", each of which contains a file called "build-info.xml". However, some of the builds happened before the build script generated "build-info.xml", so for that case I have a somewhat non-trivial SCons SConstruct that generates a skeleton build-info.xml, which can then be used as a dependency for further rules. That is, for each directory:

    - If build-info.xml already exists, do nothing; more importantly, do not remove it on a 'scons --clean'.
    - If build-info.xml does not exist, generate a skeleton one instead (build-info.xml has no dependencies on any other files - the skeleton is essentially minimal defaults).
    - During a --clean, remove build-info.xml if it was generated; otherwise leave it be.

    My SConstruct looks something like this:

        def generate_actions_BuildInfoXML(source, target, env, for_signature):
            cmd = "python '%s/bin/create-build-info-xml.py' --version $VERSION --path . --output ${TARGET.file}" % (Dir('#').abspath,)
            return cmd

        bld = Builder(generator = generate_actions_BuildInfoXML, chdir = 1)
        env.Append(BUILDERS = { "BuildInfoXML" : bld })

        ...

        # VERSION = some arbitrary string, not important here
        # path = filesystem path, set elsewhere
        build_info_xml = "%s/build-info.xml" % (path,)
        if not os.path.exists(build_info_xml):
            env.BuildInfoXML(build_info_xml, None, VERSION = build)

    My problem is that 'scons --clean' does not remove the generated build-info.xml files. I played around with env.Clean(t, build_info_xml) inside the 'if', but I was unable to get this to work - mainly because I could not work out what to assign to 't'. I want a generated build-info.xml to be cleaned unconditionally rather than based on the cleaning of another target, and I wasn't able to get that to work. If I tried a simple env.Clean(None, build_info_xml) after but outside the 'if', I found that SCons would clean every single build-info.xml file, including those that weren't generated - not good either. What I'd like to know is how SCons goes about determining which files should be cleaned and which should not. Is there something funny about the way I've used a generator function that prevents SCons from recording this target as a clean candidate?
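    A rough sketch of one workaround (untested; the marker-file idea is an invented illustration, not an established SCons idiom). The 'if not os.path.exists(...)' guard is a likely culprit: once build-info.xml has been generated, the guard skips the builder on every later run, so on a 'scons --clean' run SCons is never told about the target and therefore never considers it a clean candidate. Recording which files were generated keeps the declaration stable across runs:

        import os

        generated_marker = build_info_xml + '.generated'  # invented marker path

        # Declare the builder both when the file is missing and when a previous
        # run generated it, so the target also exists on --clean runs.
        if not os.path.exists(build_info_xml) or os.path.exists(generated_marker):
            nodes = env.BuildInfoXML(build_info_xml, None, VERSION = build)
            # Remove the marker together with the generated file.
            env.Clean(nodes, [build_info_xml, generated_marker])
            if not env.GetOption('clean'):
                open(generated_marker, 'w').close()  # remember this file is ours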

    Read the article

  • MSBuild Script and VS2010 publish apply Web.config Transform

    - by Jason
    So, I have VS 2010 installed and am in the process of modifying my MSBuild script for our TeamCity build integration. Everything is working great, with one exception: how can I tell MSBuild that I want to apply the Web.config transform files I've created when I publish the build? I have the following, which produces the compiled web site, but it outputs all three files - Web.config, Web.Debug.config, and Web.Release.config - to the compiled output directory. In Studio, when I perform a publish to the file system, it does the transform and only outputs the Web.config with the appropriate changes.

        <Target Name="CompileWeb">
          <MSBuild Projects="myproj.csproj" Properties="Configuration=Release;" />
        </Target>

        <Target Name="PublishWeb" DependsOnTargets="CompileWeb">
          <MSBuild Projects="myproj.csproj" Targets="ResolveReferences;_CopyWebApplication"
                   Properties="WebProjectOutputDir=$(OutputFolder)$(WebOutputFolder); OutDir=$(TempOutputFolder)$(WebOutputFolder)\;Configuration=Release;" />
        </Target>

    Any help would be great! I know this can be done by other means, but I would like to do it the new VS 2010 way if possible.
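    One approach worth trying (a sketch only: the UsingTask assembly path matches a default VS 2010 install, and the output-folder properties are reused from the question as assumptions) is to invoke the TransformXml task from the VS 2010 web publishing targets yourself after the publish, then delete the leftover transform files from the drop:

        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />

        <Target Name="TransformWebConfig" AfterTargets="PublishWeb">
          <!-- apply the Release transform and write the result over the copied Web.config -->
          <TransformXml Source="Web.config"
                        Transform="Web.Release.config"
                        Destination="$(OutputFolder)$(WebOutputFolder)\Web.config" />
          <!-- remove the untransformed transform files from the drop -->
          <Delete Files="$(OutputFolder)$(WebOutputFolder)\Web.Debug.config;$(OutputFolder)$(WebOutputFolder)\Web.Release.config" />
        </Target>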

    Read the article

  • How to insert inline content from one FlowDocument into another?

    - by Robert Rossney
    I'm building an application that needs to allow a user to insert text from one RichTextBox at the current caret position in another one. I spent a lot of time screwing around with the FlowDocument's object model before running across this technique - source and target are both FlowDocuments:

        using (MemoryStream ms = new MemoryStream())
        {
            TextRange tr = new TextRange(source.ContentStart, source.ContentEnd);
            tr.Save(ms, DataFormats.Xaml);
            ms.Seek(0, SeekOrigin.Begin);
            tr = new TextRange(target.CaretPosition, target.CaretPosition);
            tr.Load(ms, DataFormats.Xaml);
        }

    This works remarkably well. The only problem I'm having with it now is that it always inserts the source as a new paragraph. It breaks the current run (or whatever) at the caret, inserts the source, and ends the paragraph. That's appropriate if the source actually is a paragraph (or more than one paragraph), but not if it's just (say) a line of text. I think it's likely that the answer is going to end up being: check whether the source consists entirely of a single block, and if it does, set the TextRange to point at the beginning and end of the block's content before saving it to the stream. The entire world of the FlowDocument is a roiling sea of dark mysteries to me. I can become an expert at it if I have to (per Dostoevsky: "Man is the animal who can get used to anything."), but if someone has already figured this out and can tell me how to do this it would make my life far easier.
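    A sketch of that single-block special case (untested, building directly on the snippet above; the Paragraph check and the inline-only range are assumptions to verify, and the same usings plus System.Windows.Documents are assumed): when the source holds exactly one paragraph, serialize a range over its inline content only, so Load() splices the text into the run at the caret instead of opening a new paragraph:

        using (MemoryStream ms = new MemoryStream())
        {
            // Range over the paragraph's content (no paragraph element) when the
            // source is a single paragraph; otherwise take the whole document.
            TextRange tr;
            Paragraph p = source.Blocks.FirstBlock as Paragraph;
            if (source.Blocks.Count == 1 && p != null)
                tr = new TextRange(p.ContentStart, p.ContentEnd);
            else
                tr = new TextRange(source.ContentStart, source.ContentEnd);

            tr.Save(ms, DataFormats.Xaml);
            ms.Seek(0, SeekOrigin.Begin);
            new TextRange(target.CaretPosition, target.CaretPosition).Load(ms, DataFormats.Xaml);
        }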

    Read the article

  • How to start an iPhone 3.1.3 project in Xcode 3.2.3 (iPhone SDK 4 beta)

    - by Zordid
    Hi there! I am having big problems since I downloaded the beta version of iPhone SDK 4.0. Okay, I only started to look at iPhone development a few weeks ago, but I cannot figure out how Xcode is supposed to work: whenever I start a new project, I choose a template like "View-based application" or so. Now, the target will always (at least I did not find a preference anywhere!) be the latest SDK: 4.0. But then, switching the target back to, say, 3.1.3, the template files seem to contain errors! Starting an empty application this way yields an exception: Terminating app due to uncaught exception 'NSUnknownKeyException', reason: [...] this class is not key value coding-compliant for the key rootViewController (sic). Now, my (stupid) question: how do I develop an application NOT targeting the latest SDK, but the standard 3.1.3 SDK? In other words, I would expect Xcode not only to ask for a project type in the New Project window, but also for my desired target! Am I right that the templates generated with this step are not valid for any other target than 4.0? How can that be? ...I want my Eclipse back! (sigh) Can anybody help me please?

    Read the article

  • How to properly reference a namespace for Microsoft.Sdc.Tasks.XmlFile.GetValue

    - by æther
    Hi, I want to use MSBuild to insert a custom XML element into web.config. After looking around online, I found this solution:

    1) Store the element in the .build file, inside ProjectExtensions:

        <ProjectExtensions>
          <CustomElement name="CustomElementName">
            ...
          </CustomElement>
        </ProjectExtensions>

    2) Retrieve the element with GetValue:

        <Target Name="ModifyConfig">
          <XmlFile.GetValue Path="$(MSBuildProjectFullPath)"
                            XPath="Project/ProjectExtensions/CustomElement[@name='CustomElementName']">
            <Output TaskParameter="Value" PropertyName="CustomElementProperty"/>
          </XmlFile.GetValue>
        </Target>

    This does not work, because I need to reference the namespace the .build project is using for it to find the needed element (I checked the .build file with XPath Visualizer). So I looked for a further solution and came to this:

        <ItemGroup>
          <XmlNamespace Include="MSBuild">
            <Prefix>msb</Prefix>
            <Uri>http://schemas.microsoft.com/developer/msbuild/2003</Uri>
          </XmlNamespace>
        </ItemGroup>

        <Target Name="ModifyConfig">
          <XmlFile.GetValue Path="$(MSBuildProjectFullPath)"
                            Namespaces="$(XmlNamespace)"
                            XPath="/msb:Project/msb:ProjectExtensions/msb:CustomElement[@name='CustomElementName']">
            <Output TaskParameter="Value" PropertyName="CustomElementProperty"/>
          </XmlFile.GetValue>
        </Target>

    But for some reason the namespace is not recognized - MSBuild reports the following error:

        C:...\mybuild.build(53,9): error : A task error has occured.
        C:...\mybuild.build(53,9): error : Message = Namespace prefix 'msb' is not defined.

    I tried some variations of referencing it differently, but none work, and there is not much online about properly referencing those namespaces either. Can you tell me what I am doing wrong and how to do it properly?
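    One thing that stands out (an educated guess, not a verified fix for the Sdc task): XmlNamespace is declared as an item, but Namespaces="$(XmlNamespace)" uses property syntax, which expands to an empty value, so the task never sees the prefix/URI metadata. Items are referenced with @(...). A sketch with just that attribute changed:

        <Target Name="ModifyConfig">
          <!-- @(...) references the item group; $(...) is for properties -->
          <XmlFile.GetValue Path="$(MSBuildProjectFullPath)"
                            Namespaces="@(XmlNamespace)"
                            XPath="/msb:Project/msb:ProjectExtensions/msb:CustomElement[@name='CustomElementName']">
            <Output TaskParameter="Value" PropertyName="CustomElementProperty"/>
          </XmlFile.GetValue>
        </Target>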

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >