Search Results

Search found 11759 results on 471 pages for 'isolation level'.


  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features and my theories (some of my theories are untested and based on thought experiments):

    1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition, or is it just hard, anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

    2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee, i.e. it is easy to consider inlining simple property fetches (getters/setters), but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective.

    3) First-class continuations. There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other implementing the runtime to use continuation-passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource-management issues, i.e. we must save everything in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also affect cache efficiency (stack locality changes more with the use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct.

    4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.

    My feeling is that many of the high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing the top of stack to a register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up/cache at runtime; nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
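
    As a rough illustration of the dispatch problem in point 1, here is a hypothetical Java sketch of what a runtime with signature-based dispatch has to do on every call, and how a per-call-site (monomorphic) cache can short-circuit the lookup. The MultiMethodSketch, GenericFunction and CallSite names are invented for illustration; real VMs add guards, polymorphic caches and deoptimization on top of this idea.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.BinaryOperator;

        // Hypothetical illustration of multi-method dispatch with a one-entry call-site cache.
        public class MultiMethodSketch {

            // One concrete implementation of a generic function, selected by runtime argument types.
            static final class Impl {
                final Class<?> left, right;
                final BinaryOperator<Object> body;

                Impl(Class<?> left, Class<?> right, BinaryOperator<Object> body) {
                    this.left = left;
                    this.right = right;
                    this.body = body;
                }

                boolean matches(Object a, Object b) {
                    return left.isInstance(a) && right.isInstance(b);
                }
            }

            // A "generic function" holding all registered implementations.
            static class GenericFunction {
                final List<Impl> impls = new ArrayList<>();

                Impl lookup(Object a, Object b) {
                    for (Impl impl : impls) {               // full (slow) signature search
                        if (impl.matches(a, b)) return impl;
                    }
                    throw new IllegalArgumentException("no applicable method");
                }
            }

            // A call site remembers the last successful lookup (a monomorphic inline cache).
            static class CallSite {
                final GenericFunction fn;
                Class<?> cachedLeft, cachedRight;           // guard
                Impl cachedImpl;                            // cached target

                CallSite(GenericFunction fn) { this.fn = fn; }

                Object invoke(Object a, Object b) {
                    if (a.getClass() == cachedLeft && b.getClass() == cachedRight) {
                        return cachedImpl.body.apply(a, b); // cache hit: skip the search
                    }
                    Impl impl = fn.lookup(a, b);            // cache miss: slow path, then refill
                    cachedLeft = a.getClass();
                    cachedRight = b.getClass();
                    cachedImpl = impl;
                    return impl.body.apply(a, b);
                }
            }

            public static void main(String[] args) {
                GenericFunction plus = new GenericFunction();
                plus.impls.add(new Impl(Integer.class, Integer.class, (a, b) -> (Integer) a + (Integer) b));
                plus.impls.add(new Impl(String.class, Object.class, (a, b) -> a + String.valueOf(b)));

                CallSite site = new CallSite(plus);
                System.out.println(site.invoke(1, 2));      // 3, slow path then cached
                System.out.println(site.invoke(3, 4));      // 7, cache hit
                System.out.println(site.invoke("x=", 5));   // "x=5", cache invalidated and refilled
            }
        }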

    Read the article

  • Future of VB.NET? [closed]

    - by Alex Yeung
    Hi all, I worked with C# for years. Last year, I changed my job and the new company uses VB.NET. Of course, theoretically C# and VB.NET are very similar, and I adapted easily. However, I have now worked with VB.NET for a year and I cannot see any future for it. As a programming language, it is so foo. Here is a list of what C# can do but VB.NET cannot:

    Case-insensitive variables: how do I think up a new variable name? If I have a property called FolderPath, I need to establish another private variable called _folderPath or m_folderPath. In C#, FolderPath and folderPath are two variables. Moreover, it is a compile error if a variable name is the same as a class name, for example Dim guid = Guid.NewGuid(). (What the...) Again, I need to think up a new variable name.

    Ad-hoc scope: in C#, we can use {...} to create an ad-hoc scope, and all resources in {...} will not affect the code outside. There is no such syntax in VB.NET. I can only use If True Then and End If to make a local scope, which is unclear.

    In-method region: sometimes it is unavoidable to have a long, long method. VB.NET does not support in-method regions. I always need to scroll down for 1000 lines. It wastes my time.

    No multi-line string definition: in C#, we can write var s = @"..." to define a multi-line string. In VB.NET there is no direct way to do that. The indirect way is to use an XML-literal string: Dim s = <![CDATA[...]]>.Value. However, it is unclear.

    No block comment: in C#, we have the line comment // and the block comment /* ... */. In VB.NET we only have the line comment, which is a very big problem for me.

    No statement end symbol: statements are separated by line breaks in VB.NET, while statements are separated by ; in C#.

    Underscore: I think many people know the underscore _ is the line-continuation symbol. I really disagree with that. I know the MS VB.NET language team is going to remove the underscore syntax from VB.NET, but what can we do now? And even when the underscore is removed in the future, what's the advantage of that? I cannot see any advantage!

    With scope: With is an evil scope. Although it allows shorter statements, it is hard to trace.

    Default namespace at the project level: it is a nightmare for me.

    The only advantage of VB.NET is property initialization; I think C# cannot do that (correct me if I am wrong): Public Property ThisIsMyProperty As String = "MyValue". Remark: I don't think optional method parameters are an advantage of OOP. Given those disadvantages, I cannot see the future of VB.NET. Does anyone see a future for VB.NET?

    Read the article

  • Append <ul> and <li> elements in a recursive loop

    - by Batman
    I have a site collection. I was told I need a recursive loop to do this. This is what I've tried: when the site loads, call getSiteTree(), which passes the top-level website to my getSubSite() function. From there I check if there are any subsites. I have a boolean but I'm not really using it for anything yet; I've just seen it used before for this type of work. Anyway, from there I check if there are any sub sites; if not, I log the end of the branch; if there are, I call the function again using the new URL and repeat the process. Looking at my console, it seems to work as intended.

        function getSiteTree(){
            var tree = $('#treeviewList');
            var rootsite = window.location.protocol + "//" + window.location.hostname;
            var siteEnd = false;
            getSubSite(rootsite);
        }

        function getSubSite(url){
            $().SPServices({
                operation: "GetWebCollection",
                webURL: url,
                async: true,
                completefunc: function(xData, Status) {
                    var siteUrl;
                    var siteCount = $(xData.responseXML).find("Web").length;
                    if(siteCount == 0){
                        console.log("end of branch");
                        siteEnd = true;
                    }else{
                        $(xData.responseXML).find("Web").each(function() {
                            siteUrl = $(this).attr("Url");
                            console.log(siteUrl);
                            getSubSite(siteUrl);
                        });
                    }
                }
            });
        }

    My question: now that I have my sites, I need to take them and create something like this, but I'm not sure how to accomplish it.

        <li>Site 1
          <ul>
            <li>sub 1.1</li>
            <li>sub 1.2</li>
            <li>sub 1.3</li>
            <ul>
              <li>1.3.1</li>
            </ul>
            <li>sub 1.4</li>
            <li>sub 1.5</li>
          </ul>
        </li>
        <li>Site 2
          <ul>
            <li>sub 2.1</li>
            <li>sub 2.2</li>
            <li>sub 2.3</li>
            <ul>
              <li>2.3.1</li>
              <li>2.3.2</li>
            </ul>
          </ul>
        </li>
        </ul>

    I have this initial HTML:

        <div id="treeviewDiv" style="width:200px;height:150px;overflow:scroll">
          <ul id="treeviewList"></ul>
        </div>

    Read the article

  • Limiting TCP sends with a "to-be-sent" queue and other design issues.

    - by Poni
    Hello all! This question is the result of two other questions I've asked in the last few days. I'm creating a new question because I think it's related to the "next step" in my understanding of how to control the flow of my send/receive, something I didn't get a full answer to yet. The other related questions are:

        http://stackoverflow.com/questions/3028376/an-iocp-documentation-interpretation-question-buffer-ownership-ambiguity
        http://stackoverflow.com/questions/3028998/non-blocking-tcp-buffer-issues

    In summary, I'm using Windows I/O Completion Ports. I have several threads that process notifications from the completion port. I believe the question is platform-independent and would have the same answer as doing the same thing on a *nix, *BSD or Solaris system.

    So, I need to have my own flow-control system. Fine. So I send and send and send, a lot. How do I know when to start queueing the sends, as the receiver side is limited to X amount?

    Let's take an example (the closest thing to my question): the FTP protocol. I have two servers; one is on a 100Mb link and the other is on a 10Mb link. I order the 100Mb one to send a 1GB file to the other one (the 10Mb-linked one). It finishes with an average transfer rate of 1.25MB/s. How did the sender (the 100Mb-linked one) know when to hold the sending, so the slower one wouldn't be flooded? Another way to ask this: can I get a "hold-your-sendings" notification from the remote side? Is it built into TCP, or does the so-called "reliable network protocol" need me to do so?

    Again, I have a loop with many sends to a remote server, and at some point within that loop I'll have to determine whether I should queue that send, or whether I can pass it on to the transport layer (TCP). How do I do that? What would you do? Of course, when I get a completion notification from IOCP that the send was done, I'll issue other pending sends; that's clear.

    Another design question related to this: since I am to use custom buffers with a send queue, and these buffers are freed to be reused (thus not using the "delete" keyword) when a "send-done" notification has arrived, I'll have to use mutual exclusion on that buffer pool. Using a mutex slows things down, so I've been thinking: why not have each thread keep its own buffer pool? Accessing it, at least when getting the required buffers for a send operation, will then require no mutex, because it belongs to that thread only. The buffer pool is located at the thread local storage (TLS) level. No shared pool implies no lock needed, which implies faster operations, BUT also implies more memory used by the app, because even if one thread has already allocated 1000 buffers, another one that is sending right now and needs 1000 buffers will have to allocate its own.

    This is a long question and I hope none got hurt (: Thank you all!
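
    Since the poster says the question is platform-independent, here is a minimal Java sketch of the application-level flow-control idea rather than IOCP specifics: TCP's own receive window is what keeps the 100Mb box from flooding the 10Mb box, so what the application has to bound is the number of its own outstanding asynchronous sends, for example with a counting semaphore released in the "send-done" completion. The Transport interface and all names below are invented for illustration; they stand in for WSASend/IOCP or any async socket write.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Semaphore;
        import java.util.concurrent.TimeUnit;

        // Sketch: bound the number of in-flight asynchronous sends with a semaphore.
        public class BoundedSender {

            interface Transport {
                // Pretend async send; calls onComplete when the transport has taken the buffer.
                void sendAsync(byte[] buffer, Runnable onComplete);
            }

            private final Transport transport;
            private final Semaphore inFlight;   // permits = max sends outstanding at once

            BoundedSender(Transport transport, int maxInFlight) {
                this.transport = transport;
                this.inFlight = new Semaphore(maxInFlight);
            }

            // Blocks (i.e. stops producing) once maxInFlight sends are outstanding.
            void send(byte[] buffer) throws InterruptedException {
                inFlight.acquire();                              // back-pressure point
                transport.sendAsync(buffer, inFlight::release);  // completion frees a slot
            }

            public static void main(String[] args) throws Exception {
                ExecutorService fakeCompletions = Executors.newSingleThreadExecutor();
                // A toy transport that "completes" each send after 10 ms.
                Transport slow = (buf, done) -> fakeCompletions.submit(() -> {
                    try { Thread.sleep(10); } catch (InterruptedException ignored) { }
                    done.run();
                });

                BoundedSender sender = new BoundedSender(slow, 4);  // at most 4 outstanding sends
                for (int i = 0; i < 20; i++) {
                    sender.send(new byte[1024]);
                    System.out.println("queued send " + i);
                }
                fakeCompletions.shutdown();
                fakeCompletions.awaitTermination(5, TimeUnit.SECONDS);
            }
        }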

    Read the article

  • Apache/PHP serving a file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans - times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58!

    I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different than above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

        Request time    Complete time
        04:32:43        04:33:36
        04:32:50        04:33:36
        04:32:51        04:33:38
        04:33:05        04:33:38
        04:33:34        04:33:42
        04:33:05        04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing the evidence of some server-level overload as it plays out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php . It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if this press+125.php is a rabbit hole, but there is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

        ///DOWNLOAD.php
        $file = new files();
        $file->comparison_filter("id", "=", $id); //sql to load
        if ($file->load()) {
            $file->serve();
        }

        //FILES
        function serve() {
            if ($this->is_loaded) {
                if (file_exists($this->get_value("filename"))) {
                    if ($this->get_value("content_type") != "") {
                        header("Content-Type: " . $this->get_value("content_type"));
                    }
                    header("Content-Length: " . filesize($this->get_value("filename")));
                    if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                        header("Cache-Control: private");
                        header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                    }
                    set_time_limit(0);
                    @readfile($this->get_value("filename"));
                    exit;
                }
            }
        }

    Read the article

  • Source code versioning with comments (organizational practice) - leave or remove?

    - by ADTC
    Before you start admonishing me with "DON'T DO IT," "BAD PRACTICE!" and "Learn to use proper source code control", please hear me out first. I am fully aware that the practice of commenting out old code and leaving it there forever is very bad and I hate such practice myself. But here's the situation I'm in. A few months ago I joined a company as software developer. I had worked in the company for few months as an intern, about a year before joining recently. Our company uses source code version control (CVS) but not properly. Here's what happened both in my internship and my current permanent position. Each time I was assigned to work on a project (legacy, about 8-10 years old). Instead of creating a CVS account and letting me check out code and check in changes, a senior colleague exported the code from CVS, zipped it up and passed it to me. While this colleague checks in all changes in bulk every few weeks, our usual practice is to do fine-grained versioning in the actual source code itself (each file increments in versions independent from the rest). Whenever a change is made to a file, old code is commented out, new code entered below it, and this whole section is marked with a version number. Finally a note about the changes is placed at the top of the file in a section called Modification History. Finally the changed files are placed in a shared folder, ready and waiting for the bulk check-in. /* * Copyright notice blah blah * Some details about file (project name, file name etc) * Modification History: * Date Version Modified By Description * 2012-10-15 1.0 Joey Initial creation * 2012-10-22 1.1 Chandler Replaced old code with new code */ code .... //v1.1 start //old code new code //v1.1 end code .... Now the problem is this. In the project I'm working on, I needed to copy some new source code files from another project (new in the sense that they didn't exist in destination project before). These files have a lot of historical commented out code and comment-based versioning including usually long or very long Modification History section. Since the files are new to this project I decided to clean them up and remove unnecessary code including historical code, and start fresh at version 1.0. (I still have to continue the practice of comment-based versioning despite hating it. And don't ask why not start at version 0.1...) I have done similar something during my internship and no one said anything. My supervisor has seen the work a few times and didn't say I shouldn't do such clean-up (if at all it was noticed). But a same-level colleague saw this and said it's not recommended as it may cause downtime in the future and increase maintenance costs. An example is when changes are made in another project on the original files and these changes need to be propagated to this project. With code files drastically different, it could cause confusion to an employee doing the propagation. It makes sense to me, and is a valid point. I couldn't find any reason to do my clean-up other than the inconvenience of a ridiculously messy code. So, long story short: Given the practice in our company, should I not do such clean-up when copying new files from project to project? Is it better to make changes on the (copy of) original code with full history in comments? Or what justification can I give for doing the clean-up? PS to mods: Hope you allow this question some time even if for any reason you determine it to be unfit in SO. I apologize in advance if anything is inappropriate including tags.

    Read the article

  • Logback and third-party code writing to stdout: how to stop them getting interleaved

    - by David Roussel
    First, some background. I have a batch-type Java process run from a DOS batch script. All the Java logging goes to stdout, and the batch script redirects stdout to a file. (This is good for me because I can ECHO from the script and it gets into the log file, so I can see all the Java JVM command line args, which is great for debugging.) I might note that I use the slf4j API, and for the backend I used to use log4j, but recently switched to logback-classic. Although all my application code uses slf4j, I have a third-party library that does its own logging (not using a standard API) which also gets written to stdout.

    The problem is that sometimes log lines get mixed up and don't cleanly appear on separate lines. Here is an example of some messed-up output:

        2010-05-28 18:00:44.783 [thread-1 ] INFO CreditCorrelationElementBuilderImpl - Bump parameters exist for scenario, now attempting bumping. [indexDisplayName=STANDARD_S1_v300]
        2010-05-28 18:01:43.517 [thread-1 ] INFO CreditCorrelationElementBuilderImpl - Found adjusted point in data, now applying bump. [point=0.144040000000000]
        2010-05-28 18:01:58.642 [thread-1 ] DEBUG com.company.request.Request - Generated request for [dealName=XXX_20050225_01[5],dealType=GENERIC_XXX,correlationType=2,copulaType=1] in 73.8 s, Simon Stopwatch: [sys1.batchpricer.reqgen.gen INHERIT] total 1049 s, counter 24, max 74.1 s, min 212 ms
        2010-05-28 18:05/28/10 18:02:20.236 INFO: [ServiceEvent] SubmittedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23 01:58.658 [req-writer-2b ] INFO .c.g.r.o.OptionalFileDocumentOutput - Writing request XML to \\filserver\dir\file1.xml - write time: 21.4 ms - Simon Stopwatch: [sys1.batchpricer.reqgen.writeinputfile INHERIT] total 905 ms, counter 24, max 109 ms, min 10.8 ms
        2010-05-28 18:02:33.626 [ResponseCallbacks-1: DriverJobSpace$TakeJobRunner$1] ERROR c.c.s.s.D.CalculatorCallback - Id:23 no deal found !!
        2010-0505/28/10 18:02:50.267 INFO: [ServiceEvent] CompletedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23:Total:24

    Now, comparing back to older log files, it seems the problem didn't occur when using log4j as the logging backend, so logback must be doing something different. The problem seems to be that although PrintStream.write(byte buf[], int off, int len) is synchronized, I can see in ch.qos.logback.core.joran.spi.ConsoleTarget that System.out.write(int b) is the only write method called. So in between logback outputting each byte, the third-party library manages to write a whole string to stdout. (Not only does this cause me a problem, but it must also be a little inefficient?)

    Is there any other fix to this interleaving problem than patching the code of ConsoleTarget so it implements the other write methods? Any nice workarounds? Or should I just file a bug report?

    Here is my logback.xml:

        <configuration>
          <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
              <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%-16thread] %-5level %-35.35logger{30} - %msg%n</pattern>
            </encoder>
          </appender>
          <root level="DEBUG">
            <appender-ref ref="STDOUT" />
          </root>
        </configuration>

    I'm using logback 0.9.20 with Java 1.6.0_07.
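
    Rather than patching ConsoleTarget, one hedged workaround sketch is to replace System.out with a stream that buffers each thread's output until a newline and then writes the whole line under a lock, so byte-at-a-time writers and whole-string writers cannot interleave mid-line. The class name and install() helper below are invented; the single-byte-encoding assumption is noted in the comments.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.io.PrintStream;

        // Workaround sketch (not a logback patch): funnel everything that goes to System.out
        // through a stream that buffers per thread and emits only whole lines.
        public class LineAtomicStdout extends OutputStream {

            private final OutputStream target;
            // Each thread accumulates its own partial line.
            private final ThreadLocal<StringBuilder> line =
                    ThreadLocal.withInitial(StringBuilder::new);

            public LineAtomicStdout(OutputStream target) {
                this.target = target;
            }

            @Override
            public void write(int b) throws IOException {
                StringBuilder buf = line.get();
                buf.append((char) b);                  // note: assumes a single-byte encoding
                if (b == '\n') {
                    byte[] bytes = buf.toString().getBytes();
                    synchronized (target) {            // one complete line per write
                        target.write(bytes, 0, bytes.length);
                        target.flush();
                    }
                    buf.setLength(0);
                }
            }

            @Override
            public void write(byte[] b, int off, int len) throws IOException {
                for (int i = off; i < off + len; i++) {
                    write(b[i]);                       // reuse the per-thread line buffering
                }
            }

            public static void install() {
                PrintStream original = System.out;     // keep a handle on the real console stream
                System.setOut(new PrintStream(new LineAtomicStdout(original), true));
            }

            public static void main(String[] args) throws Exception {
                install();
                // Two writers hammering stdout; lines come out whole.
                Thread a = new Thread(() -> { for (int i = 0; i < 100; i++) System.out.println("logger line " + i); });
                Thread b = new Thread(() -> { for (int i = 0; i < 100; i++) System.out.println("third-party line " + i); });
                a.start(); b.start();
                a.join(); b.join();
            }
        }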

    Read the article

  • Objective-C novice needs help with global variables and a setter method

    - by user544006
    I am creating a quiz app which has 2 views, MMAppViewController and a subview Level1View. I have declared NSInteger property "theScore" in the MMAppViewController view and have synthesised it. In my Level1View when they answer a correct question the "theScore" int will increase by one. The score has to be a global variable because when you reach so many points it will unlock the next level. For some reason in my switch statement it only lets me use the setTheScore method once. I am getting errors for every other set method in the switch statement. Error: "Duplicate label setTheScore". The statement is in the pushButtonAnswer method: setTheScore: theScore++; Here is my code: #import "Level1View.h" #import "MMAppViewController.h" @implementation Level1View @synthesize answer; @synthesize question; @synthesize userAnswer; @synthesize theScore; @synthesize score; int questionNum=0; NSInteger score=0; NSInteger theScore; BOOL start=FALSE; BOOL optionNum=FALSE; -(IBAction)pushBack{ [self dismissModalViewControllerAnimated:YES]; } -(IBAction)pushButton1{ optionNum=TRUE; labelAnswer.textColor=[UIColor blackColor]; userAnswer=@"1"; [labelAnswer setText:(@"You chose 'A'")]; } -(IBAction)pushButton2{ optionNum=TRUE; labelAnswer.textColor=[UIColor blackColor]; userAnswer=@"2"; [labelAnswer setText:(@"You chose 'B'")]; } -(IBAction)pushButton3{ optionNum=TRUE; labelAnswer.textColor=[UIColor blackColor]; userAnswer=@"3"; [labelAnswer setText:(@"You chose 'C'")]; } -(IBAction)pushButtonAnswer{ labelAnswer.textColor=[UIColor blackColor]; switch (questionNum){ case 1: if(answer==userAnswer && optionNum==TRUE){ labelAnswer.textColor=[UIColor greenColor]; [labelAnswer setText:(@"correct")]; [self hideButtons]; score++; [self setTheScore: theScore++]; } else if(optionNum==FALSE){ [labelAnswer setText:(@"Please choose an answer below:")];} else{ labelAnswer.textColor=[UIColor redColor]; [labelAnswer setText:(@"wrong")];} [self hideButtons]; break; case 2: if(answer==userAnswer && optionNum==TRUE){ labelAnswer.textColor=[UIColor greenColor]; [labelAnswer setText:(@"correct")]; [self hideButtons]; score++; [self setTheScore: theScore++]; } else if(optionNum==FALSE){ [labelAnswer setText:(@"Please choose an answer below:")];} else{ labelAnswer.textColor=[UIColor redColor]; [labelAnswer setText:(@"wrong")]; [self hideButtons];} break; case 3: if(answer==userAnswer && optionNum==TRUE){ labelAnswer.textColor=[UIColor greenColor]; .... And #import "MMAppViewController.h" #import "Level1View.h" @implementation MMAppViewController @synthesize theScore; NSInteger score; -(IBAction)pushLevel1{ Level1View *level1View = [[Level1View alloc] initWithNibName:nil bundle:nil]; [self presentModalViewController:level1View animated:YES]; theScore++; } -(IBAction)pushLevel2{ //Level1View *level1View = [[Level1View alloc] initWithNibName:nil bundle:nil]; //[self presentModalViewController:level1View animated:YES]; NSInteger *temp = Level1View.score; //int theScore=2; [labelChoose setText:[NSString stringWithFormat:@"You scored %i", theScore]]; } Does anyone know why i am getting these errors and if I am coding this correctly?

    Read the article

  • R data.frame with stacked column titles for LaTeX output with xtable

    - by hhh
    > w<-data.frame(c(0,0,1,1.3,2.1), c(0,0.6,0.9,1.6091,1.6299), c(258,141,206.4,125.8,140.5), c(162,162.7,162.4,162,162)) > colnames(w) <- c('Worst Cum', 'Best Cum', 'Worst Points', 'Best Points' ) Wrong (the code) Worst Cum Best Cum Worst Points Best Points 1 0.0 0.0000 258.0 162.0 2 0.0 0.6000 141.0 162.7 3 1.0 0.9000 206.4 162.4 4 1.3 1.6091 125.8 162.0 5 2.1 1.6299 140.5 162.0 Goal: how? CUM Points Worst Best Worst Best 1 0.0 0.0000 258.0 162.0 2 0.0 0.6000 141.0 162.7 3 1.0 0.9000 206.4 162.4 4 1.3 1.6091 125.8 162.0 5 2.1 1.6299 140.5 162.0 Trial 1: fail with many data.frames > a<-data.frame(c(0,0,1,1.3,2.1), c(0,0.6,0.9,1.6091,1.6299)) > b<-data.frame(c(258,141,206.4,125.8,140.5), c(162,162.7,162.4,162,162)) > c<-data.frame(cbind(a,b)) > colnames(c) <- c('Cum', 'Points') > colnames(a) <- c('Worst', 'Best') > colnames(b) <- c('Worst', 'Best') and > xtable(c) % latex table generated in R 2.13.1 by xtable 1.6-0 package % Thu Nov 24 03:43:34 2011 \begin{table}[ht] \begin{center} \begin{tabular}{rrrrr} \hline & Cum & Points & NA & NA \\ \hline 1 & 0.00 & 0.00 & 258.00 & 162.00 \\ 2 & 0.00 & 0.60 & 141.00 & 162.70 \\ 3 & 1.00 & 0.90 & 206.40 & 162.40 \\ 4 & 1.30 & 1.61 & 125.80 & 162.00 \\ 5 & 2.10 & 1.63 & 140.50 & 162.00 \\ \hline \end{tabular} \end{center} \end{table} > xtable(a) % latex table generated in R 2.13.1 by xtable 1.6-0 package % Thu Nov 24 03:45:06 2011 \begin{table}[ht] \begin{center} \begin{tabular}{rrr} \hline & Worst & Best \\ \hline 1 & 0.00 & 0.00 \\ 2 & 0.00 & 0.60 \\ 3 & 1.00 & 0.90 \\ 4 & 1.30 & 1.61 \\ 5 & 2.10 & 1.63 \\ \hline \end{tabular} \end{center} \end{table} It is wrong because it replaces the inner headers with higher-level header nb "NA" vals.

    Read the article

  • Dynamically include zipfilesets in a WAR

    - by Konstantin
    hi all - a bit of clumsy situation but for the moment we cannot migrate to more straight-forward project layout. We have a project called myServices and it has 2 source folders (yes, I know, but that's the way it is for now) - I'm trying to make build process a bit more flexible so we now have a property called artifact.names that will be parsed by generic build.xml and based on name, it will call either jar or war task, eg. myService-war will create a WAR file with the following zipfilesets included there: myService-war-classes, myService-war-web-inf, myService-war-meta-inf. I want to add a bit more flexibility, and allow having additional zipfilesets, eg. myService-war-etc-1,2 etc - so these will be picked up by the package target automatically. I cannot use "if" inside war target, and also ${ant.refid:myService-war-classes} property is not resolved, so I'm kind of stuck at the moment with my options - how do I dynamically include a zipfileset into a WAR? You can refer to fileset by id, but it MUST be defined then, eg. you can't have it optional on project level. Thank you. Some build.xml snippets: <target name="archive"> <for list="${artifact.names}" param="artifact"> <sequential> <echo>Packaging artifact @{artifact} of project ${project.name}</echo> <property name="display.@{artifact}.packaging" refid="@{artifact}.packaging" /> <echo>${display.@{artifact}.packaging}</echo> <propertyregex property="@{artifact}.archive.type" input="@{artifact}" regexp="([a-zA-Z0-9]*)(\-)([ejw]ar)" select="\3" casesensitive="false"/> <propertyregex property="@{artifact}.base.name" input="@{artifact}" regexp="([a-zA-Z0-9]*)(\-)([ejw]ar)" select="\1" casesensitive="false"/> <echo>${@{artifact}.archive.type}</echo> <if> <then> <war destfile="${jar.dir}/${@{artifact}.base.name}.war" compress="true" needxmlfile="false"> <resources refid="@{artifact}.packaging" /> <zipfileset refid="@{artifact}-classes" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-meta-inf" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-web-inf" erroronmissingdir="false" /> <!-- Additional zipfilesets to package --> <zipfileset refid="@{artifact}-etc-2" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-3" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-4" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-5" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-6" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-7" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-8" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-9" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-10" erroronmissingdir="false" /> </war> </then> <!-- Generic JAR packaging --> <else> <jar destfile="${jar.dir}/@{artifact}.jar" compress="true"> <resources refid="@{artifact}.packaging" /> <zipfileset refid="@{artifact}-meta-inf" erroronmissingdir="false" /> </jar> </else> </if> </sequential> </for> </target>

    Read the article

  • S#arp Architecture many-to-many mapping and ADO.NET Data Services: "A single resource was expected for the result"

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL server database (migrated from a legacy DB) with nHibernate and s#arp architecture through ADO.NET Data services. I'm trying to map a many-to-many relationship. I have a Error class: public class Error { public virtual int ERROR_ID { get; set; } public virtual string ERROR_CODE { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<ErrorGroup> GROUPS { get; protected set; } } And then I have the error group class: public class ErrorGroup { public virtual int ERROR_GROUP_ID {get; set;} public virtual string ERROR_GROUP_NAME { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<Error> ERRORS { get; protected set; } } And the overrides: public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup> { public void Override(AutoMapping<ErrorGroup> mapping) { mapping.Table("ERROR_GROUP"); mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<Error>(x => x.Error) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_GROUP_ID") .ChildKeyColumn("ERROR_ID").Inverse().AsBag(); } } public class ErrorOverride : IAutoMappingOverride<Error> { public void Override(AutoMapping<Error> mapping) { mapping.Table("ERROR"); mapping.Id(x => x.ERROR_ID, "ERROR_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_ID") .ChildKeyColumn("ERROR_GROUP_ID").AsBag(); } } When I view the Data service in the browser like: http://localhost:1905/DataService.svc/Errors it shows the list of errors with no problems, and using it like http://localhost:1905/DataService.svc/Errors(123) works too. The Problem When I want to see the Errors in a group or the groups form an error, like: "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS" I get the XML Document, but the browser says: The XML page cannot be displayed Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later. -------------------------------------------------------------------------------- Only one top level element is allowed in an XML document. Error processing resource 'http://localhost:1905/DataServic... <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> -^ I view the sourcecode, and I get the data. However it comes with an exception: <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <code></code> <message xml:lang="en-US">An error occurred while processing this request.</message> <innererror xmlns="xmlns"> <message>A single resource was expected for the result, but multiple resources were found.</message> <type>System.InvalidOperationException</type> <stacktrace> at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)&#xD; at System.Data.Services.ResponseBodyWriter.Write(Stream stream)</stacktrace> </innererror> </error> A I missing something??? Where does this error come from?

    Read the article

  • Recommended integration mechanism for a bi-directional, authenticated, encrypted connection between a C client and a Java server

    - by rcampbell
    Let me first give an example. Imagine you have a single server running a JVM application. This server keeps a collection of N equations, once for each client: Client #1: 2x Client #2: 1 + y Client #3: z/4 This server includes an HTTP interface so that random visitors can type https://www.acme.com/client/3 int their browsers and see the latest evaluated result of z/4. The tricky part is that either the client or the server may change the variable value at any time, informing the other party immediately. More specifically, Client #3 - a C app - can initially tell the server that z = 20. An hour later that same client informs the server that z = 23. Likewise the server can later inform the client that z = 28. As caf pointed out in the comments, there can be a race condition when values are changed by the client and server simultaneously. The solution would be for both client and server to send the operation performed in their message, which would need to be executed by the other party. To keep things simple, let's limit the operations to (commutative) addition, allowing us to disregard message ordering. For example, the client seeds the server with z = 20: server:z=20, client:z=20 server sends {+3} message (so z=23 locally) & client sends {-2} message (so z=18 locally) at the exact same time server receives {-2} message at some point, adds to his local copy so z=21 client receives {+3} message at some point, adds to his local copy so z=21 As long as all messages are eventually evaluated by both parties, the correct answer will eventually be given to the users of the client and server since we limited ourselves to commutative operations (addition of 3 and -2). This does mean that both client and server can be returning incorrect answers in the time it takes for messages to be exchanged and processed. While undesirable, I believe this is unavoidable. Some possible implementations of this idea include: Open an encrypted, always on TCP socket connection for communication Pros: no additional infrastructure needed, client and server know immediately if there is a problem (disconnect) with the other party, fairly straightforward (except the the encryption), native support from both JVM and C platforms Cons: pretty low-level so you end up writing a lot yourself (protocol, delivery verification, retry-on-failure logic), probably have a lot of firewall headaches during client app installation Asynchronous messaging (ex: ActiveMQ) Pros: transactional, both C & Java integration, free up the client and server apps from needing retry logic or delivery verification, pretty straightforward encryption, easy extensibility via message filters/routers/etc Cons: need additional infrastructure (message server) which must never fail, Database or file system as asynchronous integration point Same pros/cons as above but messier RESTful Web Service Pros: simple, possible reuse of the server's existing REST API, SSL figures out the encryption problem for you (maybe use RSA key a la GitHub for authentication?) Cons: Client now needs to run a C HTTP REST server w/SSL, client and server need retry logic. Axis2 has both a Java and C version, but you may be limited to SOAP. What other techniques should I be evaluating? What real world experiences have you had with these mechanisms? Which do you recommend for this problem and why?
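
    To make the commutative-delta idea from the question concrete, here is a tiny Java sketch (server side, since the server is a JVM app) of a variable that only ever receives additive operations, so messages can be applied in whatever order they arrive and both sides converge once all messages are delivered. The class and method names are invented, and the transport (socket, message queue, REST) is deliberately left out.

        import java.util.concurrent.atomic.AtomicLong;

        // Sketch: a per-client variable updated only by commutative "+delta" messages,
        // so the order in which client and server messages arrive does not matter.
        public class CommutativeVariable {

            private final AtomicLong value;

            public CommutativeVariable(long seed) {
                this.value = new AtomicLong(seed);   // e.g. client seeds z = 20
            }

            // Apply a delta received from the other party (or produced locally).
            public long apply(long delta) {
                return value.addAndGet(delta);
            }

            public long current() {
                return value.get();
            }

            public static void main(String[] args) {
                // Replaying the example from the question: z = 20, server applies +3, client applies -2.
                CommutativeVariable serverCopy = new CommutativeVariable(20);
                CommutativeVariable clientCopy = new CommutativeVariable(20);

                serverCopy.apply(+3);   // server's local change; message {+3} goes to the client
                clientCopy.apply(-2);   // client's local change; message {-2} goes to the server

                // Messages cross on the wire and are applied whenever they arrive.
                serverCopy.apply(-2);
                clientCopy.apply(+3);

                System.out.println("server sees z = " + serverCopy.current()); // 21
                System.out.println("client sees z = " + clientCopy.current()); // 21
            }
        }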

    Read the article

  • Spring hibernate ehcache setup

    - by Johan Sjöberg
    I have some problems getting the hibernate second level cache to work for caching domain objects. According to the ehcache documentation it shouldn't be too complicated to add caching to my existing working application. I have the following setup (only relevant snippets are outlined): @Entity @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE public void Entity { // ... } ehcache-entity.xml <cache name="com.company.Entity" eternal="false" maxElementsInMemory="10000" overflowToDisk="true" diskPersistent="false" timeToIdleSeconds="0" timeToLiveSeconds="300" memoryStoreEvictionPolicy="LRU" /> ApplicationContext.xml <bean class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="ds" /> <property name="annotatedClasses"> <list> <value>com.company.Entity</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.generate_statistics">true</prop> <prop key="hibernate.cache.use_second_level_cache">true</prop> <prop key="net.sf.ehcache.configurationResourceName">/ehcache-entity.xml</prop> <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory</prop> .... </property> </bean> Maven dependencies <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-annotations</artifactId> <version>3.4.0.GA</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-hibernate3</artifactId> <version>2.0.8</version> <exclusions> <exclusion> <artifactId>hibernate</artifactId> <groupId>org.hibernate</groupId> </exclusion> </exclusions> </dependency> <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache-core</artifactId> <version>2.3.2</version> </dependency> A test class is used which enables cache statistics: Cache cache = cacheManager.getCache("com.company.Entity"); cache.setStatisticsAccuracy(Statistics.STATISTICS_ACCURACY_GUARANTEED); cache.setStatisticsEnabled(true); // store, read etc ... cache.getStatistics().getMemoryStoreObjectCount(); // returns 0 No operation seems to trigger any cache changes. What am I missing? Currently I'm using HibernateTemplate in the DAO, perhaps that has some impact. [EDIT] The only ehcache log output when set to DEBUG is: SettingsFactory: Cache region factory : net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory
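
    One diagnostic step worth trying, offered as a sketch rather than a diagnosis: since hibernate.generate_statistics is already enabled, ask the SessionFactory itself whether the "com.company.Entity" region is being hit, and make sure the reads go through session.get()/load() in a fresh session (plain HQL/Criteria queries bypass the entity cache unless the query cache is also enabled). The snippet assumes the Hibernate 3-style statistics API used by the question's stack, and the DAO/session wiring is made up.

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.stat.SecondLevelCacheStatistics;
        import org.hibernate.stat.Statistics;

        // Sketch: check whether the second-level cache is actually being used,
        // instead of inspecting the Ehcache MemoryStore directly.
        public class CacheCheck {

            private final SessionFactory sessionFactory; // injected, e.g. from the Spring context

            public CacheCheck(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            public void loadTwiceAndReport(Long id) {
                // First load: expect a cache miss plus a put.
                Session s1 = sessionFactory.openSession();
                s1.get(com.company.Entity.class, id);
                s1.close();

                // Second load in a *new* session: should be served from the second-level cache.
                Session s2 = sessionFactory.openSession();
                s2.get(com.company.Entity.class, id);
                s2.close();

                Statistics stats = sessionFactory.getStatistics();
                System.out.println("2LC puts:   " + stats.getSecondLevelCachePutCount());
                System.out.println("2LC hits:   " + stats.getSecondLevelCacheHitCount());
                System.out.println("2LC misses: " + stats.getSecondLevelCacheMissCount());

                SecondLevelCacheStatistics region =
                        stats.getSecondLevelCacheStatistics("com.company.Entity");
                if (region != null) {
                    System.out.println("entities in region: " + region.getElementCountInMemory());
                }
            }
        }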

    Read the article

  • Why is the dropdown menu aligned to the left?

    - by fmz
    I am working on a dropdown navigation for a site and am having some trouble with the dropdown portion aligning with the parent category - it shifts all the way to the left. Here is the html: <ul class="dropdown"> <li><a href="#" id="home">Home</a></li> <li><a href="#" id="about">About Us</a> <ul class="sub-menu"> <li><a href="#">Our History</a></li> <li><a href="#">Our Process</a></li> <li><a href="#">Portfolio</a></li> <li><a href="#">Financing</a></li> <li><a href="#">Testimonials</a></li> <li><a href="#">Subcontractors</a></li> </ul> </li> <li><a href="#" id="personal">Personal Banking</a></li> <li><a href="#" id="commercial">Commercial Banking</a></li> <li><a href="#" id="service">Customer Service</a> <ul class="sub-menu"> <li><a href="#">Our History</a></li> <li><a href="#">Our Process</a></li> <li><a href="#">Portfolio</a></li> <li><a href="#">Financing</a></li> <li><a href="#">Testimonials</a></li> <li><a href="#">Subcontractors</a></li> </ul> </li> <li><a href="#" id="investors">Investor Relations</a></li> <li><a href="#" id="contact">Contact Us</a></li> </ul> Here is the CSS: ul.dropdown { position: relative; background: #4e8997; height: 40px; padding-left: 5px; } ul.dropdown li { float: left; zoom: 1; } ul.dropdown li a { display: block; margin-top: 5px; padding: .5em .6em; color: #fff; font: bold 14px "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; text-transform: uppercase; border: none; } ul.dropdown a:hover { background-color: #c29c5d; color: #fff; } ul.dropdown a:active { background-color: #c29c5d; color: #fff; } /* LEVEL TWO */ ul.dropdown ul { width: 200px; visibility: hidden; position: absolute; top:100%; left: 0; } ul.dropdown ul li { font: 13px "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; border-bottom: 1px solid #ccc; float: none; color: #fff; background-color: #c29c5d; height: 20px; } ul.dropdown ul li a { display: inline-block; } ul.dropdown ul li a:hover { background-color: #a2834d; color: #fff; height: 20px; } I tried changing the ul.dropdown ul to position relative, but that breaks the navigation. I would appreciate some help getting this corrected. Thanks.

    Read the article

  • Java List to Excel Columns

    - by Nitin
    Correct me where I'm going wrong. I'm have written a program in Java which will get list of files from two different directories and make two (Java list) with the file names. I want to transfer the both the list (downloaded files list and Uploaded files list) to an excel. What the result i'm getting is those list are transferred row wise. I want them in column wise. Given below is the code: public class F { static List<String> downloadList = new ArrayList<>(); static List<String> dispatchList = new ArrayList<>(); public static class FileVisitor extends SimpleFileVisitor<Path> { @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { String name = file.toRealPath().getFileName().toString(); if (name.endsWith(".pdf") || name.endsWith(".zip")) { downloadList.add(name); } if (name.endsWith(".xml")) { dispatchList.add(name); } return FileVisitResult.CONTINUE; } } public static void main(String[] args) throws IOException { try { Path downloadPath = Paths.get("E:\\report\\02_Download\\10252013"); Path dispatchPath = Paths.get("E:\\report\\01_Dispatch\\10252013"); FileVisitor visitor = new FileVisitor(); Files.walkFileTree(downloadPath, visitor); Files.walkFileTree(downloadPath, EnumSet.of(FileVisitOption.FOLLOW_LINKS), 1, visitor); Files.walkFileTree(dispatchPath, visitor); Files.walkFileTree(dispatchPath, EnumSet.of(FileVisitOption.FOLLOW_LINKS), 1, visitor); System.out.println("Download File List" + downloadList); System.out.println("Dispatch File List" + dispatchList); F f = new F(); f.UpDown(downloadList, dispatchList); } catch (Exception ex) { Logger.getLogger(F.class.getName()).log(Level.SEVERE, null, ex); } } int rownum = 0; int colnum = 0; HSSFSheet firstSheet; Collection<File> files; HSSFWorkbook workbook; File exactFile; { workbook = new HSSFWorkbook(); firstSheet = workbook.createSheet("10252013"); Row headerRow = firstSheet.createRow(rownum); headerRow.setHeightInPoints(40); } public void UpDown(List<String> download, List<String> upload) throws Exception { List<String> headerRow = new ArrayList<>(); headerRow.add("Downloaded"); headerRow.add("Uploaded"); List<List> recordToAdd = new ArrayList<>(); recordToAdd.add(headerRow); recordToAdd.add(download); recordToAdd.add(upload); F f = new F(); f.CreateExcelFile(recordToAdd); f.createExcelFile(); } void createExcelFile() { FileOutputStream fos = null; try { fos = new FileOutputStream(new File("E:\\report\\Download&Upload.xls")); HSSFCellStyle hsfstyle = workbook.createCellStyle(); hsfstyle.setBorderBottom((short) 1); hsfstyle.setFillBackgroundColor((short) 245); workbook.write(fos); } catch (Exception e) { } } public void CreateExcelFile(List<List> l1) throws Exception { try { for (int j = 0; j < l1.size(); j++) { Row row = firstSheet.createRow(rownum); List<String> l2 = l1.get(j); for (int k = 0; k < l2.size(); k++) { Cell cell = row.createCell(k); cell.setCellValue(l2.get(k)); } rownum++; } } catch (Exception e) { } finally { } } } (The purpose is to verify the files Downloaded and Uploaded for the given date) Thanks.
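
    If the goal is downloaded names in column A and uploaded names in column B, the usual fix is to loop over row indices and place the i-th element of each list into its own column, instead of turning each list into a row. Below is a hedged sketch with Apache POI (HSSF, as in the question); the file path, sheet name and sample data are placeholders.

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.Arrays;
        import java.util.List;

        import org.apache.poi.hssf.usermodel.HSSFWorkbook;
        import org.apache.poi.ss.usermodel.Row;
        import org.apache.poi.ss.usermodel.Sheet;
        import org.apache.poi.ss.usermodel.Workbook;

        // Sketch: two lists written as two columns (not two rows).
        public class ColumnWiseExcel {

            public static void writeColumns(List<String> downloaded, List<String> uploaded, String path)
                    throws IOException {
                Workbook workbook = new HSSFWorkbook();
                Sheet sheet = workbook.createSheet("10252013");

                // Header row.
                Row header = sheet.createRow(0);
                header.createCell(0).setCellValue("Downloaded");
                header.createCell(1).setCellValue("Uploaded");

                // One row per index; column 0 from the first list, column 1 from the second.
                int rows = Math.max(downloaded.size(), uploaded.size());
                for (int i = 0; i < rows; i++) {
                    Row row = sheet.createRow(i + 1);
                    if (i < downloaded.size()) {
                        row.createCell(0).setCellValue(downloaded.get(i));
                    }
                    if (i < uploaded.size()) {
                        row.createCell(1).setCellValue(uploaded.get(i));
                    }
                }

                try (FileOutputStream out = new FileOutputStream(path)) {
                    workbook.write(out);
                }
            }

            public static void main(String[] args) throws IOException {
                writeColumns(
                        Arrays.asList("a.pdf", "b.zip", "c.pdf"),
                        Arrays.asList("a.xml", "b.xml"),
                        "Download&Upload.xls");
            }
        }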

    Read the article

  • Read files from a directory to create a ZIP in Hadoop

    - by Félix
    I'm looking for hadoop examples, something more complex than the wordcount example. What I want to do It's read the files in a directory in hadoop and get a zip, so I have thought to collect al the files in the map class and create the zip file in the reduce class. Can anyone give me a link to a tutorial or example than can help me to built it? I don't want anyone to do this for me, i'm asking for a link with better examples than the wordaccount. This is what I have, maybe it's useful for someone public class Testing { private static class MapClass extends MapReduceBase implements Mapper<LongWritable, Text, Text, BytesWritable> { // reuse objects to save overhead of object creation Logger log = Logger.getLogger("log_file"); public void map(LongWritable key, Text value, OutputCollector<Text, BytesWritable> output, Reporter reporter) throws IOException { String line = ((Text) value).toString(); log.info("Doing something ... " + line); BytesWritable b = new BytesWritable(); b.set(value.toString().getBytes() , 0, value.toString().getBytes() .length); output.collect(value, b); } } private static class ReduceClass extends MapReduceBase implements Reducer<Text, BytesWritable, Text, BytesWritable> { Logger log = Logger.getLogger("log_file"); ByteArrayOutputStream bout; ZipOutputStream out; @Override public void configure(JobConf job) { super.configure(job); log.setLevel(Level.INFO); bout = new ByteArrayOutputStream(); out = new ZipOutputStream(bout); } public void reduce(Text key, Iterator<BytesWritable> values, OutputCollector<Text, BytesWritable> output, Reporter reporter) throws IOException { while (values.hasNext()) { byte[] data = values.next().getBytes(); ZipEntry entry = new ZipEntry("entry"); out.putNextEntry(entry); out.write(data); out.closeEntry(); } BytesWritable b = new BytesWritable(); b.set(bout.toByteArray(), 0, bout.size()); output.collect(key, b); } @Override public void close() throws IOException { // TODO Auto-generated method stub super.close(); out.close(); } } /** * Runs the demo. */ public static void main(String[] args) throws IOException { int mapTasks = 20; int reduceTasks = 1; JobConf conf = new JobConf(Prue.class); conf.setJobName("testing"); conf.setNumMapTasks(mapTasks); conf.setNumReduceTasks(reduceTasks); MultipleInputs.addInputPath(conf, new Path("/messages"), TextInputFormat.class, MapClass.class); conf.setOutputKeyClass(Text.class); conf.setOutputValueClass(BytesWritable.class); FileOutputFormat.setOutputPath(conf, new Path("/czip")); conf.setMapperClass(MapClass.class); conf.setCombinerClass(ReduceClass.class); conf.setReducerClass(ReduceClass.class); // Delete the output directory if it exists already JobClient.runJob(conf); } }
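
    Not a MapReduce tutorial, but it may be worth asking whether a job is needed at all here: if the goal is just "all files in one HDFS directory, zipped", the plain FileSystem API (the same old-style API the snippet above uses) can stream each file into a ZipOutputStream. A sketch under those assumptions, reusing the /messages and /czip paths from the question and assuming a flat directory with no recursion.

        import java.io.IOException;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataInputStream;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.IOUtils;

        // Sketch: zip every file in one HDFS directory into a single archive, no MapReduce job.
        public class HdfsDirToZip {

            public static void zipDirectory(String srcDir, String zipFile) throws IOException {
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(conf);

                ZipOutputStream zip = new ZipOutputStream(fs.create(new Path(zipFile)));
                try {
                    for (FileStatus status : fs.listStatus(new Path(srcDir))) {
                        if (status.isDir()) {
                            continue;                       // flat directory assumed
                        }
                        zip.putNextEntry(new ZipEntry(status.getPath().getName()));
                        FSDataInputStream in = fs.open(status.getPath());
                        try {
                            // false: don't let copyBytes close the zip stream between entries
                            IOUtils.copyBytes(in, zip, conf, false);
                        } finally {
                            in.close();
                        }
                        zip.closeEntry();
                    }
                } finally {
                    zip.close();
                }
            }

            public static void main(String[] args) throws IOException {
                zipDirectory("/messages", "/czip/messages.zip");   // example paths from the question
            }
        }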

    Read the article

  • In the following list, how do I prevent jQuery from interacting with part of it?

    - by kylex
    I have the following list structure: <ul> <li>One</li> <li class="topActive">Two <ul> <li class="active">Two-1 <ul> <li>Two-1-1</li> </ul> </li> <li>Two-2</li> </ul> </li> <li>Three <ul> <li>Three-1</li> </ul> </li> </ul> with the following jQuery: $("ul>li>ul").hide(); $("ul>li>ul>li>ul").hide(); $("ul>li>ul>li>ul>ul>li").hide(); $('.active').parents().show();????????? $("ul>li").hoverIntent( function(){ $(this).children('ul').slideDown('fast'); }, function(){ $(this).children('ul').slideUp('fast'); } ); $("ul>li>ul>li").hoverIntent( function(){ $(this).children('ul').slideDown('fast'); }, function(){ $(this).children('ul').slideUp('fast'); } ); What I would like is this: When an li class is topActive, everything within that class down to the current class can not be affected by the jQuery that causes the list to slide up and down. so in the case provided the following would show, along with the top level: One Two Two-1 Two-2 Three And when I mouse over either, Two, or Two-1 or Two-2 nothing happens, but if I mouse over Three, the jQuery will cause it to slide up or slide down. In this example: <ul> <li>One</li> <li class="topActive">Two <ul> <li class="active">Two-1 <ul> <li>Two-1-1</li> </ul> </li> <li>Two-2</li> </ul> </li> <li>Three <ul> <li>Three-1</li> </ul> </li> </ul> When I mouseover Two-1, it will slide down, but when I mouse out Two-1 will remain. One Two Two-1 Two-2 Three Any suggestsions for either a CSS or jQuery implementation (or mixture of the two)?

    Read the article

  • What's the best way to refactor this Rails controller?

    - by Robert DiNicolas
    I'd like some advice on how to best refactor this controller. The controller builds a page of zones and modules. Page has_many zones, zone has_many modules. So zones are just a cluster of modules wrapped in a container. The problem I'm having is that some modules may have some specific queries that I don't want executed on every page, so I've had to add conditions. The conditions just test if the module is on the page, if it is the query is executed. One of the problems with this is if I add a hundred special module queries, the controller has to iterate through each one. I think I would like to see these module condition moved out of the controller as well as all the additional custom actions. I can keep everything in this one controller, but I plan to have many apps using this controller so it could get messy. class PagesController < ApplicationController # GET /pages/1 # GET /pages/1.xml # Show is the main page rendering action, page routes are aliased in routes.rb def show #-+-+-+-+-Core Page Queries-+-+-+-+- @page = Page.find(params[:id]) @zones = @page.zones.find(:all, :order => 'zones.list_order ASC') @mods = @page.mods.find(:all) @columns = Page.columns # restful params to influence page rendering, see routes.rb @fragment = params[:fragment] # render single module @cluster = params[:cluster] # render single zone @head = params[:head] # render html, body and head #-+-+-+-+-Page Level Json Conversions-+-+-+-+- @metas = @page.metas ? ActiveSupport::JSON.decode(@page.metas) : nil @javascripts = @page.javascripts ? ActiveSupport::JSON.decode(@page.javascripts) : nil #-+-+-+-+-Module Specific Queries-+-+-+-+- # would like to refactor this process @mods.each do |mod| # Reps Module Custom Queries if mod.name == "reps" @reps = User.find(:all, :joins => :roles, :conditions => { :roles => { :name => 'rep' } }) end # Listing-poc Module Custom Queries if mod.name == "listing-poc" limit = params[:limit].to_i < 1 ? 10 : params[:limit] PropertyEntry.update_from_listing(mod.service_url) @properties = PropertyEntry.all(:limit => limit, :order => "city desc") end # Talents-index Module Custom Queries if mod.name == "talents-index" @talent = params[:type] @reps = User.find(:all, :joins => :talents, :conditions => { :talents => { :name => @talent } }) end end respond_to do |format| format.html # show.html.erb format.xml { render :xml => @page.to_xml( :include => { :zones => { :include => :mods } } ) } format.json { render :json => @page.to_json } format.css # show.css.erb, CSS dependency manager template end end # for property listing ajax request def update_properties limit = params[:limit].to_i < 1 ? 10 : params[:limit] offset = params[:offset] @properties = PropertyEntry.all(:limit => limit, :offset => offset, :order => "city desc") #render :nothing => true end end So imagine a site with a hundred modules and scores of additional controller actions. I think most would agree that it would be much cleaner if I could move that code out and refactor it to behave more like a configuration.

    Read the article

  • Cannot create Desktop shortcut

    - by Pantelis
    I have a WiX project and I want to automatically create a ProgramMenu and Desktop shortcut. I've tried the following but the Desktop shortcut is not created. The ProgramMenu shortcut works great. <Product Id="*" Name="Application Name" Language="1033" Version="1.0.0.0" Manufacturer="Company Name"> <Package InstallerVersion="200" Compressed="yes" InstallScope="perMachine" Description="A description" Comments="Some Comments" /> <MajorUpgrade DowngradeErrorMessage="A newer version of [ProductName] is already installed." /> <MediaTemplate EmbedCab="yes"/> <!-- Minimal UI --> <UIRef Id="WixUI_Minimal"/> <!-- Adding the referenced components --> <Feature Id="Complete" Title="inStorHDRadio Complete" Level="1"> <ComponentGroupRef Id="InstallationComponents" /> <ComponentRef Id="ApplicationProgramsMenuShortcut"/> <ComponentRef Id="ApplicationDesktopShortcut"/> </Feature> </Product> <Fragment> <Directory Id="TARGETDIR" Name="SourceDir"> <!-- Installation Folder --> <Directory Id="ProgramFilesFolder"> <Directory Id="CompanyFolder" Name="CompanyName"> <Directory Id="InstallationFolder" Name="ApplicationName"/> </Directory> </Directory> <!-- Programs Menu Shortcut Folder --> <Directory Id="ProgramMenuFolder" Name="ProgramsMenu"> <Directory Id="ProgramsMenuCompanyFolder" Name="CompanyName"> <Directory Id="ProgramsMenuShortcutFolder" Name="ApplicationName"/> </Directory> </Directory> <!-- Desktop Shortcut Folder --> <Directory Id="DesktopShortcutFolder" Name="Desktop"/> </Directory> </Fragment> <!-- Compoments --> <Fragment> <ComponentGroup Id="inStorHDRadioComponents" Directory="InstallationFolder"> <!-- All application components in Program Files --> </ComponentGroup> <!-- SHORTCUTS --> <!--ProgramsMenu--> <DirectoryRef Id='ProgramsMenuShortcutFolder'> <Component Id='ApplicationProgramsMenuShortcut'> <RemoveFolder Id='RemoveProgramsMenuShortcutFolder' Directory='ProgramsMenuShortcutFolder' On='uninstall' /> <RemoveFolder Id='RemoveProgramsMenuCompanyFolder' Directory='ProgramsMenuCompanyFolder' On='uninstall' /> <Shortcut Id='ApplicationProgramsMenuShortcut' Name='Company Name' Target='[#Application.exe]' WorkingDirectory='InstallationFolder' Icon='application.ico' /> <RegistryValue Name='RegistryValueProgramMenuShortcut' Root='HKCU' Key='Software\Microsoft\[Manufacturer]\[ProductName]' Type='integer' Value='1' /> </Component> </DirectoryRef> <!--Desktop--> <DirectoryRef Id='DesktopShortcutFolder'> <Component Id='ApplicationDesktopShortcut'> <RemoveFolder Id='RemoveDesktopShortcutFolder' Directory='DesktopShortcutFolder' On='uninstall'/> <Shortcut Id='ApplicationDesktopShortcut' Name='Application Name' Target='[#Bootstrapper.exe]' WorkingDirectory='InstallationFolder' Directory='DesktopShortcutFolder' Advertise='no' Icon='application.ico'/> <RegistryValue Name='RegistryValDesktopShortcut' Root='HKCU' Key='Software\[Manufacturer]\[ProductName]' KeyPath='yes' Type='integer' Value='1' /> </Component> </DirectoryRef> </Fragment> <Fragment> <Icon Id="application.ico" SourceFile="Files\application.ico" /> <Icon Id="programs.ico" SourceFile="Files\programs.ico"/> <Property Id="ARPPRODUCTICON" Value="programs.ico" /> <Property Id="ARPHELPLINK" Value="http://www.company.com" /> </Fragment> Whats wrong with the code? The ProgramMenu shortcut is working perfectly fine, but the desktop one is not getting created.

    Read the article

  • Deleting an XML file using a radio button value

    - by ???? ???
    i using php to delete file, but i got table loop like this: <table border="0" width="100%" cellpadding="0" cellspacing="0" id="product-table"> <tr class="bg_tableheader"> <th class="table-header-check"><a id="toggle-all" ></a> </th> <th class="table-header-check"><a href="#"><font color="white">Username</font></a> </th> <th class="table-header-check"><a href="#"><font color="white">First Name</font></a></th> <th class="table-header-check"><a href="#"><font color="white">Last Name</font></a></th> <th class="table-header-check"><a href="#"><font color="white">Email</font></a></th> <th class="table-header-check"><a href="#"><font color="white">Group</font></a></th> <th class="table-header-check"><a href="#"><font color="white">Birthday</font></a></th> <th class="table-header-check"><a href="#"><font color="white">Gender</font></a></th> <th class="table-header-check"><a href="#"><font color="white">Age</font></a></th> <th class="table-header-check"><a href="#"><font color="white">Country</font></a></th> </tr> <?php $files = glob('users/*.xml'); foreach($files as $file){ $xml = new SimpleXMLElement($file, 0, true); echo ' <tr> <td></td> <form action="" method="post"> <td class="alternate-row1"><input type="radio" name="file_name" value="'. basename($file, '.xml') .'" />'. basename($file, '.xml') .'</td> <td>'. $xml->name .'&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp&nbsp</td> <td class="alternate-row1">'. $xml->lastname .'</td> <td>'. $xml->email .'</td> <td class="alternate-row1">'. $xml->level .'</td> <td>'. $xml->birthday .'</td> <td class="alternate-row1">'. $xml->gender .'</td> <td>'. $xml->age .'</td> <td class="alternate-row1">'. $xml->country .'</td> </tr>'; } ?> </table> </div> <?php if(isset($_POST['file_name'])){ unlink('users/'.$_POST['file_name']); } ?> <input type="submit" value="Delete" /> </form> so as you can see i got radio value set has basename (xml file name) but from some reason it not working, any idea why is that? Thanks in advance.

    Read the article

  • Please recommend me intermediate-to-advanced Python books to buy.

    - by anonnoir
    I'm in the final year, final semester of my law degree, and will be graduating very soon. (April, to be specific.) But before I begin practice, I plan to take 2 two months off, purely for serious programming study. So I'm currently looking for some Python-related books, gauged intermediate to advanced, which are interesting (because of the subject matter itself) and possibly useful to my future line of work. I've identified 2 possible purchases at the moment: Natural Language Processing with Python. The law deals mostly with words, and I've quite a number of ideas as to where I might go with NLP. Data extraction, summaries, client management systems linked with document templates, etc. Programming Collective Intelligence. This book fascinates me, because I've always liked the idea of machine learning (and I'm currently studying it by the side too, for fun). I'd like to build/play around with Web 2.0 applications; and who knows if I can apply some of the things I learn to my legal work. (E.g. Playground experiments to determine how and under what circumstances judges might be biased, by forcing algorithms to pore through judgments and calculate similarities, etc.) Please feel free to criticize my current choices, but do at least offer or recommend other books that I should read in their place. My budget can deal with 4 books, max. These books will be used heavily throughout the 2 months; I will be reading them back to back, absorbing the explanations given, and hacking away at their code. Also, the books themselves should satisfy 2 main criteria: Application. The book must teach how to solve problems. I like reading theory, but I want to build things and solve problems first. Even playful applications are fine, because games and experiments always have real-world applications sooner or later. Readability. I like reading technical books, no matter how difficult they are. I enjoy the effort and the feeling that you're learning something. But the book shouldn't contain code or explanations that are too cryptic or erratic. Even if it's difficult, the book's content should be accessible with focused reading. Note: I realize that I am somewhat of a beginner to the whole programming thing, so please don't put me down. But from experience, I think it's better to aim up and leave my comfort zone when learning new things, rather than to just remain stagnant the way I am. (At least the difficulty gives me focus: i.e. if a programmer can be that good, perhaps if I sustain my own efforts I too can be as good as him someday.) If anything, I'm also a very determined person, so two months of day-to-night intensive programming study with nothing else on my mind should, I think, give me a bit of a fighting chance to push my programming skills to a much higher level.

    Read the article

  • More localized, efficient Lowest Common Ancestor algorithm given multiple binary trees?

    - by mstksg
    I have multiple binary trees stored as an array. In each slot is either nil (or null; pick your language) or a fixed tuple storing two numbers: the indices of the two "children". No node will have only one child -- it's either none or two. Think of each slot as a binary node that only stores pointers to its children, and no inherent value. Take this system of binary trees: 0 1 / \ / \ 2 3 4 5 / \ / \ 6 7 8 9 / \ 10 11 The associated array would be: 0 1 2 3 4 5 6 7 8 9 10 11 [ [2,3] , [4,5] , [6,7] , nil , nil , [8,9] , nil , [10,11] , nil , nil , nil , nil ] I've already written simple functions to find direct parents of nodes (simply by searching from the front until there is a node that contains the child) Furthermore, let us say that at relevant times, both all trees are anywhere between a few to a few thousand levels deep. I'd like to find a function P(m,n) to find the lowest common ancestor of m and n -- to put more formally, the LCA is defined as the "lowest", or deepest node in which have m and n as descendants (children, or children of children, etc.). If there is none, a nil would be a valid return. Some examples, given our given tree: P( 6,11) # => 2 P( 3,10) # => 0 P( 8, 6) # => nil P( 2,11) # => 2 The main method I've been able to find is one that uses an Euler trace, which turns the given tree, with a node A to be the invisible parent of 0 and 1 with a depth of -1, into: A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A And from that, simply find the node between your given m and n that has the lowest number; For example, to find P(6,11), look for a 6 and an 11 on the trace. The number between them that is the lowest is 2, and that's your answer. If A is in between them, return nil. -- Calculating P(6,11) -- A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A ^ ^ ^ | | | m lowest n Unfortunately, I do believe that finding the Euler trace of a tree that can be several thousands of levels deep is a bit machine-taxing...and because my tree is constantly being changed throughout the course of the programming, every time I wanted to find the LCA, I'd have to re-calculate the Euler trace and hold it in memory every time. Is there a more memory efficient way, given the framework I'm using? One that maybe iterates upwards? One way I could think of would be the "count" the generation/depth of both nodes, and climb the lowest node until it matched the depth of the highest, and increment both until they find someone similar. But that'd involve climbing up from level, say, 3025, back to 0, twice, to count the generation, and using a terribly inefficient climbing-up algorithm in the first place, and then re-climbing back up. Are there any other better ways?

    Read the article

  • Internet explorer and floats: please explain

    - by cletus
    Yesterday someone asked Width absorbing HTML elements. I presented two solutions: one table-based and one pure CSS. Now the pure CSS one works well in Firefox and Chrome but not in IE. Basically the floats are being bumped down to the next line. It is my understanding (and the behaviour of FF and Chrome) that this should not be the case because the left divs are block level elements that floats should basically ignore. Complete code example is below. Adding a DOCTYPE to force IE into standards compliant mode helps slightly but the problem remains. So my question is: am I mistaken about my understanding of floats or is this IE's problem? More importantly, how do I get this to work in IE? It's been bugging the hell out of me. <html> <head> <style type="text/css"> div div { height: 1.3em; } #wrapper { width: 300px; overflow: hidden; } div.text { float: right; white-space: nowrap; clear: both; background: white; padding-left: 12px; text-align: left; } #row1, #row2, #row3, #row4, #row5, #row6 { width: 270px; margin-bottom: 4px; } #row1 { background: red; } #row2 { background: blue; } #row3 { background: green; } #row4 { background: yellow; } #row5 { background: pink; } #row6 { background: gray; } </style> <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1.3.2"); google.setOnLoadCallback(function() { $(function() { $("div.text").animate({ width: "90%" }, 2000); }); }); </script> </head> <body> <div id="wrapper"> <div class="text">FOO</div><div id="row1"></div> <div class="text">BAR</div><div id="row2"></div> <div class="text">THESE PRETZELS ARE</div><div id="row3"></div> <div class="text">MAKING ME THIRSTY</div><div id="row4"></div> <div class="text">BLAH</div><div id="row5"></div> <div class="text">BLAH</div><div id="row6"></div> </div> </body> </html>

    Read the article

  • How can a wix custom action dll call be made to use the debug runtime via a merge module?

    - by Benj
    I'm trying to create a debug build with a corresponding debug installer for our product. I'm new to Wix so please forgive any naivety contained herein. The debug Dlls in my project are dependent on both the VS2008 and the VS2008SP1 debug runtimes. I've created a merge module feature in wix to bundle those runtimes with my installer. <Include xmlns="http://schemas.microsoft.com/wix/2006/wi"> <!-- Include our 'variables' file --> <!--<?include variables.wxi ?>--> <!--<Fragment>--> <DirectoryRef Id="TARGETDIR"> <!-- Always install the 32 bit ATL/CRT libraries, but only install the 64 bit ones on a 64 bit build --> <Merge Id="AtlFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_ATL_x86.msm" DiskId="1" Language="1033"/> <Merge Id="AtlPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_ATL_x86.msm" DiskId="1" Language="1033"/> <Merge Id="CrtFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugCRT_x86.msm" DiskId="1" Language="1033"/> <Merge Id="CrtPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugCRT_x86.msm" DiskId="1" Language="1033"/> <Merge Id="MfcFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugMFC_x86.msm" DiskId="1" Language="1033"/> <Merge Id="MfcPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugMFC_x86.msm" DiskId="1" Language="1033"/> <!-- If this is a 64 bit build, install the relevant modules --> <?if $(env.Platform) = "x64" ?> <Merge Id="AtlFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_ATL_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="AtlPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_ATL_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="CrtFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugCRT_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="CrtPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugCRT_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="MfcFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugMFC_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="MfcPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugMFC_x86_x64.msm" DiskId="1" Language="1033"/> <?endif?> </DirectoryRef> <Feature Id="MS2008_SP1_DbgRuntime" Title="VC2008 Debug Runtimes" AllowAdvertise="no" Display="hidden" Level="1"> <!-- 32 bit libraries --> <MergeRef Id="AtlFiles_x86"/> <MergeRef Id="AtlPolicy_x86"/> <MergeRef Id="CrtFiles_x86"/> <MergeRef Id="CrtPolicy_x86"/> <MergeRef Id="MfcFiles_x86"/> <MergeRef Id="MfcPolicy_x86"/> <!-- 64 bit libraries --> <?if $(env.Platform) = "x64" ?> <MergeRef Id="AtlFiles_x64"/> <MergeRef Id="AtlPolicy_x64"/> <MergeRef Id="CrtFiles_x64"/> <MergeRef Id="CrtPolicy_x64"/> <MergeRef Id="MfcFiles_x64"/> <MergeRef Id="MfcPolicy_x64"/> <?endif?> </Feature> <!--</Fragment>--> </Include> If I'm doing a debug build of the installer, I include that feature like so: <!-- The 'Feature' that contains the debug CRT/ATL libraries --> <?if $(var.Configuration) = "Debug"?> <?include ..\includes\MS2008_SP1_DbgRuntime.wxi?> <?endif?> The only problem is that my installer also includes a custom action which is also dependent on the debug runtime: <!-- Private key installer --> <Binary Id="InstallPrivateKey" 
SourceFile="..\InstallPrivateKey\win32\$(var.Configuration)\InstallPrivateKey.dll"></Binary> <CustomAction Id='InstallKey' BinaryKey='InstallPrivateKey' DllEntry='InstallPrivateKey'/> So how can I package the debug run time in such a way that the custom action also has access to it?

    Read the article

  • Spring MVC application - URL gives No file found (404)

    - by user1700184
    I created a Spring-MVC project. web.xml: <servlet> <servlet-name>mvc-dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>mvc-dispatcher</servlet-name> <url-pattern>/soundmails</url-pattern> </servlet-mapping> mvc-dispatcher-servlet.xml <?xml version="1.0"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:context="http://www.springframework.org/schema/context" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <mvc:annotation-driven /> <context:component-scan base-package="somepkg.controllers" /> <bean id="multipartResolver" class="org.gmr.web.multipart.GMultipartResolver"> <property name="maxUploadSize" value="1048576" /> </bean> <bean id="placeholderConfig" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <!-- property name="location"> <value>/WEB-INF/social.properties</value> </property--> </bean> <bean id="jacksonMessageConverter" class="org.springframework.http.converter.json.MappingJacksonHttpMessageConverter"></bean> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"> <property name="messageConverters"> <list> <ref bean="jacksonMessageConverter"/> </list> </property> </bean> </beans> The controller has this code: ProjectController.java @Controller @RequestMapping("/soundmails") public class FileUploadController { @RequestMapping(value="/test", method=RequestMethod.GET) public @ResponseBody String test() { System.out.println("Hai"); return "Hai"; } } I am using Google App Engine in my local machine to test this. I am getting these in my log: [INFO] Oct 24, 2013 1:54:18 AM com.google.appengine.tools.development.LocalResourceFileServlet doGet [INFO] WARNING: No file found for: /soundmails/test I tried /soundmails/soundmails/test as well. That is also giving the same error. I am using Spring 3.1.0.RELEASE Can someone help me figure out what I am missing - /soundmails/test is giving 404 error. Edit I am unable to enable DEBUG logs for this. For some reason, it is not taking log level configured in logging.properties But I observed something interesting: 1) If I map the request to empty string (value = "") @RequestMapping(value="", method=RequestMethod.GET) public @ResponseBody String test() { System.out.println("Hai"); return "Hai"; } Then, when I try to access 127.0.0.1/soundmails, it works fine (returns string "Hai"). 2) When I have value="/test" @RequestMapping(value="/test", method=RequestMethod.GET) public @ResponseBody String test() { System.out.println("Hai"); return "Hai"; } and I try to access 127.0.0.1/soundmails/test, it is giving HTTP 404. This is weird.

    Read the article

< Previous Page | 445 446 447 448 449 450 451 452 453 454 455 456  | Next Page >