Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • ASP.Net Count Download Clicks

    - by Marco
    Hello, I thought this would be easier… I have an asp:HyperLink control with target="_blank" pointing to the file I want the user to download. My plan is to track the number of times users click on this link. I thought about placing it in an AJAX UpdatePanel to catch the postback and avoid a full page refresh; however, HyperLink doesn't have an OnClick event. I could use a LinkButton instead, which has OnClick built in, but it's harder to make the file open in a new window, and I would also have to do something like: Response.AppendHeader("Content-Disposition", "attachment; filename=myImage.jpg"); I've heard that this approach has problems with PPT, PPTX, PPS, and PPSX files. What is your opinion on this? How, and why, would you do it?
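
    One possible direction (a minimal sketch, not a drop-in answer: the counter store, file path, and handler name are all hypothetical) is a LinkButton whose Click handler records the hit and then streams the file itself:

        protected void DownloadLink_Click(object sender, EventArgs e)
        {
            // Record the click (placeholder counter; swap in your own logging or DB call).
            Application.Lock();
            Application["downloadCount"] = (int)(Application["downloadCount"] ?? 0) + 1;
            Application.UnLock();

            // Stream the file to the browser as an attachment.
            Response.ContentType = "application/octet-stream";
            Response.AppendHeader("Content-Disposition", "attachment; filename=myImage.jpg");
            Response.TransmitFile(Server.MapPath("~/files/myImage.jpg"));
            Response.End();
        }

    This keeps the count server-side, at the cost of the LinkButton postback the question mentions; whether the Content-Disposition route behaves well for the Office formats would still need to be tested.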

    Read the article

  • How to not increment the build.number in Ant?

    - by dacracot
    I have many targets in my build.xml for Ant. Generally I run two of them via a shell script: one to construct the application and one to clean up. The shell script checks the exit status of the construction step to decide whether it should clean up or leave the clutter behind so I can determine what went wrong and fix it. So when all goes well, which is the majority of the time, Ant is executed once for construction and once for clean-up, and build.number is incremented for each execution; in steady state, build.number increments by 2. How can I tell Ant not to increment build.number? I would want this for the clean-up run, since it hasn't built anything. I know the obvious answer is to create a separate script for clean-up only, but I'd rather keep the entire build in one build.xml.
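
    A minimal sketch of one way to arrange this (target names here are examples, not taken from the original build.xml): put the <buildnumber> task only in the construction target, so a clean-up-only run never touches build.number:

        <!-- Only the construction target bumps build.number. -->
        <target name="build">
            <buildnumber file="build.number"/>
            <!-- compile, jar, etc. -->
        </target>

        <!-- Clean-up deliberately contains no <buildnumber> task. -->
        <target name="cleanup">
            <delete dir="tmp"/>
        </target>

    If build.number is currently bumped from an init target shared by both, moving the <buildnumber> task into the construction-only path achieves the same thing.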

    Read the article

  • Hadoop: Iterative MapReduce Performance

    - by S.N
    Is it correct to say that parallel computation with iterative MapReduce is justified only when the training data is too large for non-parallel computation of the same logic? I am aware that there is overhead in starting MapReduce jobs; this can be critical for the overall execution time when a large number of iterations is required. I can imagine that in many cases sequential computation is faster than parallel computation with iterative MapReduce, as long as the data set fits in memory. Is that the only benefit of using iterative MapReduce? If not, what other benefits could there be?

    Read the article

  • C++ project type: unicode vs multi-byte; pros and cons

    - by Stefan Valianu
    I'm wondering what the Stack Overflow community thinks when it comes to creating a project (thinking primarily of C++ here) with a Unicode or a multi-byte character set. Are there pros to going Unicode straight from the start, implying all your strings will be in wide format? Are there performance issues or larger memory requirements because of the standard use of a larger character type? Is there an advantage to this approach? Do some processor architectures handle wide characters better? Are there any reasons to make your project Unicode if you don't plan on supporting additional languages? What reasons would one have for creating a project with a multi-byte character set? How do all of the factors above collide in a high-performance environment (such as a modern video game)?

    Read the article

  • Is NoSQL reliable for a small business app?

    - by mamcx
    I'm deciding between going with a NoSQL engine or a regular SQL one for a document management system for small businesses. I have experience with Firebird and SQL Server and have found a good track record of reliability (especially with Firebird). This market is full of crappy "servers" (mostly clone-built PCs), cheap hard disks, rarely any RAID or anything like that; some are in locations where power outages are normal, some don't have a UPS, etc. (I will include off-site auto-backup to external servers, but that doesn't change the internal setup. I know about educating end users on proper setups, but it's unwise to depend on that, so let's stick to the point.) From the design point of view, a schema-less database is the way to go for my system, but I worry whether any of the current options (MongoDB, Tokyo Cabinet, etc.) are like Firebird and survive crashes, malfunctions, and abuse, so that data corruption is very rare. The plan is to store the office documents there and provide a central repository.

    Read the article

  • Error when running a GWTTestCase using maven gwt plugin

    - by adancu
    Hi, I've created a test which extends GWTTestCase, but I'm getting this error when running mvn integration-test gwt:test:

        Running com.myproject.test.ui.GwtTestMyFirstTestCase
        Translatable source found in...
        [WARN] No source path entries; expect subsequent failures
        [ERROR] Unable to find type 'java.lang.Object'
        [ERROR] Hint: Check that your module inherits 'com.google.gwt.core.Core' either directly or indirectly (most often by inheriting module 'com.google.gwt.user.User')
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.1 sec <<< FAILURE!

    GwtTestMyFirstTestCase.java is in /src/test/java, while the GWT module is located in src/main/java; I assume this shouldn't be a problem. I've done everything required according to http://mojo.codehaus.org/gwt-maven-plugin/user-guide/testing.html, and of course my GWT module already inherits com.google.gwt.core.Core indirectly. Here is my pom.xml:

        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.myproject</groupId>
            <artifactId>main</artifactId>
            <packaging>jar</packaging>
            <version>0.0.1-SNAPSHOT</version>
            <name>Main Module</name>
            <properties>
                <gwt.module>com.myproject.MainModule</gwt.module>
            </properties>
            <parent>
                <groupId>com.myproject</groupId>
                <artifactId>app</artifactId>
                <version>0.1.0-SNAPSHOT</version>
            </parent>
            <dependencies>
                <dependency>
                    <groupId>com.myproject</groupId>
                    <artifactId>app-commons</artifactId>
                    <version>0.0.1-SNAPSHOT</version>
                </dependency>
                <dependency>
                    <groupId>com.google.gwt</groupId>
                    <artifactId>gwt-dev</artifactId>
                    <version>${gwt.version}</version>
                    <scope>provided</scope>
                </dependency>
            </dependencies>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-dependency-plugin</artifactId>
                        <configuration>
                            <outputFile>../app/src/main/webapp/WEB-INF/main.tree</outputFile>
                        </configuration>
                    </plugin>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>gwt-maven-plugin</artifactId>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>test</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-jar-plugin</artifactId>
                        <configuration>
                            <classesDirectory>
                                ${project.build.directory}/${project.build.finalName}/${gwt.module}
                            </classesDirectory>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </project>

    Here is the test case, located in /src/test/java/com/myproject/test/ui:

        public class GwtTestMyFirstTestCase extends GWTTestCase {
            @Override
            public String getModuleName() {
                return "com.myproject.MainModule";
            }

            public void testSomething() {
            }
        }

    And here is the GWT module I'm trying to test, located at src/main/java/com/myproject/MainModule.gwt.xml:

        <inherits name='com.myproject.Commons' />
        <source path="site" />
        <source path="com.myproject.test.ui" />
        <set-property name="gwt.suppressNonStaticFinalFieldWarnings" value="true" />
        <entry-point class='com.myproject.site.SiteModuleEntry' />

    Can anyone give me a hint or two about what I'm doing wrong? Thanks in advance.
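
    A guess rather than a confirmed fix: "Unable to find type 'java.lang.Object'" from the GWT compiler usually means the JRE emulation source, which ships in gwt-user, isn't on the classpath, and the pom above only declares gwt-dev. If that is what's happening here, adding gwt-user (the scope is an assumption) might help:

        <dependency>
            <groupId>com.google.gwt</groupId>
            <artifactId>gwt-user</artifactId>
            <version>${gwt.version}</version>
            <scope>provided</scope>
        </dependency>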

    Read the article

  • Fetching email using Ruby on Rails

    - by Shreyas Satish
    I need to fetch email from my Gmail account using RoR.

        require 'net/pop'
        Net::POP3.start('pop.gmail.com', 995, username, password) do |pop|
          if pop.mails.empty?
            puts 'No mail.'
          else
            #pop.each_mail do |mail|
            #  p mail.header
            #  p mail.pop
            puts "Mails present"
            #end
          end
        end

    I get a timeout error:

        /usr/lib/ruby/1.8/timeout.rb:60:in `new': execution expired (Timeout::Error)
            from /usr/lib/ruby/1.8/net/protocol.rb:206:in `old_open'
            from /usr/lib/ruby/1.8/net/protocol.rb:206:in `old_open'
            from /usr/lib/ruby/1.8/net/pop.rb:438:in `do_start'
            from /usr/lib/ruby/1.8/net/pop.rb:432:in `start'
            from script/mail.rb:4

    Thanks and cheers!
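
    A likely culprit, sketched under the assumption that Ruby 1.8's class-level Net::POP3.enable_ssl is available and that username and password are defined as above: Gmail's POP3 endpoint on port 995 only speaks SSL, so a plain Net::POP3 connection tends to hang until it times out exactly like this.

        require 'net/pop'
        require 'openssl'

        # Gmail requires SSL on port 995; enable it before connecting (Ruby 1.8 style).
        Net::POP3.enable_ssl(OpenSSL::SSL::VERIFY_NONE)

        Net::POP3.start('pop.gmail.com', 995, username, password) do |pop|
          if pop.mails.empty?
            puts 'No mail.'
          else
            pop.each_mail do |mail|
              puts mail.header
            end
          end
        end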

    Read the article

  • Java EE 6 - ordering Servlet Request Listeners

    - by Walter White
    Hi all, I finally updated to Java EE 6 (web profile) and would like to control the ordering of my servlet request listeners. I did that before in web.xml by listing the listeners in a particular order. Now I have placed the @WebListener annotation on the listener classes and am trying to figure out how to order them so that they work properly: one must run before another, otherwise it won't have the information it needs and won't work. Also, my listeners don't seem to be invoked at all, even though they're marked with @WebListener. I am running embedded GlassFish 3.0. Another, somewhat related question: ServletRequestListeners in Java EE 6 are still synchronous by default, meaning they're hit first, then servlet filters, right? They aren't asynchronous, where they merely get notified of an event without interrupting execution? Walter
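
    A minimal sketch of the usual workaround (the class names below are hypothetical): the Servlet 3.0 spec doesn't define an invocation order for listeners discovered via @WebListener, but listeners declared in web.xml are invoked in their declaration order, so the ordering-sensitive ones can be declared there instead:

        <!-- web.xml: these listeners run in the order they are declared. -->
        <listener>
            <listener-class>com.example.FirstRequestListener</listener-class>
        </listener>
        <listener>
            <listener-class>com.example.SecondRequestListener</listener-class>
        </listener>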

    Read the article

  • What did I lose when I upgraded?

    - by Richard
    I upgraded my work box from Vista 64-bit to Windows 7 64-bit by doing a format and reinstall. I kept backups of the project, done in MS Visual Studio 2008 (Team), but now it won't compile. I am getting errors on lines in the Microsoft-supplied header files, like "'_In_' not defined" etc. I know it is because I lost some compiler setting or directive. I was sure that the compiler settings would be in the project file; now I see that things like the include-file directories, LIB files, etc. are not. [FYI: the project is a VB.NET GUI with a VC++ DLL talking to a PIC24 micro over USB.] How do I most efficiently get my project back on the road to execution?

    Read the article

  • Incorrect decrement of the reference count

    - by idober
    I have the following problem: in one flow of execution I use alloc, and in the other flow alloc is not needed. At the end of the if statement, in either case, I release the object. When I run 'Build and Analyze' I get an error: "Incorrect decrement of the reference count of an object that is not owned by the caller". How do I solve that?

        UIImage *image;
        int RandomIndex = arc4random() % 10;
        if (RandomIndex < 5) {
            image = [[UIImage alloc] initWithContentsOfFile:@"dd"];
        } else {
            image = [UIImage imageNamed:@"dd"];
        }
        UIImageView *imageLabel = [[UIImageView alloc] initWithImage:image];
        [image release];
        [imageLabel release];
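
    A minimal sketch of one way to satisfy the analyzer (manual retain/release, no ARC): only release what you own, so either skip the release in the imageNamed: branch or, as below, retain that result so ownership is the same in both branches.

        UIImage *image;
        if (arc4random() % 10 < 5) {
            image = [[UIImage alloc] initWithContentsOfFile:@"dd"];   // owned (+1)
        } else {
            image = [[UIImage imageNamed:@"dd"] retain];              // take ownership (+1)
        }

        UIImageView *imageLabel = [[UIImageView alloc] initWithImage:image];
        [image release];    // now balanced in both branches
        // ... use imageLabel (e.g. add it to a view) before releasing it ...
        [imageLabel release];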

    Read the article

  • Visual Studio 2008 built-in web server needs integrated pipeline mode - How?

    - by jdk
    Using Visual Studio 2008 and the built-in web server. In a web handler (.ashx) file:

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = MimeType_text_xvcard;
            context.Response.Headers.Add(HttpHeader_ContentLength, "2138");

    when I try to add an HTTP header I get the exception:

        This operation requires IIS integrated pipeline mode.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.PlatformNotSupportedException: This operation requires IIS integrated pipeline mode.

    I can find information about this error on the Internet, but I need specifics on how to enable integrated pipeline mode (through web.config?) so that HTTP headers can be manipulated. How do I put the built-in web server into integrated pipeline mode? Note: not using full-fledged IIS.
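
    A sketch of one workaround rather than a way to flip a mode: the Visual Studio built-in web server isn't IIS, so it can't be switched into integrated pipeline mode, and what requires that mode is specifically the Response.Headers collection. The older header APIs work on the dev server and in IIS classic mode, so the same header can be set like this (the constants from the snippet above are replaced by literal values purely for illustration):

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/x-vcard";
            // AddHeader/AppendHeader do not require the integrated pipeline,
            // unlike writing to the Response.Headers collection directly.
            context.Response.AddHeader("Content-Length", "2138");
            // ... write the response body ...
        }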

    Read the article

  • Returning errors from AMFPHP on purpose.

    - by Morieris
    When using Flash remoting with AMFPHP, what can I write in PHP that will trigger the 'status' handler that I set up in my Responder in Flash? Or, more generally, how can I determine whether the service call has failed? The ideal solution for me would be to throw an exception server-side in PHP and catch that exception client-side in Flash. How do other people handle server errors with Flash remoting?

        var responder = new Responder(
            function() {
                trace("some normal execution finished successfully. this is fine.");
            },
            function(e) {
                trace("how do I make this trigger when my server tells me something bad happened?");
            }
        );
        myService = new NetConnection();
        myService.connect("http://localhost:88/amfphp/gateway.php");
        myService.call("someclass.someservice", responder);
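
    One direction worth testing, following the question's own idea of throwing server-side (this assumes the AMFPHP gateway forwards uncaught exceptions to the client as AMF faults, which should be verified against the AMFPHP version in use; the class and method names simply mirror the call above):

        <?php
        // services/someclass.php
        class someclass
        {
            public function someservice()
            {
                if (!$this->everythingOk()) {
                    // An uncaught exception should come back to Flash as a fault,
                    // which is what invokes the Responder's status (second) function.
                    throw new Exception('Something bad happened on the server', 500);
                }
                return 'ok';
            }

            private function everythingOk()
            {
                return false; // placeholder for real validation
            }
        }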

    Read the article

  • Microsoft Team Foundation Equivalent stack.

    - by Nix
    I am looking for a free alternative to TFS. What would be the best alternative stack (source control, bug tracking, project management/planning, wiki, automated builds/CI)? Keep in mind it would be nice if they all integrated well: for example, it would be nice to link bugs to source control, then link to a project plan, and then automate the builds. I have no issue with using Microsoft Project to manage project planning. I know I would like to use SVN, TeamCity, and NUnit, but I am struggling to find a good wiki / project planning / bug tracking solution that would integrate well. Any questions, let me know.

    Read the article

  • ColdFusion structs: direct assignment vs. object literal notation

    - by Tom Hubbard
    The newer versions of ColdFusion (I believe CF 8 and 9) allow you to create structs with object literal notation similar to JSON. My question is: are there specific benefits (execution efficiency, maybe) to using object literal notation over individual assignments for data that is essentially static? For example, with individual assignments you would do something like this:

        var user = {};
        user.Fname = "MyFirstname";
        user.Lname = "MyLastName";
        user.titles = [];
        ArrayAppend(user.titles, 'Mr');
        ArrayAppend(user.titles, 'Dr.');

    whereas with object literals you would do something like:

        var user = {Fname = "MyFirstname", Lname = "MyLastName", titles = ['Mr', 'Dr']};

    Now, this limited example is admittedly simple, but if titles were an array of structures (say, an array of addresses), the literal notation becomes awkward to work with.
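
    Purely to illustrate that last point (CF9 script syntax; the field names are made up), a nested literal with an array of address structs looks like this, which is where opinions on readability tend to diverge:

        <cfscript>
            user = {
                Fname = "MyFirstname",
                Lname = "MyLastName",
                titles = [ "Mr", "Dr" ],
                addresses = [
                    { street = "1 Main St", city = "Springfield" },
                    { street = "2 Oak Ave", city = "Shelbyville" }
                ]
            };
        </cfscript>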

    Read the article

  • Staging database good practices

    - by Tom
    Hi, I'm about to deploy a fairly complex site to production and for the first time need a staging environment where I can test things in a more realistic setting, especially with regard to some external services that cannot be run locally. My general plan is to develop and test locally first, push simple changes (small bug fixes, HTML/CSS, JS, etc.) directly to production, and, for larger changes, push first to a staging subdomain for thorough testing and then to production. I don't think I need to keep the staging and production databases in sync (occasional manual updates would do), but I'm wondering if there are any general good practices for maintaining a staging environment in relation to a production environment, especially when it comes to databases. Any general thoughts, advice, or experience would be appreciated.

    Read the article

  • Web Services vs Persistent Sockets

    - by dsquires
    I plan on doing a little benchmarking around this question myself, but I thought it would be good to get some initial feedback from "the community". Has anyone out there done any analysis of the pros and cons of these two technologies? My thoughts: opening and closing TCP/IP connections for web service calls is relatively expensive compared to persistent connections; dealing with intermittent connection errors, state, etc. would be easier with a web-service-based framework; and you don't see World of Warcraft using web services. One question I can't seem to find much of an answer for anywhere (even on here): what are the limits on the number of persistent connections a single network card can support?

    Read the article

  • How to retrieve .properties?

    - by user1014523
    I'm developing a desktop Java application using Maven. I have a *.properties file that I need to retrieve during execution (src/resources/application.properties). The only thing that comes to mind is:

        private Properties applicationProperties = new Properties();

        applicationProperties.load(new BufferedInputStream(
                new FileInputStream("src/resources/application.properties")));

    This works if I run the application directly from the IDE. I want to keep the output hierarchy clean, so I set Maven to copy the resources folder directly into the target folder (which is the base directory of the output application). This way the application.properties file won't load (since I have target/resources/application.properties but not target/src/resources/application.properties). What is the best way to manage resources so that they work both when I debug from the IDE and when I run the built jar file directly?
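
    The conventional Maven answer is to put the file under src/main/resources (so Maven copies it onto the classpath, into target/classes and into the jar) and load it from the classpath instead of from a file-system path. A small sketch (the wrapper class name is made up; it assumes the file ends up at the classpath root as application.properties):

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public final class AppConfig {

            // Works both from the IDE and from the built jar, because the file is
            // loaded from the classpath rather than from a source-tree path.
            public static Properties load() throws IOException {
                Properties props = new Properties();
                try (InputStream in = AppConfig.class.getResourceAsStream("/application.properties")) {
                    if (in == null) {
                        throw new IOException("application.properties not found on classpath");
                    }
                    props.load(in);
                }
                return props;
            }
        }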

    Read the article

  • How to set WCF threads to schedule differently

    - by Gilad
    Hi, I'm running a Windows service that has two main responsibilities: execute/handle exposed web methods, and run inner processes that consume a lot of CPU. The problem is that when I execute many inner processes (as tasks) queued to the thread pool or task pool, the execution of the web methods takes much longer, because WCF also queues its work on the same thread pool. This happens even when I set the inner-process tasks to the lowest priority and the web-method threads to the highest. I hoped that Framework 4.0 would improve this, and it has, but it still takes quite a lot of time for the system to handle the WCF-queued work when the CPU is busy with the other inner tasks. Is it possible to change the thread pool that WCF uses to a different one? Is it possible to manually change the task queues (global task queue, local task queues)? Is it possible to manually manage two task queues that behave differently? Any help on the subject would be appreciated. Gilad.
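
    One technique that may help, sketched under the assumption that the inner processes are plain delegates (DoHeavyWork below is a placeholder): schedule the CPU-heavy work as LongRunning tasks, which the default scheduler runs on dedicated threads instead of ThreadPool threads, so WCF's dispatch work isn't queued behind it.

        using System.Threading;
        using System.Threading.Tasks;

        static class InnerWork
        {
            // Placeholder for the real CPU-bound inner process.
            static void DoHeavyWork() { /* ... */ }

            // LongRunning hints the default scheduler to give this work its own
            // thread rather than a ThreadPool thread shared with WCF requests.
            public static Task Start()
            {
                return Task.Factory.StartNew(
                    DoHeavyWork,
                    CancellationToken.None,
                    TaskCreationOptions.LongRunning,
                    TaskScheduler.Default);
            }
        }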

    Read the article

  • Subversion on shared hosting

    - by voidnothings
    Hi! I've signed up for a shared hosting plan at Bluehost and tried installing Subversion by following these instructions: bluehost forum svn install script. All goes well at first; I even tried svnadmin create project_name and it succeeded, but then, when I cd into project_name and run svn info, I get the error "svn: '.' is not a working copy". I think I may have hit an error during the compilation process; I can't remember exactly, but I think it has something to do with a ".so" file when I run the make && make install command, something about permissions. Any help or suggestion is very much appreciated. Thanks!

    Read the article

  • Birt 2.5.2 report generates empty table data when run from a cron job

    - by Trueblood
    I've got a shell script that runs genReport.sh in order to create a .pdf formatted report, and it works perfectly when run from the command line. The data source for the report is a ClearQuest database. When it's run from a cron job, the .pdf file is created, except that only the various report and column headers are displayed and the data of the report is missing. No errors are reported to STDERR during the execution of the script. This screams "environment variable" to me. Currently, the shell script defines the following: CQ_HOME, BIRT_HOME, ODBCINI, ODBCINST, and LD_LIBRARY_PATH. If it's an environment issue, what part of the environment am I missing?
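
    A sketch of one way to narrow this down (every path below is a placeholder, not taken from the original setup): wrap the cron job so it either sources the same profile as the interactive shell or exports each variable explicitly, since cron starts jobs with an almost empty environment (little more than a bare PATH and HOME).

        #!/bin/sh
        # Wrapper invoked from crontab instead of calling genReport.sh directly.
        . /home/builder/.profile            # or set each variable explicitly:
        export CQ_HOME=/opt/rational/clearquest
        export BIRT_HOME=/opt/birt-runtime-2_5_2
        export ODBCINI=/etc/odbc.ini
        export ODBCINST=/etc/odbcinst.ini
        export LD_LIBRARY_PATH="$CQ_HOME/lib:$LD_LIBRARY_PATH"

        /opt/reports/genReport.sh "$@"

    Comparing the output of env captured inside the cron job against env from an interactive shell is also a quick way to spot what differs.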

    Read the article

  • dotConnect LINQ to MySQL Issue

    - by Saravanan I M
    I am using dotConnect LINQ to MySQL and I have the following error; what would be the cause of this issue?

        Cannot convert parameter value of type 'System.String' to MySQL type 'MySqlType.TimeStamp'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.InvalidCastException: Cannot convert parameter value of type 'System.String' to MySQL type 'MySqlType.TimeStamp'.

        Source Error:
        Line 93: {
        Line 94:     string loginLowered = login.ToLower();
        Line 95:     return context.ISVs.Where(u => u.Email == loginLowered).SingleOrDefault() == null;
        Line 96:
        Line 97: }

    Read the article

  • How to create "recurData" in Google Calendar in C#/.NET?

    - by Pari
    Hi, I want to create recurring Calendar events using the Google API, and I am following the Google Calendar API docs. I don't understand how to create the "recurData"; I can't just modify the string and pass it as a parameter. I also tried DDay.iCal version 0.80 and followed the example code there (I built my file using "Example6"). I am able to create an ".ics" file, but when I pass the file content as "recurData" I get this error:

        {"Execution of request failed: http://www.google.com/calendar/feeds/[email protected]/private/full?gsessionid=AHItK5wrSIoJVawFjGt-0g"}

    My .ics file content is:

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//DDay.iCal//NONSGML ddaysoftware.com//EN
        BEGIN:VEVENT
        CREATED:20100309T132930Z
        DESCRIPTION:The event description
        DTEND:20100310T020000
        DTSTAMP:20100309T132930Z
        DTSTART:20100309T080000
        LOCATION:Event location
        SEQUENCE:0
        SUMMARY:18 hour event summary
        UID:396c6b22-277f-4496-bbe1-d3692dc1b223
        END:VEVENT
        BEGIN:VEVENT
        CREATED:20100309T132930Z
        DTEND;VALUE=DATE:20100315
        DTSTAMP:20100309T132930Z
        DTSTART;VALUE=DATE:20100314
        SEQUENCE:0
        SUMMARY:All-day event
        UID:ac25cdaf-4e95-49ad-a770-f04f3afc1a2f
        END:VEVENT
        END:VCALENDAR
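
    For comparison, the pattern I've seen for the .NET GData client (treat the property names and the RRULE values here as assumptions to verify against the library version in use) passes only the bare iCal properties - DTSTART, DTEND and RRULE, with no BEGIN:VCALENDAR/VEVENT wrapper - as the recurrence data:

        EventEntry entry = new EventEntry();
        entry.Title.Text = "Weekly meeting";

        // recurData is raw iCal recurrence text, not a complete .ics document.
        string recurData =
            "DTSTART;VALUE=DATE:20100314\r\n" +
            "DTEND;VALUE=DATE:20100315\r\n" +
            "RRULE:FREQ=WEEKLY;BYDAY=SU;UNTIL=20101231\r\n";

        Recurrence recurrence = new Recurrence();
        recurrence.Value = recurData;
        entry.Recurrence = recurrence;

    So a full DDay.iCal-generated .ics file would need to be cut down to just those properties before being used as recurData.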

    Read the article

  • How to run a PowerShell script?

    - by Pekka
    Guys and gals, a really stupid question: how do I run a PowerShell script? I have a script named myscript.ps1, I have all the necessary frameworks installed, and I set that execution-policy thing. I have followed the instructions on this MSDN help page and am trying to run it like so (with or without -NoExit):

        powershell.exe 'C:\my_path\yada_yada\run_import_script.ps1'

    which returns exactly nothing, except that the file name is output. No error, no message, nothing. Oh, and when I add -NoExit, the same thing happens, but I remain within PowerShell and have to exit manually. The .ps1 file is supposed to run a program and return the error level depending on that program's output, but I'm quite sure I'm not even getting that far. What am I doing wrong?
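
    For the record, a sketch of the two invocations that usually work (same script path as above): passing a quoted path as a bare argument makes PowerShell evaluate it as a string expression and simply echo it, which matches the "outputs the file name and nothing else" symptom.

        rem From cmd.exe, use the call operator ...
        powershell.exe -NoProfile -Command "& 'C:\my_path\yada_yada\run_import_script.ps1'"

        rem ... or, on PowerShell 2.0 and later, the -File parameter:
        powershell.exe -NoProfile -File "C:\my_path\yada_yada\run_import_script.ps1"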

    Read the article

  • Obtain Subversion 1.4 executables for Windows

    - by dcw
    I need to work with an old repository whose db/format contains:

        2

    From this question, I understand that this means it's a version 1.4 repository. I have only 1.5 executables, which produce a db/format containing:

        3
        layout sharded 1000

    I've tried to use the 1.5 svnadmin to verify the 1.4 repo, but it fails with:

        svnadmin: Unknown FS type 'fsfs'

    If anyone has any ideas on how to fix this, that'd be great. My current working plan is to obtain 1.4 executables. The problem I have is that I've done a fair amount of searching and can't find any Subversion 1.4 executable downloads. Any help would be hugely appreciated.

    Read the article

  • Git under Windows: MSYS or Cygwin?

    - by Joce
    I plan to migrate my projects over to Git, and I'm currently wondering which is the best and/or most stable option under Windows. From what I gather, I basically have 2.5 options:

        1. MSYSgit
        2. Git under Cygwin
        2.5. MSYSgit from a Cygwin prompt (given that Cygwin git is already installed)

    Note: IMO Cygwin in itself is a big plus, as it gives you access to pretty much all the *nix command-line tools, whereas with the MSYSgit bash you only get a rather small subset of them. Given that, which option would you suggest?

    Read the article
