Search Results

Search found 22041 results on 882 pages for 'kill process'.


  • What causes "java.lang.IncompatibleClassChangeError: vtable stub"?

    - by JimN
    What causes "java.lang.IncompatibleClassChangeError: vtable stub"? In our application, we have seen this error pop up randomly and very seldom (just twice so far, and we run it a lot). It is not readily reproducible, even when restarting the app with the same JVM and jars, without rebuilding. As for our build process, we clean all classes/jars and rebuild them, so it's not the same problem others have encountered where they changed one class and didn't recompile some other dependent classes. This is unlike some of the other questions related to IncompatibleClassChangeError -- none of them mention "vtable stub". In fact, there are surprisingly few Google results when searching for "IncompatibleClassChangeError "vtable stub"".

    Read the article

  • Drupal Views pulling Data Fields

    - by askon
    I'm a little new to Drupal but have been using things like the Devel module and Theme Developer to speed up the learning process. My question: is it possible to theme an entire Views BLOCK from a single views tpl.php page, OR even from a preprocess function? When I grab the $view object I can see the results in $view->result; it has all of the results, but it doesn't have all my Views fields. I'm missing things like the node path, taxonomy titles and paths, etc. From my understanding, Drupal wants you to individually theme EACH output field. It seems rather superfluous to create so many extra templates when I've already got over HALF of my results coming through the $view object. Would outputting the node instead of individual fields make this easier? Or am I going in the wrong direction with $view->result? Thanks!

    Read the article

  • How to trace a raw (character) device stream on Unix?

    - by Fabien
    I'm trying to trace what is transiting through a raw (character) device on a Unix system (e.g. /dev/tty.baseband) for debugging purposes. I am thinking of creating a daemon that would, upon start:
      - rename /dev/tty.baseband to /dev/tty.baseband.old
      - create a new raw node /dev/tty.baseband
      - spawn two threads:
          Thread 1: reads /dev/tty.baseband.old and writes into /dev/tty.baseband
          Thread 2: reads /dev/tty.baseband and writes into /dev/tty.baseband.old
    This would work a little bit like a MITM process. I wonder whether there is a 'standard' way to do this.
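    A user-space way to prototype the two-thread copy loop described above is to back the replacement node with a pseudo-terminal. The sketch below is only an illustration (the renaming of the real device, pointing /dev/tty.baseband at the pty slave, and permissions are assumed to be handled outside the script):

      #!/usr/bin/env python
      # Sketch: sit between the real device (already renamed to *.old) and a pty,
      # copying bytes in both directions and logging them. Clients open the pty
      # slave instead of the original device node.
      import os, pty, threading

      REAL = "/dev/tty.baseband.old"          # assumed: the renamed real device
      LOG = open("/tmp/baseband.trace", "a")

      def pump(src, dst, tag):
          while True:
              data = os.read(src, 4096)
              if not data:
                  break
              os.write(dst, data)
              LOG.write(tag + repr(data) + "\n")
              LOG.flush()

      master, slave = pty.openpty()
      print("point /dev/tty.baseband at: " + os.ttyname(slave))

      real = os.open(REAL, os.O_RDWR)
      t1 = threading.Thread(target=pump, args=(real, master, ">> "))
      t2 = threading.Thread(target=pump, args=(master, real, "<< "))
      t1.start(); t2.start()
      t1.join(); t2.join()

    For quick captures, existing tools (socat between the real device and a pty, or strace/dtruss on the process that opens the device) can achieve much the same with less code.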

    Read the article

  • Memory in Eclipse

    - by user247866
    I'm getting a java.lang.OutOfMemoryError in Eclipse. I know that Eclipse uses a default heap size of 256M. I'm trying to increase it, but nothing happens. For example:
      eclipse -vmargs -Xmx16g -XX:PermSize=2g -XX:MaxPermSize=2g
    I also tried different settings: using only the -Xmx option, different cases of g, G, m, M, different memory sizes, but nothing helps. No matter which parameters I specify, the heap exception is thrown at the same point, so I assume I'm doing something wrong and Eclipse is ignoring the -Xmx parameter. I'm using a machine with 32 GB of RAM and trying to execute something very simple, such as:
      double[][] a = new double[15000][15000];
    It only works when I reduce the array size to something around 10000 by 10000. I'm working on Linux, and with the top command I can see how much memory the Java process is consuming; it's less than 2%. Thanks!
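    For scale (an estimate, not from the original post): a double[15000][15000] needs roughly 15000 × 15000 × 8 bytes ≈ 1.7 GiB for the element data alone, plus the overhead of 15,000 row arrays, so it cannot fit in a 256 MB heap, while 10000 × 10000 is about 0.75 GiB. Note also that -Xmx passed after -vmargs on the eclipse command line sizes the heap of the IDE's own JVM; a program started from a Run Configuration takes its heap from that configuration's VM arguments instead.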

    Read the article

  • (fluxus) learning curve

    - by Inaimathi
    I'm trying to have some fun with fluxus, but its manual and online docs all seem to assume that the reader is already an expert network programmer who's never heard of Scheme before. Consequently, you get passages that try to explain the very basics of prefix notation, but assume that you know how to pipe sound-card data into the program, or how to set up and connect to an OSC process. Is there any tutorial out there that goes the opposite way, i.e. one that assumes you already have a handle on the Lisp/Scheme thing, but need some pointers before you can properly set up sound sources or an OSC server? Barring that, does anyone know how to get (for example) the system microphone to connect to (fluxus), or how to get it to play a sound file from disk?

    Read the article

  • rubygem "Argument list too long"

    - by mehmermaid
    My problem is that during or after running a process which uses Ruby intensively, when I use any gem command, including gem --version or gem install rake, it hangs for just over a minute and then gives me this error:
      $ gem list
      /Users/username/.rvm/bin/gem: line 5: /Users/username/.rvm/bin/gem: Argument list too long
      /Users/username/.rvm/bin/gem: line 5: /Users/username/.rvm/bin/gem: Unknown error: 0
    The file at /Users/username/.rvm/bin/gem (line 5 is the exec line):
      #!/usr/bin/env bash
      if [[ -s "/Users/username/.rvm/environments/ruby-1.8.7-p334" ]] ; then
        source "/Users/username/.rvm/environments/ruby-1.8.7-p334"
        exec gem "$@"   # this is line 5
      else
        echo "ERROR: Missing RVM environment file: '/Users/username/.rvm/environments/ruby-1.8.7-p334'" >&2
        exit 1
      fi
    The only way that I have found to get this working again is to restart my computer, which is obviously undesirable. I am using OS X 10.6.5. I have spent quite a while trying to find anyone else who has had this problem, and been unsuccessful. Do you have any idea why this might be happening?
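    A hedged guess rather than a confirmed diagnosis: "Argument list too long" from exec usually means the combined size of the arguments and the environment exceeds the kernel limit, so a Ruby-heavy run that keeps growing an environment variable (PATH, RUBYOPT, GEM_PATH, etc.) in the launching shell's environment could produce exactly this. A quick check next time it happens:
      # total size of the current environment, and the largest single variable
      env | wc -c
      env | awk -F= '{ print length($0), $1 }' | sort -rn | head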

    Read the article

  • Why is javac failing on the @Override annotation?

    - by skiphoppy
    Eclipse is adding @Override annotations when I implement methods of an interface. Eclipse seems to have no problem with this. And our automated build process from Cruise Control seems to have no problem with this. But when I build from the command-line, with ant running javac, I get this error:
      [javac] C:\path\project\src\com\us\MyClass.java:70: method does not override a method from its superclass
      [javac]     @Override
      [javac]     ^
      [javac] 1 error
    Eclipse is running under Java 1.6. Cruise Control is running Java 1.5. My ant build fails regardless of which version of Java I use.

    Read the article

  • Request header is too large

    - by stck777
    I found several IllegalStateException exceptions in the logs:
      [#|2009-01-28T14:10:16.050+0100|SEVERE|sun-appserver2.1|javax.enterprise.system.container.web|_ThreadID=26;_ThreadName=httpSSLWorkerThread-80-53;_RequestID=871b8812-7bc5-4ed7-85f1-ea48f760b51e;|WEB0777: Unblocking keep-alive exception
      java.lang.IllegalStateException: PWC4662: Request header is too large
          at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:740)
          at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:657)
          at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:543)
          at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.parseRequest(DefaultProcessorTask.java:712)
          at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:577)
          at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:831)
          at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
          at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
          at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
          at com.sun.enterprise.web.portunif.PortUnificationPipeline$PUTask.doTask(PortUnificationPipeline.java:380)
          at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265)
          at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106)
      |#]
    Does anybody know what configuration changes would fix this?

    Read the article

  • Shut down Windows 8 from a Metro app

    - by MindFreak
    I'm writing a Metro app for Windows 8, and as part of its functionality I need to initiate a shutdown of Windows 8 from the Metro app. Here are my questions: 1) I researched this topic a lot and found out that System.Diagnostics.Process is not available to Metro apps. Is there another way around this? 2) Even if I can't shut down directly, is there a way to trigger it from the Metro app? I would prefer a solution in C#. Thanks.

    Read the article

  • Best design for generating code from an AST?

    - by Sam Washburn
    I'm working on a pretty complex DSL that I want to compile down into a few high-level languages. The whole process has been a learning experience. The compiler is written in Java. I was wondering if anyone knew a best practice for the design of the code generator portion. I currently have everything parsed into an abstract syntax tree. I was thinking of using a template system, but I haven't researched that direction too far yet, as I would like to hear some wisdom first from Stack Overflow. Thanks!
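    For reference, one common shape for this is a visitor per target language over the AST, so the tree itself stays generation-agnostic. A minimal sketch with made-up node types (not the asker's DSL):

      // Each AST node accepts a generic code generator (visitor).
      interface Node { <R> R accept(CodeGenerator<R> gen); }

      class NumberLiteral implements Node {
          final double value;
          NumberLiteral(double value) { this.value = value; }
          public <R> R accept(CodeGenerator<R> gen) { return gen.visit(this); }
      }

      class BinaryOp implements Node {
          final String op; final Node left, right;
          BinaryOp(String op, Node left, Node right) { this.op = op; this.left = left; this.right = right; }
          public <R> R accept(CodeGenerator<R> gen) { return gen.visit(this); }
      }

      // One implementation of this interface per target language.
      interface CodeGenerator<R> {
          R visit(NumberLiteral n);
          R visit(BinaryOp n);
      }

      // Example backend that emits JavaScript-flavoured source as strings.
      class JsGenerator implements CodeGenerator<String> {
          public String visit(NumberLiteral n) { return Double.toString(n.value); }
          public String visit(BinaryOp n) {
              return "(" + n.left.accept(this) + " " + n.op + " " + n.right.accept(this) + ")";
          }
      }

      class Demo {
          public static void main(String[] args) {
              Node ast = new BinaryOp("+", new NumberLiteral(1),
                                      new BinaryOp("*", new NumberLiteral(2), new NumberLiteral(3)));
              System.out.println(ast.accept(new JsGenerator()));   // (1.0 + (2.0 * 3.0))
          }
      }

    A template engine (StringTemplate, for instance) slots naturally into the visit methods once the emitted fragments grow beyond a line or two.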

    Read the article

  • Legality, terms of service for performing a web crawl

    - by Berlin Brown
    I was going to crawl a site for some research I was doing, but apparently the terms of service are quite clear on the topic. Is it illegal to not "follow" the terms of service? And what can the site normally do about it? Here is an example clause from the TOS (also, what about sites that don't include this particular clause?). Restrictions: "use any robot, spider, site search application, or other automated device, process or means to access, retrieve, scrape, or index the site". Does it matter that it is just research? Edit: "OK, from the standpoint of designing an efficient crawler: should I provide some form of natural-language engine to read terms of service and then abide by them?"
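    Setting the legal question aside, the only widely adopted machine-readable convention a crawler can check automatically is robots.txt, not the ToS. A small sketch (hypothetical site and user-agent) using Python's standard library:
      from urllib import robotparser   # Python 3; in Python 2 this is the robotparser module

      rp = robotparser.RobotFileParser()
      rp.set_url("https://example.com/robots.txt")
      rp.read()
      print(rp.can_fetch("MyResearchBot/0.1", "https://example.com/some/page"))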

    Read the article

  • Compact Framework: Read a SQL CE database on a PDA from a PC

    - by CF_Maintainer
    Hello, I have been tasked with upgrading a Compact Framework 1.1 suite of apps. Currently, the PC starts a server [after confirming via RAPI that the device exists and is connected] and spawns an app on the PDA as the client. The client process on the PDA talks to the db on the PDA and returns records to the PC app [using SQL CE 2.0 and OpenNETCF 1.4 for communication/IO]. I have a chance to upgrade the PC and PDA suites of apps to Framework 3.5 and CF 3.5 respectively. Due to a business requirement, I cannot get rid of the workflow requiring the PC app to show a preview of the work done on the PDA. Question: are there better ways to achieve the above in general, given the constraints I have? I would really appreciate any ideas/advice.

    Read the article

  • Processing file uploads before object is saved

    - by Dominic Rodger
    I've got a model like this:
      class Talk(BaseModel):
          title = models.CharField(max_length=200)
          mp3 = models.FileField(upload_to = u'talks/', max_length=200)
          seconds = models.IntegerField(blank = True, null = True)
    I want to validate before saving that the uploaded file is an MP3, like this:
      def is_mp3(path_to_file):
          from mutagen.mp3 import MP3
          audio = MP3(path_to_file)
          return not audio.info.sketchy
    Once I'm sure I've got an MP3, I want to save the length of the talk in the seconds attribute, like this:
      audio = MP3(path_to_file)
      self.seconds = audio.info.length
    The problem is, before saving, the uploaded file doesn't have a path (see this ticket, closed as wontfix), so I can't process the MP3. I'd like to raise a nice validation error so that ModelForms can display a helpful error ("You idiot, you didn't upload an MP3" or something). Any idea how I can go about accessing the file before it's saved? p.s. If anyone knows a better way of validating files are MP3s I'm all ears - I also want to be able to mess around with ID3 data (set the artist, album, title and probably album art, so I need it to be processable by mutagen).
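    A sketch of one way around the missing path: do the check as form-level validation and spool the upload to a temporary file so mutagen has something to open. The import path and error messages below are assumptions, not the original code:

      import os, tempfile
      from django import forms
      from mutagen.mp3 import MP3
      from myapp.models import Talk   # hypothetical app name

      class TalkForm(forms.ModelForm):
          class Meta:
              model = Talk
              fields = ("title", "mp3")

          def clean_mp3(self):
              upload = self.cleaned_data["mp3"]
              fd, path = tempfile.mkstemp(suffix=".mp3")
              try:
                  with os.fdopen(fd, "wb") as tmp:
                      for chunk in upload.chunks():      # works for in-memory and on-disk uploads
                          tmp.write(chunk)
                  try:
                      audio = MP3(path)
                  except Exception:
                      audio = None
                  if audio is None or audio.info.sketchy:
                      raise forms.ValidationError("That doesn't look like an MP3 file.")
                  # keep the length around so it can be copied onto the instance
                  self.instance.seconds = int(audio.info.length)
              finally:
                  os.remove(path)
              return upload

    ID3 editing can reuse the same temporary-file trick, or run after save() once the file has a real path.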

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem a bit wordy at first, but I hope it will make it easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable hobby of collecting as many species of wood from around the world as I can (over 2,900 so far). OK, that is the real world. Meanwhile, I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome so I could build a new database-driven home for all this research. I am still slow at it, but getting there. The original premise was to find evidence of as many species of wood in the world as I can: the more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project is.
    I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in Belgium, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren into the 'species' table where I keep the reported data would be a major advancement for the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place; they represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates. With the help of others from another forum, I may very well have finally got the proper SQL statement, but when I ran it the system said (semi-amusingly at first) that MySQL had gone away! From what I can find on the Net, one likely reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account at GoDaddy, so I cannot adjust any config file.
    For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to protect the originals from destruction or damage. Later I can rename species_master back to species.sql once I am happy everything worked. The species table has about 18 columns, but only 5 of them match the columns in the Tervuren file (name for name, and collation as well). The rest of the columns are just along for the ride, so to speak. The common key in both is the 'species_name' column. I am not sure it is proper to call one a primary key and the other a foreign key, since there is really no relational connection between them: one is just more data for the other, and the Tervuren table can disappear afterwards, never to be referred to by the working code in the application. I have been surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names, plus others) without generating NEW duplicates.
    Watch out for thinking that a SELECT DISTINCT statement will do the job, because absolutely NO records in the species table may be destroyed in the process, and there is no way (that I know of) to tell DISTINCT this. Yes, the original 'species' table has duplicates in it even before all this, but trust me, those have to be removed the long hard way, manually, record by record, or I will lose precious information. The important thing is to make sure no NEW duplicates form when bringing the new names in tervuren_target.species_name into species.species_name. I am hoping that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I have to turn to a PHP-plus-SQL method? Or would I have to break the Tervuren file up into a few smaller ones and run them independently (I hope not)? So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it seems on the surface.
    By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end, and a direct Ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be the equivalent of (ahem) lighting a rocket under my project, especially compared to doing this record by record manually! This is my first time in this forum, so I do not know how I will receive replies. Do I have to come back here periodically, or are replies emailed out as well? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Many thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
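    For reference, the usual shape of a duplicate-safe append in MySQL is an INSERT ... SELECT with an anti-join, sketched here with only the name column shown (the other four matching columns would be added to both column lists; table names follow the working copies described above):
      INSERT INTO species_master (species_name)
      SELECT t.species_name
      FROM tervuren_target AS t
      LEFT JOIN species_master AS s ON s.species_name = t.species_name
      WHERE s.species_name IS NULL;
    If the shared host kills the statement before it finishes, the same query can be run in slices by adding something like LIMIT 1000 to the SELECT and repeating it until it inserts zero rows; each pass excludes the names inserted by the previous one, so no new duplicates are created.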

    Read the article

  • IE 8 Caching Problem

    - by Jeff Catania
    One of my javascript sources had an extra comma that was throwing an error in IE8. So I opened up my editor, deleted the comma, and saved. I reloaded IE8, but it was still pulling the old js file. I deleted everything in "Delete Browsing History...", and restarted the browser. It is still pulling the old file. I even set up a log on my server to show whenever the js file was requested. When reloading with IE, the js file is never requested. I tried doing the same process in Chrome and FF, and it pulled the new file and logged properly on the server. Is there some other cache that I am failing to clear in IE that would cause this problem?

    Read the article

  • Beginner question: What is binding?

    - by JDelage
    Hi, I was trying to understand the difference between early and late binding, and in the process realized that the concept of binding is nebulous to me. I think I understand that it relates to the way data-as-a-word-of-memory is linked to type-as-a-set-of-language-features but I am not sure those are the right concepts. Also, how does understanding this deeply help people become better programmers? Please note: This question is not "what is late v. early binding" or "what are the trade-offs between the 2". Those already exist here. Thanks, JDelage
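    A small illustration of the distinction (Java, made up for this summary, not from the question): which greet overload is called is fixed at compile time from the variable's static type (early binding), while which describe body runs is chosen at run time from the object's dynamic type (late binding).
      class Animal { String describe() { return "some animal"; } }
      class Dog extends Animal { @Override String describe() { return "a dog"; } }

      public class BindingDemo {
          static String greet(Animal a) { return "hello, " + a.describe(); }
          static String greet(Dog d)    { return "good dog, " + d.describe(); }

          public static void main(String[] args) {
              Animal pet = new Dog();
              // Overload resolution happens early, against the static type Animal;
              // the describe() call inside is dispatched late, to Dog.describe().
              System.out.println(greet(pet));   // prints "hello, a dog"
          }
      }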

    Read the article

  • JavaScript Metaprogramming: Reduce boilerplate of adding functions to a function queue

    - by thurn
    I'm working with animation in JavaScript, and I have a bunch of functions you can call to add things to the animation queue. Basically, all of these functions look like this:
      function foo(arg1, arg2) {
          _eventQueue.push(function() {
              // actual logic
          });
      }
    I'm wondering now if it would be possible to cut down on this boilerplate a little bit, so I don't need that extra _eventQueue line in the function body dozens of times. Would it be possible, for example, to make a helper function which takes an arbitrary function as an argument and returns a new function which is augmented to be automatically added to the event queue? The only problem is that I need to find a way to maintain access to the function's original arguments in this process, which is... complicated.
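    A sketch of one such helper (plain ES5, assuming _eventQueue is in scope): a closure over arguments is what carries the original call's parameters to the queued function.
      function queued(fn) {
          return function () {
              var self = this;
              var args = arguments;            // captured by the closure
              _eventQueue.push(function () {
                  fn.apply(self, args);        // replay the original call later
              });
          };
      }

      // usage: write only the actual logic, export the queued wrapper
      var foo = queued(function (arg1, arg2) {
          // actual logic
      });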

    Read the article

  • recommendations for rich scalable internet application

    - by Wouter Roux
    I am planning to develop a web application that must perform the following actions:
      - user authentication
      - allow an authenticated user to download data from a USB device (roughly 5 MB of data)
      - upload the data from the USB device to a processing server
      - process the data and display the results to the user
    Further requirements/restrictions:
      - the USB driver supports Windows (2000, XP, Vista, 7)
      - the application must support IE, Firefox and Chrome
      - the USB driver must be installed when pointing to the web application the first time
      - the USB driver exposes functionality through exported functions in a DLL
      - scalability and eye candy are important
    Server details: Windows Server 2008 Enterprise x64, IIS 7.0, SQL Server 2008. With the limited details specified: what technology would you recommend (e.g. Silverlight / ASP.NET MVC / WCF)? What practices/patterns/third-party controls would you recommend?

    Read the article

  • CodeIgniter based e-shop, shipping and gift address design problem

    - by alexander
    While building an ecommerce platform I have run into design problems. I'm working with CodeIgniter's built-in cart class, which stores all the cart information in the session. Let's say the cart has already been filled with products and the user clicks checkout. When should I store the order in the database? Right after that click, or after several steps of gathering information and storing it in the session? How do I deal with additional features like different shipping methods? Should I add them to the basket first and put the additional data (gift address) in the session? I don't want to store it in the database yet, because a relation between the gift address and the order is needed and I don't know the ID of the order at that point. I'm puzzled :) Additionally, I think it's crucial to keep the cart aware of shipping methods and additional purchased services (selecting a gift address carries an extra fee), because the cart contents are essentially the receipt. In brief, what is the best practice for processing checkout?
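    One common pattern (a sketch only; the library calls are stock CodeIgniter, but the table and field names are made up) is to keep the shipping method and gift address in the session next to the cart, and only create the order row, plus any rows that need its ID, in the final "place order" step:
      // during the checkout steps: stash the extras in the session
      $this->session->set_userdata('checkout', array(
          'shipping_method' => $this->input->post('shipping_method'),
          'gift_address'    => $this->input->post('gift_address'),
      ));

      // final "place order" step: the order ID now exists, so related rows can be written
      $checkout = $this->session->userdata('checkout');
      $this->db->insert('orders', array('total' => $this->cart->total()));
      $order_id = $this->db->insert_id();

      if (!empty($checkout['gift_address'])) {
          $this->db->insert('gift_addresses', array(
              'order_id' => $order_id,
              'address'  => $checkout['gift_address'],
          ));
      }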

    Read the article

  • Using Interfaces in action signature

    - by Dmitry Borovsky
    Hello, I want to use an interface in an action signature, so I've tried making my own ModelBinder by deriving from DefaultModelBinder:
      public class InterfaceBinder<T> : DefaultModelBinder where T : new()
      {
          protected override object CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Type modelType)
          {
              return base.CreateModel(controllerContext, bindingContext, typeof(T));
          }
      }

      public interface IFoo
      {
          string Data { get; set; }
      }

      public class Foo : IFoo /* other interfaces */
      {
          /* a lot of */
          public string Data { get; set; }
      }

      public class MegaController : Controller
      {
          public ActionResult Process([ModelBinder(typeof(InterfaceBinder))] IFoo foo) { /* bla-bla-bla */ }
      }
    But it doesn't work. Does anybody have an idea how to achieve this behaviour? And yes, I know that I can write my own implementation of IModelBinder, but I'm looking for an easier way.

    Read the article

  • Potential for SQL injection here?

    - by Matt Greer
    This may be a really dumb question, but I figure why not... I am using RIA Services with Entity Framework as the back end. I have some places in my app where I accept user input and directly ask RIA Services (and in turn EF, and in turn my database) questions using that data. Do any of these layers help prevent security issues, or should I scrub my data myself? For example, whenever a new user registers with the app, I call this method:
      [Query]
      public IEnumerable<EmailVerificationResult> VerifyUserWithEmailToken(string token)
      {
          using (UserService userService = new UserService())
          {
              // token came straight from the user; am I in trouble here passing it directly into
              // my DomainService? Should I verify the data here (or in UserService)?
              User user = userService.GetUserByEmailVerificationToken(token);
              ...
          }
      }
    (Whether I should be rolling my own user verification system is another issue altogether; we are in the process of adopting MS's membership framework. I'm more interested in SQL injection and RIA Services in general.)

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe) and the second executable first checks whether such a file exists; if it does not, it checks again, and when it finds the file it goes on to read its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable is successful on the first try itself. Now I have to clean this code up, and I was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than using pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.

    Read the article

  • Opening read-only OLEDB connection to MS Access back-end database while allowing updates via separat

    - by djdilicious
    I have a back-end MS Access 2002-2003 database which stores blog entries. I created a separate front-end database with the forms for entering blog posts into the backend database. Finally, I have a website utilizing ASP to display the blog entries. The website connects directly to the backend database using an OLEDB connection object. Whenever I open the form for creating a new post in MS Access, loading the blog post page on the website displays the error: Could not use "; file already in use. I would like to be able to display the older blog posts even though the newest one is in the process of being added.

    Read the article

  • ajax response redirect problem

    - by zurna
    When my member registration form is correctly filled in and submitted, the server responds with a redirect link, but my Ajax does not redirect the website. I do not receive any errors; where is my mistake?
      <script type="text/javascript">
      $(document).ready(function() {
          $("[name='submit']").click(function() {
              $.ajax({
                  type: "POST",
                  data: $(".form-signup").serialize(),
                  url: "http://www.refinethetaste.com/FLPM/content/myaccount/signup.cs.asp?Process=Add2Member",
                  success: function(output) {
                      if (output.Redirect) {
                          window.location.href = output.Redirect;
                      } else {
                          $('.sysMsg').html(output);
                      }
                  },
                  error: function(output) {
                      $('.sysMsg').html(output);
                  }
              });
          });
      });
      </script>
    ASP code:
      If Session("LastVisitedURL") <> "" Then
          Response.Redirect Session("LastVisitedURL")
      Else
          Response.Redirect "?Section=myaccount&SubSection=myaccount"
      End If
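    For context (a sketch, not the poster's code): an XMLHttpRequest follows a Response.Redirect transparently, so the success callback receives the redirected page's HTML as a plain string and output.Redirect is always undefined. One way to make the check above work is to have the ASP handler return JSON instead, and add dataType: "json" to the $.ajax options:
      Response.ContentType = "application/json"
      If Session("LastVisitedURL") <> "" Then
          Response.Write "{""Redirect"": """ & Session("LastVisitedURL") & """}"
      Else
          Response.Write "{""Redirect"": ""?Section=myaccount&SubSection=myaccount""}"
      End If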

    Read the article

  • How to free virtual memory?

    - by Mehdi Amrollahi
    I have a crawler application (in C#) that downloads pages from the web. The application keeps taking more virtual memory, even though I dispose every object and even call GC.Collect(). It has 10 threads, and each thread has a socket that downloads pages. I use the Dispose method and even call GC.Collect() in my application, but within 3 hours it takes 500 MB of virtual memory (500 MB of private bytes in Process Explorer). Then my system hangs and I have to restart my PC. Is there any way I can free the virtual memory? Thanks.

    Read the article
