Search Results

Search found 6947 results on 278 pages for 'annotation processing'.

  • Should Java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion.

    Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear exactly where the exceptions come from just by looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

    What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }
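    For comparison, a minimal sketch of a third variant using try-with-resources (an assumption: this requires Java 7 or later), which keeps the simple line-reading idiom while still guaranteeing the reader is closed:

        // Sketch only, assuming Java 7+: the try-with-resources statement
        // closes the reader automatically, so the whole loop can live in
        // one small try block without mangling the idiom.
        void processFile(File f) {
            try (BufferedReader in = new BufferedReader(new FileReader(f))) {
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex); // construction of FileReader failed
            } catch (IOException ex) {
                handle(ex); // readLine (or close) failed
            }
        }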

  • Pump Messages During Long Operations + C# (it is urgent)

    - by Newbie
    Hi, I have a web service that does a huge computation and takes more than a minute. I have generated the proxy file of the web service, and from my client end I am using the DLL (of course, I generated the proxy DLL). My client-side code is:

        TimeSeries3D t = new TimeSeries3D();
        int portfolioId = 4387919;
        string[] str = new string[2];
        str[0] = "MKT_CAP";
        DateRange dr = new DateRange();
        dr.mStartDate = DateTime.Today;
        dr.mEndDate = DateTime.Today;
        Service1 sc = new Service1();
        t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);

    But since it takes too much time for the server to compute, after one minute I receive this error message:

        The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.

    Kindly guide me on what to do. It is very urgent. Thanks

  • MySQL query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query?

    I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of
            // the script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
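    For reference, the read-then-update sequence above looks like a classic check-then-act race rather than database lag: two workers can both read the same currentItemId before either writes. One common fix is to claim each item atomically and let the database arbitrate. A minimal sketch of that idea, written here in Java/JDBC rather than PHP (the items table and its status column come from the question; everything else is an assumption):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class ItemClaimer {
            // Returns true only if this worker won the race for the item:
            // the UPDATE flips status atomically, so exactly one of the
            // parallel workers sees an affected-row count of 1.
            static boolean claim(Connection db, long itemId) throws SQLException {
                String sql = "UPDATE items SET status='processing' "
                           + "WHERE id=? AND status='pending'";
                try (PreparedStatement ps = db.prepareStatement(sql)) {
                    ps.setLong(1, itemId);
                    return ps.executeUpdate() == 1;
                }
            }
        }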

  • Rails uploaded file blank

    - by Ceilingfish
    Hi chaps, I'm trying to upload a file to the server using an HTTP multipart form in Rails, and for some reason it's turning up blank at the other end. I can see it being received in the Rails log thusly:

        Processing Admin::HeadlinesController#update (for 127.0.0.1 at 2010-03-08 12:26:13) [PUT]
          Parameters: {"commit"=>"Save changes", "action"=>"update", "_method"=>"put", "authenticity_token"=>"mK70XRk5gOPUwXOcNboT/4K8PD9RBM7GqCOlEUKZwcA=", "headline"=>{"position"=>"1", "location"=>"primary", "attachment_id"=>"13", "headline_content"=>"questionnaires", "article_id"=>"3", "image"=>#<File:/tmp/RackMultipart20100308-63211-1vym9nj-0>}, "id"=>"140", "controller"=>"admin/headlines"}

    But if I have a look in /tmp/RackMultipart20100308-63211-1vym9nj-0, the file is blank. Am I right in thinking that this should be the file that I uploaded?

    I'm running Phusion Passenger 2.2.7 on Apache 2.2.13, with Ruby 1.8.7 and Rails 2.3.5, on OS X 10.6.2.

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens)
        {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is:

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t,
                        const std::string& s,
                        std::ios_base& (*f)(std::ios_base&) = std::dec)
        {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>        return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>        ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.

  • Offline Mapping API

    - by Aaron M
    Are there any services available that allow me to manipulate maps in an offline setting? I am working on a project that requires me to take a map and, based on features of the map, generate a game world. I have looked at a few of the APIs from different providers: Google, MS, etc. The APIs I looked at seem to be strictly for showing a user a map.

    I am looking for something that allows me to create a derivative of a map (the gameworld) that will never be seen by the public and is only used by the game engine. However, one caveat is that I would like to be able to link the derivative created for use by the game engine with something I can show the user.

    As an example, think of a cross-country racing sim. Users cannot control the vehicles directly in this game; they can only control the car's setup, driver, etc. I create a gameworld from a map. The gameworld data (driver position, etc.) is overlaid onto a real map. A race might last several days. The only interaction users have with the real map is viewing their position on it, and where they are in relation to the others.

    I don't want to violate the terms of the API here. I read Google's API TOS, and it seems to me that creating the gameworld would violate their TOS. The features I really need are the following:

    - The ability to locate a specific place on the map by lat/long
    - The ability/rights to grab those maps and save them as an image file temporarily for processing
    - The ability/rights to store a gameworld that is based on the real map
    - The ability to show a user a map with an overlay (this is optional; I can use Google's API, or any other one that supports lat/long)

  • Point data structure for a sketching application

    - by bebraw
    I am currently developing a little sketching application based on the HTML5 Canvas element. There is one particular problem I haven't yet managed to find a proper solution for. The idea is that the user will be able to manipulate existing stroke data (points) quite freely. This includes pushing point data around (i.e. a magnet tool) and manipulating it at whim otherwise (i.e. altering color). Note that the current brush engine is able to shade by taking existing stroke data into account. It's a quick and dirty solution, as it just iterates the points in the current stroke and checks them against a distance rule.

    Now the problem is how to do this in a nice manner. It is extremely important to be able to perform efficient queries that return all points within a given canvas coordinate and radius. Other features, such as space usage, should be secondary to this. I don't mind doing some extra processing between strokes while the user is not painting. Any pointers are welcome. :)
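    One structure that handles exactly this kind of fixed-radius query is a uniform grid (spatial hash): bucket the points by cell, then scan only the cells the query circle can touch. A minimal sketch of the idea, in Java for concreteness rather than the JavaScript the app would actually use, with an assumed cell size on the order of the typical query radius:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch only: points are {x, y} pairs bucketed by grid cell.
        class PointGrid {
            private final double cell;
            private final Map<Long, List<double[]>> buckets = new HashMap<>();

            PointGrid(double cellSize) { this.cell = cellSize; }

            private static long key(long cx, long cy) {
                // Pack the two cell coordinates into one map key.
                return (cx << 32) ^ (cy & 0xffffffffL);
            }

            private long cellOf(double v) { return (long) Math.floor(v / cell); }

            void add(double x, double y) {
                buckets.computeIfAbsent(key(cellOf(x), cellOf(y)),
                        k -> new ArrayList<>()).add(new double[] { x, y });
            }

            // All points within radius r of (x, y): visit only candidate
            // cells, then do an exact squared-distance check per point.
            List<double[]> near(double x, double y, double r) {
                List<double[]> hits = new ArrayList<>();
                for (long cx = cellOf(x - r); cx <= cellOf(x + r); cx++) {
                    for (long cy = cellOf(y - r); cy <= cellOf(y + r); cy++) {
                        List<double[]> bucket = buckets.get(key(cx, cy));
                        if (bucket == null) continue;
                        for (double[] p : bucket) {
                            double dx = p[0] - x, dy = p[1] - y;
                            if (dx * dx + dy * dy <= r * r) hits.add(p);
                        }
                    }
                }
                return hits;
            }
        }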

  • Hierarchy of meaning

    - by asldkncvas
    I am looking for a method to build a hierarchy of words.

    Background: I am an "amateur" natural language processing enthusiast, and right now one of the problems that I am interested in is determining the hierarchy of word semantics from a group of words. For example, if I have a set which contains a "super" representation of the others, i.e.

        [cat, dog, monkey, animal, bird, ...]

    I am interested in using any technique which would allow me to extract the word 'animal', which is the most meaningful and accurate representation of the other words inside this set. Note: they are NOT the same in meaning. cat != dog != monkey != animal, BUT cat is a subset of animal and dog is a subset of animal.

    I know by now a lot of you will be telling me to use WordNet. Well, I will try to, but I am actually interested in a very domain-specific area to which WordNet doesn't apply, because: 1) most words are not found in WordNet; 2) all the words are in another language (translation is possible but of limited effect).

    Another example would be:

        [noise reduction, focal length, flash, functionality, ...]

    so 'functionality' includes everything in this set.

    I have also tried crawling Wikipedia pages and applying some techniques based on tf-idf etc., but Wikipedia pages don't really do much either. Can someone possibly enlighten me as to what direction my research should go in? (I could use anything.)

  • iPhone TabbarController Switch Transition

    - by user269737
    I've implemented gestures (touchesBegan/Moved/Ended) in order to allow swiping through my tabs. It works. I'd like to add a slide-from-left and slide-from-right transition. It would be best if it could be part of the gesture if-statement which tells me whether the swipe is towards the right or left. Since I determine which tab is displayed from that, I could show that specific transition along with the new tab.

    So my question is this: what's the simplest way to implement a slide transition at a specific instance? I don't want it to apply to the whole tab bar controller, since this is specifically for the swiping. Thanks for the help, much appreciated. For clarification purposes, this snippet shows how I'm switching tabs:

        if (abs(diffx / diffy) > 2.5 && abs(diffx) > HORIZ_SWIPE_DRAG_MIN) {
            // It appears to be a swipe.
            if (isProcessingListMove) {
                // ignore move, we're currently processing the swipe
                return;
            }
            if (mystartTouchPosition.x < currentTouchPosition.x) {
                isProcessingListMove = YES;
                self.tabBarController.selectedViewController =
                    [self.tabBarController.viewControllers objectAtIndex:0];
                return;
            } else {
                isProcessingListMove = YES;
                self.tabBarController.selectedViewController =
                    [self.tabBarController.viewControllers objectAtIndex:1];
                return;
            }
        }

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered sitting in our secondary mail server. Their link to the main server had been down for two days (still is) and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into a separate file, tarred it up and sent it off. Horrible code, I know, but it was urgent.

    My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use?

    Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd="mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject=`cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.

  • Twitter API - no more than 150 requests per hour...

    - by RenegadeAndy
    Hi. I am writing a Twitter app using jtwitter, and it's running on a server inside my workplace. Whenever I run it from work it returns the error below, even though I am only making a couple of requests per hour:

        HTTP/1.1 400 Bad Request
        {"request":"/1/statuses/user_timeline.json?count=6&id=cicsdemo&","error":"Rate limit exceeded. Clients may not make more than 150 requests per hour."}
        ]
        2010-06-03 18:44:49 zero.timer.TimerTask::run Thread-3 SEVERE [ CWPZA3100E: Exception during processing for timer task, "twitterTimer". Exception: java.lang.ClassCastException: winterwell.jtwitter.Twitter$Status incompatible with java.lang.String ]

    I run the same code from home and it's fine. So obviously at some point Twitter sees all our work traffic as coming from one direct IP, which is why it's hitting a limit it shouldn't. Do I have any choice or workaround? Can I make the limit be counted against my direct machine IP, or against my account instead of the IP? Can I use a proxy? Has anybody else had this problem and solved it? Before anyone asks: the app must live inside my work network; it cannot run anywhere else! Cheers, Andy

  • Rails 4: @user.save not saving when registering a new user

    - by Yuichi
    When I try to register a user, it does not give me any error but just cannot save the user. I don't have attr_accessible. I'm not sure what I am missing. Please help me.

    user.rb:

        class User < ActiveRecord::Base
          has_secure_password
          validates :email, presence: true, uniqueness: true,
                    format: { with: /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i }
          validates :password, presence: true, length: { minimum: 6 }
          validates :nickname, presence: true, uniqueness: true
        end

    users_controller.rb:

        class UsersController < ApplicationController
          def new
            @user = User.new
          end

          def create
            @user = User.new(user_params)
            # Not saving @user ...
            if @user.save
              flash[:success] = "Successfully registered"
              redirect_to videos_path
            else
              flash[:error] = "Cannot create an user, check the input and try again"
              render :new
            end
          end

          private

          def user_params
            params.require(:user).permit(:email, :password, :nickname)
          end
        end

    Log:

        Processing by UsersController#create as HTML
          Parameters: {"utf8"=>"✓", "authenticity_token"=>"x5OqMgarqMFj17dVSuA8tVueg1dncS3YtkCfMzMpOUE=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "nickname"=>"example"}, "commit"=>"Register"}
          (0.1ms)  begin transaction
          User Exists (0.2ms)  SELECT 1 AS one FROM "users" WHERE "users"."email" = '[email protected]' LIMIT 1
          User Exists (0.1ms)  SELECT 1 AS one FROM "users" WHERE "users"."nickname" = 'example' LIMIT 1
          (0.1ms)  rollback transaction

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack these in the most optimal way. By identical I mean having the same number of order lines containing the same SKUs and the same order quantities.

    To achieve this I was thinking about hashing each order. We can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database, and we have .NET-based systems for data loading and general order processing, so we can either do the hashing during the data loading or hand this task over to the DB.

    My question, firstly, is: should the hashing be managed by the DB, possibly using triggers, or should the hash be created on the fly using a view or something? And secondly, would it be best to calculate a hash for each order line and then combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which recalculates a single hash for the entire order and stores the value in the orders table? TIA
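    Whichever placement wins (DB trigger or .NET loader), the key step is canonicalization before hashing: sort the order lines by SKU so that line order can never change the digest. A minimal sketch of that step, in Java rather than .NET and with assumed types (an order reduced to a SKU-to-quantity map):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Map;
        import java.util.TreeMap;

        public class OrderHasher {
            // Identical orders (same SKUs, same quantities) always yield the
            // same digest because the TreeMap fixes the serialization order.
            static String hash(Map<String, Integer> skuToQty) throws Exception {
                StringBuilder canonical = new StringBuilder();
                new TreeMap<>(skuToQty).forEach((sku, qty) ->
                        canonical.append(sku).append('=').append(qty).append(';'));
                byte[] digest = MessageDigest.getInstance("SHA-256")
                        .digest(canonical.toString().getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                return hex.toString();
            }
        }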

  • XSLT: Add namespace to root element

    - by Ingrid
    I need to change namespaces in the root element as follows.

    Input document:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <foo xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
             xmlns:ns2="http://www.w3.org/1999/xlink"
             xmlns="urn:isbn:1-931666-22-9"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    Desired output:

        <foo audience="external"
             xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:xlink="http://www.w3.org/1999/xlink"
             xmlns="urn:isbn:1-931666-22-9">

    I was trying to do it as I copy over the whole document, before giving any other transformation instructions, but the following doesn't work:

        <xsl:template match="* | processing-instruction() | comment()">
          <xsl:copy copy-namespaces="no">
            <xsl:for-each select=".">
              <xsl:attribute name="audience" select="'external'"/>
              <xsl:namespace name="xlink" select="'http://www.w3.org/1999/xlink'"/>
            </xsl:for-each>
            <xsl:apply-templates/>
            <xsl:copy-of select="@*"/>
            <xsl:apply-templates/>
          </xsl:copy>
        </xsl:template>

    Thanks for any advice!

  • ASP.NET Login Control rejects users who exist

    - by rsteckly
    Hi, I'm having some trouble with the ASP.NET 2.0 Login control. I've set up a database with the aspnet_regsql tool. I've checked the application name; it is set to "/". The application can access the SQL Server. In fact, when I go to retrieve the password, it will even send me the password. Despite this, the Login control continues to reject logins.

    I added this to the web.config:

        <membership defaultProvider="AspNetSqlProvider">
          <providers>
            <clear/>
            <add name="AspNetSqlProvider"
                 connectionStringName="LocalSqlServer"
                 applicationName="/"
                 type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
          </providers>
        </membership>

    And I added the following to my connection strings:

        <remove name="LocalSqlServer" />
        <add name="LocalSqlServer"
             connectionString="Data Source=IDC-4\EXCALIBUR;Initial Catalog=allied_nr;Integrated Security=True;Asynchronous Processing=True"/>

    (Note: the "remove name" is to get rid of the default connection string in the App_Data directory.) Why won't the Login control authenticate users?

  • Is there a way to parse XML via SAX/DOM with line numbers available per node?

    - by Chris
    I have already written a DOM parser for a large XML document format that contains a number of items that can be used to automatically generate Java code. This is limited to small expressions that are then merged into a dynamically generated Java source file. So far, so good. Everything works.

    BUT - I wish to be able to embed the line number of the XML node where the Java code was included from (so that if the configuration contains uncompilable code, each method will have a pointer to the source XML document and the line number, for ease of debugging). I don't require the line number at parse time, and I don't need to validate the XML source document and throw an error at a particular line number. I need to be able to access the line number for each node and attribute in my DOM, or per SAX event. Any suggestions on how I might be able to achieve this?

    P.S. I read that StAX has a method to obtain the line number while parsing, but ideally I would like to achieve the same result with regular SAX/DOM processing in Java 4/5, rather than become a Java 6+ application or take on extra .jar files.
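    For the SAX half, the standard org.xml.sax.Locator callback provides exactly this: the parser hands the handler a Locator before parsing begins, and querying it inside an event callback reports the position of the event just delivered. A minimal sketch in Java 5 syntax (the printing body is illustrative; a DOM builder would instead stash locator.getLineNumber() alongside each node it creates):

        import java.io.File;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.Locator;
        import org.xml.sax.helpers.DefaultHandler;

        class LineNumberHandler extends DefaultHandler {
            private Locator locator;

            @Override
            public void setDocumentLocator(Locator locator) {
                // Called once before startDocument; the parser keeps this
                // object updated as parsing proceeds.
                this.locator = locator;
            }

            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes atts) {
                System.out.println(qName + " at line " + locator.getLineNumber());
            }

            public static void main(String[] args) throws Exception {
                SAXParserFactory.newInstance().newSAXParser()
                        .parse(new File(args[0]), new LineNumberHandler());
            }
        }

    Attributes are reported as part of their element's startElement event, so an attribute inherits the line number recorded for its element.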

  • Assign sage variable values into R objects via sagetex and Sweave

    - by sheed03
    I am writing a short Sweave document that outputs into a Beamer presentation, in which I am using the sagetex package to solve an equation for two parameters of the beta-binomial distribution, and I need to assign the parameter values into the R session so I can do additional processing on those values. The following code excerpt shows how I am interacting with Sage:

        <<echo=false,results=hide>>=
        mean.raw <- c(5, 3.5, 2)
        theta <- 0.5
        var.raw <- mean.raw + ((mean.raw^2)/theta)
        @

        \begin{frame}[fragile]
        \frametitle{Test of Sage 2}
        \begin{sagesilent}
        var('a1, b1, a2, b2, a3, b3')
        eqn1 = [1000*a1/(a1+b1)==\Sexpr{mean.raw[1]}, ((1000*a1*b1)*(1000+a1+b1))/((a1+b1)^2*(a1+b1+1))==\Sexpr{var.raw[1]}]
        eqn2 = [1000*a2/(a2+b2)==\Sexpr{mean.raw[2]}, ((1000*a2*b2)*(1000+a2+b2))/((a2+b2)^2*(a2+b2+1))==\Sexpr{var.raw[2]}]
        eqn3 = [1000*a3/(a3+b3)==\Sexpr{mean.raw[3]}, ((1000*a3*b3)*(1000+a3+b3))/((a3+b3)^2*(a3+b3+1))==\Sexpr{var.raw[3]}]
        s1 = solve(eqn1, a1,b1)
        s2 = solve(eqn2, a2,b2)
        s3 = solve(eqn3, a3,b3)
        \end{sagesilent}
        Solutions of Beta Binomial Parameters:
        \begin{itemize}
        \item $\sage{s1[0]}$
        \item $\sage{s2[0]}$
        \item $\sage{s3[0]}$
        \end{itemize}
        \end{frame}

    Everything compiles just fine, and on that slide I am able to see the solutions to the three equations' respective parameters in the itemized list. For example, the first item in the itemized list on that Beamer slide is output as

        [a1=(328/667), b1=(65272/667)]

    (I am not able to post an image of the Beamer slide, but I hope you get the idea). I would like to save the parameter values a1, b1, a2, b2, a3, b3 into R objects so that I can use them in simulations. I cannot find any documentation in the sagetex package on how to save output from Sage commands into variables for use with other programs (in this case R). Any suggestions on how to get these values into R?

  • How to properly set relationships in Core Data when using setValue and data already exists

    - by ern
    Let's say I have two objects: Articles and Categories. For the sake of this example, all relevant categories have already been added to the data store.

    When looping through data that holds edits for articles, there is category relationship information that needs to be saved. I was planning on using the -setValue:forUndefinedKey: method in the Article class in order to set the relationships, like so:

        - (void)setValue:(id)value forUndefinedKey:(NSString *)key {
            if ([key isEqualToString:@"categories"]) {
                NSLog(@"trying to set categories...");
            }
        }

    The problem is that value isn't a Category; it is just a string (or array of strings) holding the title of a category. I could certainly do a lookup within this method for each category and assign it, but that seems inefficient when processing a whole bunch of articles at once. Another option is to populate an array of all possible categories and just filter, but my question is where to store that array. Should it be a class method on Article? Is there a way to pass in additional data to the -setValue method? Is there another, better option for setting the relationship I'm not thinking of? Thanks for your help.

  • Multiple websites, Single sign-on design

    - by Yannis
    Hi all, I have a question. A client I have been doing some work for recently has a range of websites with different login mechanisms. He is looking to slowly migrate to a single sign-on mechanism for his websites (all written in ASP.NET MVC). I am looking at my options here, so here is a list of requirements:

    1) It has to be secure (duh)
    2) It needs to support extra user properties over and above the usual name, address stuff (such as money or credits for a user)
    3) It has to provide a centralized user management web console for his convenience (I understand that this will be a small project on top of whatever design solution I choose to go for)
    4) It has to integrate with the existing websites without re-engineering the whole product (I understand that this depends on the current product implementation)
    5) It has to deal with emailing the user when he registers (in order for him to activate his account)
    6) It has to deal with activating the user when he clicks the activation link in the email (I understand that 5 and 6 require some form of email templating system to support different emails per application)

    I was thinking of creating a library working together with forms authentication that exposes whatever methods are required (e.g. login, logout, activate, etc.), plus a small RESTful service to implement activation from email, registration processing, etc. Taking into account that loads of things have been left out to make this question brief and to the point, does this sound like a good design? This looks like a very common problem, though, so aren't there any existing projects that I could use? Thanks for reading.

  • SpeechRecognizer causes ANR... I need help with Android speech API.

    - by Dondo Chaka
    I'm trying to use Android's speech recognition package to record user speech and translate it to text. Unfortunately, when I attempt to initiate listening, I get an ANR error that doesn't point to anything specific. As the SpeechRecognizer API indicates, a RuntimeException is thrown if you attempt to call it from anywhere but the application's main thread. This would make me wonder if the processing was just too demanding... but I know that other applications use the Android API for this purpose and it is typically pretty snappy.

        java.lang.RuntimeException: SpeechRecognizer should be used only from the application's main thread

    Here is a (trimmed) sample of the code I'm trying to call from my service. Is this the proper approach? Thanks for taking the time to help. This has been a hurdle I haven't been able to get over yet.

        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, "com.domain.app");

        SpeechRecognizer recognizer = SpeechRecognizer
                .createSpeechRecognizer(this.getApplicationContext());
        RecognitionListener listener = new RecognitionListener() {

            @Override
            public void onResults(Bundle results) {
                ArrayList<String> voiceResults = results
                        .getStringArrayList(RecognizerIntent.EXTRA_RESULTS);
                if (voiceResults == null) {
                    Log.e(getString(R.string.log_label), "No voice results");
                } else {
                    Log.d(getString(R.string.log_label), "Printing matches: ");
                    for (String match : voiceResults) {
                        Log.d(getString(R.string.log_label), match);
                    }
                }
            }

            @Override
            public void onReadyForSpeech(Bundle params) {
                Log.d(getString(R.string.log_label), "Ready for speech");
            }

            @Override
            public void onError(int error) {
                Log.d(getString(R.string.log_label),
                        "Error listening for speech: " + error);
            }

            @Override
            public void onBeginningOfSpeech() {
                Log.d(getString(R.string.log_label), "Speech starting");
            }
        };
        recognizer.setRecognitionListener(listener);
        recognizer.startListening(intent);
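    For what it's worth, the exception text is the clue: the recognizer objects to being driven from a background thread (for example, a service's own worker thread), not to heavy processing. A minimal sketch of marshalling the same calls onto the main looper, assuming this method lives inside the question's Service and reuses its intent and listener objects:

        import android.content.Intent;
        import android.os.Handler;
        import android.os.Looper;
        import android.speech.RecognitionListener;
        import android.speech.SpeechRecognizer;

        // Sketch only: post the recognizer work to the main thread so the
        // create/setRecognitionListener/startListening calls all run there.
        void startListeningOnMainThread(final Intent intent,
                                        final RecognitionListener listener) {
            new Handler(Looper.getMainLooper()).post(new Runnable() {
                @Override
                public void run() {
                    SpeechRecognizer recognizer = SpeechRecognizer
                            .createSpeechRecognizer(getApplicationContext());
                    recognizer.setRecognitionListener(listener);
                    recognizer.startListening(intent);
                }
            });
        }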

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers, I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A, and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably.

    However, I find this RSS mashup scheme kind of clumsy; it requires quite a bit of time to initially organize all of the feeds. I would also like to be able to do further natural language processing on the data, as well as eventually rank the collected list of URLs into some order of media prominence. I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition tells me that with a bit of 'elbow grease' I can maybe leverage some resources available online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track?

    So, I guess I am imagining an application that allows me to query in a refined manner: a manner that allows me to search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs, or Twitter, or social networking communities - then save the results of my queries into a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple-domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites; some are dynamic, usually with WordPress installed.

    If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        server {
            listen 80 default;
            server_name _;
            ...
            location / {
                root /var/www/$host;
                if (-f $request_filename) {
                    expires max;
                    break;
                }
                # problem, what if index.php does not exist?
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php/$1 last;
                }
            }
            ...
        }

    If an index.php does not exist and the requested file also does not exist, I would like it to return a 404. Currently, nginx does not support multi-condition ifs or nested ifs, so I need a workaround.

  • Question about the evolution of the interaction paradigm between web server programs and content provider programs

    - by smwikipedia
    Hi experts. In my opinion, a web server is responsible for delivering content to the client. If it is static content, like pictures and static HTML documents, the web server just delivers it directly as a bitstream. If it is dynamic content that is generated while processing the client's request, the web server will not generate the content itself but will call some external program to generate it. AFAIK, these dynamic content generation technologies include the following:

    - CGI
    - ISAPI
    - ...

    And from here, I noticed that "...In IIS 7, modules replace ISAPI filters...".

    Are there any others? Could anyone help me complete the above list and elaborate on, or show some links about, their evolution? I think it would be very helpful for understanding applications such as IIS, Tomcat, and Apache.

    I once wrote a small CGI program, and though it serves as a content generator, it is still nothing but a normal standalone program. I call it normal because the CGI program has a main() entry point. But with more recent technology like ASP.NET, I am not writing a complete program, only a class library. Why did such a radical change happen? Many thanks.

  • How do I use a custom cookie session serializer in Rack?

    - by Damien Wilson
    Hello SO. I'm currently integrating Warden into a new Rack application I'm building. I'd like to implement a recent patch to Rack that allows me to specify how sessions are serialized; specifically, I'd like to use Rack::Session::Cookie::Identity as the session processor. Unfortunately, the documentation is a little unclear as to what syntax I should use to configure Rack::Session::Cookie in my rackup file. Can anyone here tell me what I'm doing wrong?

    config.ru:

        require 'my_sinatra_app'

        app = self
        use Rack::Session::Cookie.new(app, Rack::Session::Cookie::Identity.new), {:key => "auth_token"}
        use Warden::Manager do |warden|
          # Must come AFTER Rack::Session
          warden.default_strategies :password
          warden.failure_app Jelli::Auth.run!
        end

        run MySinatraApp

    Error message from thin:

        !! Unexpected error while processing request: undefined method `new' for #<Rack::Session::Cookie:0x00000110124128>

    PS: I'm using Bundler to manage my gem dependencies, and I've likewise included Rack's master branch as the desired version.

    Update: As suggested in the comments below, I have read the documentation; sadly the suggested syntax in the docs is not working.

    Update: Still no luck on my end; offering up a bounty to whoever can help me figure this out.

  • Can JavaScript process binary data?

    - by Johnny
    Let me describe my question in a situation-oriented way. Assume IE is still the dominant web browser (Firefox has documentation for binary processing). XMLHttpRequest.responseText or XMLHttpRequest.responseXML in IE expects text or xml/xhtml/html, but what if the server answers the XMLHttpRequest with MIME type application/octet? Would every character of the response string be less than 256 (every char of that string < 256)? Thanks very much for a straight answer; I have no web server environment, so I don't know how to test it myself.

    The reason I ask: using text or XML raises the issue of character set encoding, and I don't know how to process a CDATA node of an encoded XML document (e.g. utf-8, ascii, gb18030) with JavaScript. When I get the node text, does the document object return me bytes, or characters decoded according to the charset indicated in the HTTP response header? If it is the latter, it could all go wrong. To avoid messing with charsets, I would like the server to respond with octet data and force strings to be encoded as utf-8 (or another charset) in binary format. If the response is octets, I guess the browser would not try to decode the response "text". Does this sound weird, or am I misunderstanding something fundamental?

    EDIT: I believe the question is asking this: Can JavaScript safely process strings that aren't encoded in Unicode? What are the problems with trying to do so?

    EDIT: No, no, no. What I mean is: if the HTTP Content-Type header is "application/octet", will IE try to decode the body as (16-bit Unicode | IE's local charset setting) when I read XMLHttpRequestObj.responseText from JavaScript? Or will IE just wrap every single byte of the response body into a JavaScript string, so that every char in that string is less than or equal to 256 (char <= 256)? Am I talking the Mars language? Sadly, if I were a Martian, I would come as a tourist without fuzzy questions. However, I am in a country which shares at least one property with Mars: RED.
