Search Results

Search found 8367 results on 335 pages for 'temporal difference'.

  • What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?

    - by Michael
    There are two similar namespaces and assemblies for speech recognition in .NET. I'm trying to understand the differences and when it is appropriate to use one or the other.

    There is System.Speech.Recognition from the assembly System.Speech (in System.Speech.dll). System.Speech.dll is a core DLL in the .NET Framework class library, version 3.0 and later. There is also Microsoft.Speech.Recognition from the assembly Microsoft.Speech (in microsoft.speech.dll). Microsoft.Speech.dll is part of the UCMA 2.0 SDK.

    I find the docs confusing and I have the following questions:

    System.Speech.Recognition says it is for "The Windows Desktop Speech Technology"; does this mean it cannot be used on a server OS, or cannot be used for high-scale applications?

    The UCMA 2.0 Speech SDK (http://msdn.microsoft.com/en-us/library/dd266409%28v=office.13%29.aspx) says that it requires Microsoft Office Communications Server 2007 R2 as a prerequisite. However, I've been told at conferences and meetings that if I do not require OCS features like presence and workflow, I can use the UCMA 2.0 Speech API without OCS. Is this true?

    If I'm building a simple recognition app for a server application (say I wanted to automatically transcribe voice mails) and I don't need features of OCS, what are the differences between the two APIs?

  • Rails - difference between config.cache_store and config.action_controller.cache_store?

    - by gsmendoza
    If I set this in my environment:

        config.action_controller.cache_store = :mem_cache_store

    ActionController::Base.cache_store will use a memcached store, but Rails.cache will use a memory store instead:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}>

    In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache used the memcached store, so I'm surprised that it uses a memory store. If I change the cache_store setting in my environment to:

        config.cache_store = :mem_cache_store

    both ActionController::Base.cache_store and Rails.cache will now use the same memcached store, which is what I expect:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>

    However, when I run the app, I get a "marshal dump" error on the line where I call Rails.cache.fetch(key){ object }:

        no marshal_dump is defined for class Proc
        Extracted source (around line #1):
        1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... }
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump'
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace'

    What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?
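
    A minimal sketch of the setup being described, assuming a Rails 2.x-style environment file (the file name, cache key, and helper below are illustrative, not from the question):

        # config/environments/production.rb (illustrative)
        # Setting the global store keeps Rails.cache and
        # ActionController::Base.cache_store pointing at the same backend.
        config.cache_store = :mem_cache_store

        # app/helpers/some_helper.rb (hypothetical helper)
        def cached_expensive_value(key)
          # fetch returns the cached value if present; otherwise it runs the
          # block and stores the result. The cached *result* must be
          # Marshal-able; a Proc itself cannot be marshalled, which is what
          # produces the "no marshal_dump is defined for class Proc" error.
          Rails.cache.fetch(key, :expires_in => 15.minutes) do
            compute_expensive_value  # assumed to return a plain object (String, Hash, ...)
          end
        end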

  • Is there a difference between plain text emails, and multipart emails with only plain text?

    - by Brian Armstrong
    I'm using Rails to send emails and I just want to send a plain text email (there is no corresponding HTML part). I've noticed that if I just have one file named email.text.plain.erb, it actually generates a multipart email with one part (the plain text part), like this:

        Content-Type: multipart/alternative; boundary=mimepart_4c04a2d34c4bb_690a4e56b0362

        --mimepart_4c04a2d34c4bb_690a4e56b0362
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: Quoted-printable
        Content-Disposition: inline

        text of the email here...

        --mimepart_4c04a2d34c4bb_690a4e56b0362--

    But if I take out the text.plain part and name it email.erb, ActionMailer generates a regular plain text email without multipart, like this:

        Content-Type: text/plain; charset=utf-8

        text of the email here...

    Both work fine most of the time (so this is kind of nitpicky), but I guess my question is whether the second one is more correct. My goal here is just to make sure deliverability is as high as possible across a wide variety of devices and email clients. I've read that plain text emails can have slightly better deliverability rates than HTML, and I was just curious whether throwing in this multipart (even if it only contains a plain text part) might throw off some of the dumber email clients. Thanks for your help!
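
    A minimal sketch of the single-template approach, assuming ActionMailer 2.x conventions (the mailer class, addresses, and template names are illustrative):

        # app/models/user_mailer.rb (hypothetical mailer, ActionMailer 2.x style)
        class UserMailer < ActionMailer::Base
          def welcome(user)
            recipients user.email
            from       "noreply@example.com"
            subject    "Welcome"
            body       :user => user
            # With a single template named welcome.erb, ActionMailer renders a
            # bare text/plain message; naming it welcome.text.plain.erb instead
            # yields the one-part multipart/alternative structure shown above.
          end
        end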

  • What is the difference between NSString alloc/initWithCString: and stringWithUTF8String:?

    - by mobibob
    I thought these two methods were (memory-allocation-wise) equivalent; however, I was seeing "out of scope" and "NSCFString" in the debugger when I used what I thought was the convenience method (commented out below), and when I switched to the more explicit method my code stopped crashing! Notice that I am getting the string that is being stored in my container from a sqlite3 query.

        p = (char*) sqlite3_column_text (queryStmt, 1);
        // GUID = (NSString*) [NSString stringWithUTF8String: (p!=NULL) ? p : ""];
        GUID = [[NSString alloc] initWithCString:(p!=NULL) ? p : "" encoding:NSUTF8StringEncoding];

    Also note that if I looked at the values in the debugger and printed them with NSLog, they looked correct. However, I don't think new memory was allocated and the value copied; instead, the memory pointer was stored, went out of scope, and was referenced later: crash!

  • Google Federated Login vs Hybrid Protocol vs Google Data Authentication: What's the Difference?

    - by johnfelix
    Hi, I am trying to implement Google authentication on my website, from which I would also be pulling some Google data using the Google Data API; I am using Google App Engine with Jinja2. My question is: so many ways to do this are mentioned that I am confused among Google Federated Login, the Google Data Protocol, and the Hybrid Protocol. Are these things the same, or different ways to do the same thing?

    From what I read and understood (which might be incorrect), Google Federated Login uses the Hybrid Protocol to authenticate and fetch the Google data. Is there a proper guide to implementing any one of these in Python? The examples I found at the Google link are kind of different. From what I understood (correct me if I am wrong), I have to implement only the OpenID consumer part.

    In order to implement Google Federated Login in Python, I saw that we need to download a separate library from openid-enabled.com, but I found a different library for the Google Data implementation at http://code.google.com/p/gdata-python-client/

    As you can see, I am confused a lot :D. Please help me :) Thanks

  • What's the difference between !col and col=false in MySQL?

    - by Mask
    The two statements have totally different performance:

        mysql> explain select * from jobs where createIndexed=false;
        +----+-------------+-------+------+----------------------+----------------------+---------+-------+------+-------+
        | id | select_type | table | type | possible_keys        | key                  | key_len | ref   | rows | Extra |
        +----+-------------+-------+------+----------------------+----------------------+---------+-------+------+-------+
        |  1 | SIMPLE      | jobs  | ref  | i_jobs_createIndexed | i_jobs_createIndexed | 1       | const |    1 |       |
        +----+-------------+-------+------+----------------------+----------------------+---------+-------+------+-------+
        1 row in set (0.01 sec)

        mysql> explain select * from jobs where !createIndexed;
        +----+-------------+-------+------+---------------+------+---------+------+-------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+-------+-------------+
        |  1 | SIMPLE      | jobs  | ALL  | NULL          | NULL | NULL    | NULL | 17996 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+-------+-------------+

    Column definition and related index, for aiding analysis:

        createIndexed tinyint(1) NOT NULL DEFAULT 0,
        create index i_jobs_createIndexed on jobs(createIndexed);

  • What could cause this difference in behaviour from iPhone OS 3.0 to iOS 4.0?

    - by frankodwyer
    I am getting a strange EXC_BAD_ACCESS error when running my app on iOS 4. The app has been pretty solid on OS 3.x for some time; I am not even seeing crash logs in this area of the code (or many at all) in the wild. I've tracked the error down to this code in the main class:

        - (void) sendPost:(PostRequest*)request {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSURLResponse* response;
            NSError* error;
            NSData *serverReply = [NSURLConnection sendSynchronousRequest:request.request returningResponse:&response error:&error];
            ServerResponse* serverResponse=[[ServerResponse alloc] initWithResponse:response error:error data:serverReply];
            [request.objectToNotifyWhenDone performSelectorOnMainThread:request.targetToNotifyWhenDone withObject:serverResponse waitUntilDone:YES];
            [pool drain];
        }

    (Note: sendPost is run on a separate thread for each invocation. PostRequest is just a class to encapsulate a request and a selector to notify when complete.)

    ServerResponse.m:

        @synthesize response;
        @synthesize replyString;
        @synthesize error;
        @synthesize plist;

        - (ServerResponse*) initWithResponse:(NSURLResponse*)resp error:(NSError*)err data:(NSData*)serverReply {
            self.response=resp;
            self.error=err;
            self.plist=nil;
            self.replyString=nil;
            if (serverReply) {
                self.replyString = [[[NSString alloc] initWithBytes:[serverReply bytes] length:[serverReply length] encoding: NSASCIIStringEncoding] autorelease];
                NSPropertyListFormat format;
                NSString *errorStr;
                plist = [NSPropertyListSerialization propertyListFromData:serverReply mutabilityOption:NSPropertyListImmutable format:&format errorDescription:&errorStr];
            }
            return self;
        }

    ServerResponse.h:

        @property (nonatomic, retain) NSURLResponse* response;
        @property (nonatomic, retain) NSString* replyString;
        @property (nonatomic, retain) NSError* error;
        @property (nonatomic, retain) NSDictionary* plist;

        - (ServerResponse*) initWithResponse:(NSURLResponse*)response error:(NSError*)error data:(NSData*)serverReply;

    This reliably crashes with a bad access in the line:

        self.error=err;

    ...i.e. in the synthesized property setter! I'm stumped as to why this should be, given that the code worked on the previous OS and hasn't changed since (even the binary compiled with the previous SDK crashes the same way, but not on OS 3.0), and given that it is a simple property method. Any ideas? Could the NSError implementation have changed between releases, or am I missing something obvious?

  • What is the exact difference between MEM_RESERVE and MEM_COMMIT states?

    - by pj4533
    As I understand it, MEM_RESERVE is actually 'free' memory, i.e. available to be used by my process, but just not allocated yet? Or it was previously allocated, but has since been freed?

    Specifically, see in my !address output below how I am nearly out of virtual address space (99900 KB free, 2307872 KB as MEM_PRIVATE), but the state summary shows that 44.75% of that is actually MEM_RESERVE. Does that mean it is actually free in my process, but maybe fragmented?

        0:000> !address -summary
        --------- PEB a8bd8000 not found ----
        -------------------- Usage SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots) Pct(Busy)   Usage
           259af000 (  616124) : 22.29%    23.12%    : RegionUsageIsVAD
            618f000 (   99900) : 03.61%    00.00%    : RegionUsageFree
           13e22000 (  325768) : 11.78%    12.22%    : RegionUsageImage
           42c04000 ( 1093648) : 39.56%    41.04%    : RegionUsageStack
             42d000 (    4276) : 00.15%    00.16%    : RegionUsageTeb
           2625d000 (  625012) : 22.61%    23.45%    : RegionUsageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePeb
               1000 (       4) : 00.00%    00.00%    : RegionUsageProcessParametrs
               1000 (       4) : 00.00%    00.00%    : RegionUsageEnvironmentBlock
            Tot: a8bf0000 (2764736 KB) Busy: a2a61000 (2664836 KB)

        -------------------- Type SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
            618f000 (   99900) : 03.61%   : <free>
           13e22000 (  325768) : 11.78%   : MEM_IMAGE
            1e77000 (   31196) : 01.13%   : MEM_MAPPED
           8cdc8000 ( 2307872) : 83.48%   : MEM_PRIVATE

        -------------------- State SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
           57235000 ( 1427668) : 51.64%   : MEM_COMMIT
            618f000 (   99900) : 03.61%   : MEM_FREE
           4b82c000 ( 1237168) : 44.75%   : MEM_RESERVE

        Largest free region: Base 7e4a1000 - Size 000ff000 (1020 KB)

  • What is the difference between Response.Redirect and a Response.Status 301 redirect in ASP?

    - by Nikhil Vaghela
    Our ASP application is moving to a new server and I want to implement a permanent URL redirection. I am aware of the following two approaches; I need to understand which one to use over the other, and when.

    Option 1:

        <%@ Language=VBScript %>
        <% Response.Redirect "http://www.new-url.com" %>

    Option 2:

        <%@ Language=VBScript %>
        <%
        Response.Status="301 Moved Permanently"
        Response.AddHeader "Location","http://www.new-url.com/"
        %>

    Thanks, Nikhil.

  • What is the difference between Cloud Computing and Grid Computing?

    - by this. __curious_geek
    Hi, can you please help me understand the significant differences between cloud computing and grid computing? What are the precise definitions and target application domains of each? I'm looking for conceptual insights along with technicalities. For example, Windows Azure is a cloud OS; do we have anything like that for grid computing?

    In the past I worked on distributed and parallel computing, and I used libraries like PVM and MPI for distributing processing. Out of curiosity I wanted to know: is grid computing just distributed computing extended over the internet?

  • Android: What is the difference between 'orientation' and 'screenLayout'?

    - by alex2k8
    There are two different constants that have the same description (http://developer.android.com/intl/de/reference/android/R.attr.html#configChanges):

        orientation   0x0080  The screen orientation has changed, that is the user has rotated the device.
        screenLayout  0x0100  The screen orientation has changed, that is the user has rotated the device.

    Many sources suggest specifying:

        android:configChanges="keyboardHidden|orientation"

    But should it not be:

        android:configChanges="keyboardHidden|orientation|screenLayout"

  • OAuth process for Twitter: the difference between client and web application

    - by Radek
    I managed to make the OAuth process work with PIN-based verification. My Twitter application is of the client type. When I enter the authorize URL into a web browser and grant the application access, I then have to enter the PIN in my Ruby application. Can I finish the process of getting the access token without the PIN step? My current code is below. What changes do I need to make so that it works without the PIN?

        gem 'oauth'
        require 'oauth/consumer'

        consumer_key = 'w855B2MEJWQr0SoNDrnBKA'
        consumer_secret = 'yLK3Nk1xCWX30p07Id1ahxlXULOkucq5Rve28pNVwE'

        consumer = OAuth::Consumer.new consumer_key, consumer_secret, {:site => "http://twitter.com"}
        request_token = consumer.get_request_token
        puts request_token.authorize_url
        puts "Hit enter when you have completed authorization."
        pin = STDIN.readline.chomp
        access_token = request_token.get_access_token(:oauth_verifier => pin)
        puts
        puts access_token.token
        puts access_token.secret
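
    A minimal sketch of the callback-style flow that web applications use instead of the PIN, assuming the Twitter app is registered as a web application (the callback URL and the params hash are illustrative, supplied by whatever web framework handles the redirect):

        require 'oauth'

        # consumer_key / consumer_secret as defined above.
        consumer = OAuth::Consumer.new(consumer_key, consumer_secret,
                                       {:site => "http://twitter.com"})

        # Step 1: request a token, passing a callback URL instead of using
        # the out-of-band (PIN) flow.
        request_token = consumer.get_request_token(
          :oauth_callback => "http://example.com/oauth/callback")  # illustrative URL

        # Persist request_token.token and request_token.secret (e.g. in the
        # session), then redirect the user to request_token.authorize_url.

        # Step 2: after the user grants access, Twitter redirects back to the
        # callback with an oauth_verifier query parameter; exchange it just
        # like the PIN.
        access_token = request_token.get_access_token(
          :oauth_verifier => params['oauth_verifier'])

        puts access_token.token
        puts access_token.secret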
