Search Results

Search found 9983 results on 400 pages for 'fuzzy c means'.


  • How to efficiently serve massive sitemaps in django

    - by mlissner
    I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really, I need a way of caching it, because building the 150 sitemaps of 1,000 links each is brutal on my server.[1] I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site...however, this is so many sitemaps that it would completely fill memcached...so that doesn't work. What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes to them (which, as a result of the sitemap index, means only changing the latest couple of sitemap pages, since the rest are always the same).[2] But, as near as I can tell, I can only use one cache backend with django. How can I have these sitemaps ready for when Google comes-a-crawlin' without killing my database or memcached? Any thoughts? [1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening. [2] For example, if I have sitemap.xml?page=1, page=2...sitemap.xml?page=50, I only really need to regenerate sitemap.xml?page=50 until it is full with 1,000 links; then I can cache it pretty much forever, and focus on page 51 until it's full, cache it forever, etc.
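
    A minimal sketch of one approach, assuming Django 1.3 or later (where the CACHES setting allows several backends at once): keep memcached as the default cache and route only rendered sitemap pages to a database cache. The wrapper view and key scheme below are illustrative, not part of the stock sitemap framework:

        # settings.py -- a second, database-backed cache just for sitemaps
        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': '127.0.0.1:11211',
            },
            'sitemaps': {
                'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
                'LOCATION': 'sitemap_cache',  # manage.py createcachetable sitemap_cache
            },
        }

        # views.py -- hypothetical wrapper around the stock sitemap view
        from django.core.cache import get_cache
        from django.http import HttpResponse
        from django.contrib.sitemaps import views as sitemap_views

        def cached_sitemap(request, sitemaps, section=None):
            cache = get_cache('sitemaps')
            key = 'sitemap:%s:%s' % (section, request.GET.get('p', '1'))
            xml = cache.get(key)
            if xml is None:
                response = sitemap_views.sitemap(request, sitemaps, section=section)
                xml = response.content  # cache the rendered bytes, not the response object
                cache.set(key, xml, 60 * 60 * 24 * 365)  # effectively forever
            return HttpResponse(xml, content_type='application/xml')

    Invalidation then reduces to deleting one or two keys (the newest pages) whenever content is added, which matches the observation in [2] that only the last sitemap page ever changes.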

    Read the article

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
    class UserDatastore : IUserDatastore
    {
        ...
        public IUser this[Guid userId]
        {
            get
            {
                User user = (from u in _dataContext.Users
                             where u.Id == userId
                             select u).FirstOrDefault();
                return user;
            }
        }
        ...
    }
    One of the developers in our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments being that: 1) We aren't indexing into an in-memory collection; the indexer is basically performing a hidden SQL query. 2) Using a Guid in an indexer is bad (FxCop flagged this also). 3) Returning null from an indexer isn't normal behaviour. 4) An API user generally wouldn't expect any of this behaviour. I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it were a concrete in-memory collection here. Also, bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.
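
    For comparison, a minimal sketch of the method-based API the colleague is arguing for (hypothetical names, assuming the same LINQ to SQL context). The query is unchanged; only what the call site signals differs:

        public interface IUserDatastore
        {
            // A method advertises a possible database round-trip and a possible null result
            IUser GetUser(Guid userId);
        }

        class UserDatastore : IUserDatastore
        {
            private readonly DataContext _dataContext;  // assumed LINQ to SQL context

            public IUser GetUser(Guid userId)
            {
                return (from u in _dataContext.Users
                        where u.Id == userId
                        select u).FirstOrDefault();
            }
        }

    Callers then read store.GetUser(id) rather than store[id], which is the substance of argument 4: the semantics are identical, but the syntax no longer implies cheap in-memory lookup.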

    Read the article

  • Parsing multibyte string in PHP

    - by Petr Peller
    I would like to write an (HTML) parser based on a state machine, but I have doubts about how to actually read/use the input. I decided to load the whole input into one string and then work with it as with an array, holding its index as the current parsing position. There would be no problems with a single-byte encoding, but in a multi-byte encoding each value does not represent a character, but one byte of a character. Example:
    $mb_string = 'žščř'; //4 multi-byte characters in UTF-8
    for($i=0; $i < 4; $i++) {
        echo $mb_string[$i], PHP_EOL;
    }
    This outputs four bytes of garbage (the individual bytes of the first two characters, printed separately) rather than the first four characters. This means I cannot iterate through the string in a loop to check single characters, because I never know if I am in the middle of a character or not. So the questions are: How do I read a single character from a string in a multi-byte-safe, performance-friendly way? Is it a good idea to work with the string as if it were an array in this case? How would you read the input?
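
    A minimal sketch of the two usual multi-byte-safe approaches, both from the standard PCRE and mbstring extensions:

        $mb_string = 'žščř';

        // 1) preg_split with the /u modifier splits on character boundaries,
        //    giving an array you can index in O(1), which suits a parser that
        //    keeps "current position" as an index.
        $chars = preg_split('//u', $mb_string, -1, PREG_SPLIT_NO_EMPTY);
        foreach ($chars as $ch) {
            echo $ch, PHP_EOL;
        }

        // 2) mb_substr indexes by character, but each call is O(n), so using
        //    it inside a loop over the whole string is O(n^2).
        $len = mb_strlen($mb_string, 'UTF-8');
        for ($i = 0; $i < $len; $i++) {
            echo mb_substr($mb_string, $i, 1, 'UTF-8'), PHP_EOL;
        }

    The preg_split variant pays the scanning cost once up front, which fits the stated design (index as parsing position) far better than repeated mb_substr calls.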

    Read the article

  • HTML Agility Pack

    - by Harikrishna
    I have html tables in one webpage like
    <table border=1>
      <tr><td>sno</td><td>sname</td></tr>
      <tr><td>111</td><td>abcde</td></tr>
      <tr><td>213</td><td>ejkll</td></tr>
    </table>
    <table border=1>
      <tr><td>adress</td><td>phoneno</td><td>note</td></tr>
      <tr><td>asdlkj</td><td>121510</td><td>none</td></tr>
      <tr><td>asdlkj</td><td>214545</td><td>none</td></tr>
    </table>
    Now from this webpage, using the HTML Agility Pack, I want to extract the data of the columns address and phoneno only. That means I first have to find which table contains the columns address and phoneno; after finding that table, I want to extract the data of those columns. What should I do? I can get the table, but I don't understand what to do after that.
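
    A minimal sketch with Html Agility Pack (SelectNodes, SelectSingleNode and InnerText are the library's real XPath API; the file name and the header strings are taken from the sample above):

        using System;
        using HtmlAgilityPack;

        class Program
        {
            static void Main()
            {
                var doc = new HtmlDocument();
                doc.Load("page.html");  // or doc.LoadHtml(htmlString)

                foreach (var table in doc.DocumentNode.SelectNodes("//table"))
                {
                    // Use the first row as the header row to locate the columns
                    var header = table.SelectSingleNode(".//tr").SelectNodes(".//td");
                    int addrCol = -1, phoneCol = -1;
                    for (int i = 0; i < header.Count; i++)
                    {
                        string h = header[i].InnerText.Trim();
                        if (h == "adress") addrCol = i;
                        if (h == "phoneno") phoneCol = i;
                    }
                    if (addrCol < 0 || phoneCol < 0) continue;  // not the table we want

                    var rows = table.SelectNodes(".//tr");
                    for (int r = 1; r < rows.Count; r++)  // skip the header row
                    {
                        var cells = rows[r].SelectNodes(".//td");
                        Console.WriteLine("{0} {1}",
                            cells[addrCol].InnerText, cells[phoneCol].InnerText);
                    }
                }
            }
        }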

    Read the article

  • GLSL shader render to texture not saving alpha value

    - by quadelirus
    I am rendering to a texture using a GLSL shader and then sending that texture as input to a second shader. For the first texture I am using RGB channels to send color data to the second GLSL shader, but I want to use the alpha channel to send a floating point number that the second shader will use as part of its program. The problem is that when I read the texture in the second shader the alpha value is always 1.0. I tested this in the following way: at the end of the first shader I did this: gl_FragColor = vec4(r, g, b, 0.1); and then in the second shader I read the value of the first texture using something along the lines of vec4 f = texture2D(previous_tex, pos); if (f.a != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } No pixels in my output are black, whereas if I change the above code to read gl_FragColor = vec4(r, g, 0.1, 1.0); //Notice I'm now sending 0.1 for blue and in the second shader vec4 f = texture2D(previous_tex, pos); if (f.b != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } All the appropriate pixels are black. This means that for some reason when I set the alpha value to something other than 1.0 in the first shader and render to a texture, it is still seen as 1.0 by the second shader. Before I render to texture I glDisable(GL_BLEND); It seems pretty clear to me that the problem has to do with OpenGL handling alpha values in some way that isn't obvious to me, since I can use the blue channel in the way I want, and I figured someone out there will instantly see the problem.
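
    One hedged guess worth checking (the question doesn't show how the texture was allocated): if the render target's internal format has no alpha channel (GL_RGB/GL_RGB8), sampling .a always returns 1.0 regardless of what the shader writes, while .b keeps working, which matches the symptoms. A sketch of allocating the FBO color texture with an alpha-capable format:

        glBindTexture(GL_TEXTURE_2D, color_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,          /* not GL_RGB8 */
                     width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color_tex, 0);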

    Read the article

  • .net framework execution aborted while executing CLR sproc?

    - by Sean Ochoa
    I constructed a sproc that does the equivalent of FOR XML AUTO in SQL 2008. Now that I'm testing it, it gives me a really unhelpful error msg. Any idea what this error means?
    Msg 10329, Level 16, State 49, Procedure ForXML, Line 0
    .Net Framework execution was aborted. System.Threading.ThreadAbortException: Thread was being aborted.
    System.Threading.ThreadAbortException:
       at System.Runtime.InteropServices.Marshal.PtrToStringUni(IntPtr ptr, Int32 len)
       at System.Data.SqlServer.Internal.CXVariantBase.WSTRToString()
       at System.Data.SqlServer.Internal.SqlWSTRLimitedBuffer.GetString(SmiEventSink sink)
       at System.Data.SqlServer.Internal.RowData.GetString(SmiEventSink sink, Int32 i)
       at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue(SmiEventSink_Default sink, ITypedGettersV3 getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
       at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue200(SmiEventSink_Default sink, SmiTypedGetterSetter getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
       at System.Data.SqlClient.SqlDataReaderSmi.GetValue(Int32 ordinal)
       at System.Data.SqlClient.SqlDataReaderSmi.GetValues(Object[] values)
       at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values)
       at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
       at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
       at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
       at System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
       at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
       at System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior)
       at System.Data.Common.DbDataAdapter.Fill(DataTable dataTable)
       at ForXML.GetXML...

    Read the article

  • HttpServletResponse encoding problem @ WebSphere 6.1

    - by user295509
    My application works fine on the JBoss 4.2.2 application server. However, when I deploy the same application to WebSphere 6.1, I get an HttpServletResponse encoding problem. I am getting the response in the web browser as shown below:
    ??][s?8?~N??0?uRY?d;?H?e??e??d6?%??A"yH????????M??x?? ??&A??h??ntCT???????UM??BW???H?T?4???????t??G?f =l?&5[?j?B{???6???V???6???7???????(???5?4????.?!????j??i?V????? X?Q??^<??????????sK????h?{y1?] [??T??- ?Dm?_?7????P??<*??VvQ?:6?KCc? 6?]????V_?zPC?c???Ÿ???zsW????_y?*???2? ??)?r?~?L%^?M???kzduY??BW4? ?.?????V????{??O????/?l?ii8?S?Q?cJ?56GAogp?w???7'??9vf???E?,??? 9?q?x???z?H????????;????4?? ?5?????iWF??l????o^??Fy?|?d???????zMa,????y??e \<?J???M?:miz????z?Z5???????^/???e?:?j7??'??~?@?V?V???nN?&??Q%}(??????*u???#???S?BO??Lð????+??x?8?/?E??????6_k?1)?@q. ?S%??5?=?$?CSBt?c ????+hX??2?>t?s?+?M????????nv$??13m???
    I would like to mention that this encoding problem does not arise when the page has less data (fewer HTML elements). Even on WebSphere it works fine when up to approximately 300 HTML elements are rendered; once the number of HTML elements exceeds a certain threshold, the page is shown in this garbled form. On JBoss 4.2.2 the application works fine no matter how long the page gets. I set the content type as: <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> This issue is reproducible in FF 3.6 and IE 7 and 8. Can anyone help me out? Am I missing some setting?
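
    A hedged suggestion (an assumption, not a confirmed fix): the meta tag only documents the encoding; the container decides what it actually writes, and WebSphere's default response encoding differs from JBoss's. A sketch of forcing UTF-8 on every response with a plain servlet filter:

        import java.io.IOException;
        import javax.servlet.*;

        public class Utf8Filter implements Filter {
            public void init(FilterConfig cfg) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                req.setCharacterEncoding("UTF-8");
                res.setCharacterEncoding("UTF-8"); // must run before the response is committed
                chain.doFilter(req, res);
            }
        }

    The filter would be mapped to /* in web.xml. If that changes nothing, the size-dependent behaviour is also consistent with a compressed (gzipped) response being sent without the matching Content-Encoding header, which is worth ruling out in the WebSphere HTTP transport settings.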

    Read the article

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the subversion repo quite glaring, and that creates problems for me. There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts at some point to the subversion repository). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in, I simply don't stage or commit them, but having unstaged local changes means I can't rebase without first cleaning them up. What I would like to know is if there is any way to ignore future changes to a set of tracked files? Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?
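
    A minimal sketch of the usual answer (real git flags; the path is an example):

        # Tell the index to pretend the tracked file is unchanged:
        git update-index --skip-worktree path/to/generated-file

        # Older alternative; a performance hint git may ignore, not a guarantee:
        git update-index --assume-unchanged path/to/generated-file

        # Undo, when you genuinely need to commit a change to it:
        git update-index --no-skip-worktree path/to/generated-file

    Note that some operations (checkout, merge) can still complain or overwrite such files, so the cleaner long-term fix is the one in the last sentence: get the build artifacts deleted from subversion.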

    Read the article

  • Applescript from Mac App says "Expected end of line but found \U201c\"\U201d."

    - by Rasmus Styrk
    I am trying to perform a copy/paste from my app into the last active app. Here's my code:
    NSString *appleScriptSource = [NSString stringWithFormat:@"\ntell application \"%@\" to activate\ntell application \"System Events\" to tell process \"%@\"\nkeystroke \"v\" using command down\nend tell", [lastApp localizedName], [lastApp localizedName]];
    NSDictionary *error;
    NSAppleScript *aScript = [[NSAppleScript alloc] initWithSource:appleScriptSource];
    NSAppleEventDescriptor *aDescriptor = [aScript executeAndReturnError:&error];
    The problem is that on some computers it works just fine, but on others it fails. The error output from the dictionary returned by executeAndReturnError: is:
    2012-06-13 17:43:19.875 Mini Translator[1206:303] (null) (error: { NSAppleScriptErrorBriefMessage = "Expected end of line but found \U201c\"\U201d."; NSAppleScriptErrorMessage = "Expected end of line but found \U201c\"\U201d."; NSAppleScriptErrorNumber = "-2741"; NSAppleScriptErrorRange = "NSRange: {95, 1}"; })
    I can't seem to figure out what it means or why it happens. We tried copying the generated apple-script code into the Apple Script editor, and there it works just fine. My app is sandboxed - I have added the bundle identifiers for the key "com.apple.security.temporary-exception.apple-events" for the apps I want to support. Any suggestions?
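
    A hedged line of investigation (assuming lastApp is an NSRunningApplication): the parse error points at an unexpected double quote in the generated source, which can happen if an app's localizedName contains one, or varies on localized systems. Addressing the app by bundle identifier avoids interpolating a display name into quoted script text:

        // Sketch only: target by bundle id instead of display name
        NSString *src = [NSString stringWithFormat:
            @"tell application id \"%@\" to activate\n"
            @"tell application \"System Events\" to keystroke \"v\" using command down",
            [lastApp bundleIdentifier]];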

    Read the article

  • Are there any ASP.NET MVC subscription-based starter kits or examples?

    - by Wayne M
    Basically something that handles the low-level "plumbing" code for a subscription-based service. I see a lot of things dealing with basic membership, but nothing that handles the subscription aspect (recurring billing, automated jobs for setting up billing, notification for billing, etc). This might be the one thing that keeps me from using ASP.NET MVC for my SaaS idea, since it would take a fair amount of development time to write my own; if I go with my other option, Ruby on Rails, I can buy a kit that does all of this for $250. I haven't found anything even remotely close to this for .NET - all of the SaaS sample apps I've seen are more like StackOverflow et al., where you have one site that multiple people log on to, not the web application model where you have subscribers who are billed monthly, each of whom has users and other entities (e.g. Customers, Tasks, etc) for their own site. Is there anything similar for ASP.NET, or some kind of guidelines for writing my own if I have to, so I don't waste too much time? As a startup, that means I'm doing all the coding myself. I've found this, but it seems to only cover billing and didn't seem to have much (any?) documentation on exactly how to set it up.

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index to best suit retrieving data from the table on the following columns that serve as our primary key: [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC ...because when we query the table, we always have the entire primary key. So these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.
    SELECT [Region], MAX([FileDate]) AS [FileDate]
    FROM HugeTable
    GROUP BY [Region]
    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results to our query come back quickly enough (200ms or so), and that result set is cached until the next load. However I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
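
    One common alternative, sketched below with illustrative names: since only the load job can change the answer, maintain a tiny Region-to-latest-FileDate summary table at the end of each of the four daily loads, and point the reports at that instead of the 400M-row table. The MERGE syntax assumes SQL Server 2008:

        CREATE TABLE dbo.RegionLatestFile (
            Region   varchar(10) NOT NULL PRIMARY KEY,
            FileDate datetime    NOT NULL
        );

        -- Run at the end of each load; @LoadedFileDate is the batch just loaded
        MERGE dbo.RegionLatestFile AS t
        USING (SELECT [Region], MAX([FileDate]) AS FileDate
               FROM   HugeTable
               WHERE  [FileDate] = @LoadedFileDate
               GROUP BY [Region]) AS s
        ON t.Region = s.Region
        WHEN MATCHED AND s.FileDate > t.FileDate THEN
            UPDATE SET FileDate = s.FileDate
        WHEN NOT MATCHED THEN
            INSERT (Region, FileDate) VALUES (s.Region, s.FileDate);

    The report query then reads a table with one row per region, and the non-clustered index on Region, along with its per-load maintenance cost, is no longer needed.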

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!
    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Read the article

  • OO Design / Patterns - Fat Model Vs Transaction Script?

    - by ben
    Ok, 'Fat' Model and Transaction Script both solve design problems associated with where to keep business logic. I've done some research, and popular thought says having all business logic encapsulated within the model is the way to go (mainly since Transaction Script can become really complex and often results in code duplication). However, how does this work if I want to use the TDG of a second Model in my business logic? Surely Transaction Script presents a neater, less coupled solution than using one Model inside the business logic of another? A practical example... I have two classes: User & Alert. When pushing User instances to the database (eg, creating new user accounts), there is a business rule that requires inserting some default Alert records too (eg, a default 'welcome to the system' message etc). I see two options here: 1) Add this rule as a User method, and in the process create a dependency between User and Alert (or, at least, Alert's Table Data Gateway). 2) Use a Transaction Script, which avoids the dependency between models. (Also, it means the business logic is kept in a 'neutral' class & easily accessible by Alert. That probably isn't too important here, though.) User takes responsibility for its own validation etc, but because we're talking about a business rule involving two Models, Transaction Script seems like a better choice to me. Anyone spot flaws with this approach?
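
    A minimal sketch of option 2 (assuming PHP, with all names illustrative): a neutral script/service class owns the cross-model rule, so User never learns about Alert:

        class RegisterUserScript
        {
            private $userGateway;   // User's Table Data Gateway
            private $alertGateway;  // Alert's Table Data Gateway

            public function __construct($userGateway, $alertGateway)
            {
                $this->userGateway  = $userGateway;
                $this->alertGateway = $alertGateway;
            }

            public function register(User $user)
            {
                $user->validate();  // User still owns its own validation
                $userId = $this->userGateway->insert($user);
                // The cross-model rule lives here, not inside User:
                $this->alertGateway->insert($userId, 'Welcome to the system');
                return $userId;
            }
        }

    The coupling doesn't disappear; it moves into one named place, which is exactly the trade the question describes.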

    Read the article

  • 4.0/WCF: Best approach for bi-idirectional message bus?

    - by TomTom
    Just a technology update, now that .NET 4.0 is out. I am writing an application that communicates with the server through what is basically a message bus (instead of method calls). This is based on the internal architecture of the application (which is multi-threaded, passing the messages around). There are a limited number of messages going from the client to the server, and quite a lot more from the server to the client. Most of those can be handled via a separate specialized mechanism, but at the end we are talking of possibly 10-100 small messages per second going from the server to the client. The client is supposed to operate under "internet conditions". This means possibly home end users behind standard NAT devices (i.e. typical DSL routers) - a firewalled, secure and thus "open" network can not be assumed. I want to have as little latency and as little overhead for the communication as possible. What is the technologically best way to handle the message bus callback? I have no problem regularly calling the server for message delivery if something needs to be sent... ...but what are my options for handling the messages from the server to the client? How does WsDualHttp work, especially under a NAT scenario? Just as a note: polling is most likely out - the main problem here is that I would have either a significant overhead OR a significant delay, and both are not really wanted. Technically I would love some sort of streaming approach, where the server can write messages to a stream as it generates them and they get sent to the client as they come. Not sure this is doable with WCF, though (if not, I may actually decide to handle the whole message part outside of WCF and just do control / login / setup / destruction via WCF).
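
    For the callback direction, a hedged sketch of the duplex-contract shape over netTcpBinding (all names illustrative): because the client opens the TCP connection and the server's pushes travel back over that same connection, it works behind typical NAT/DSL routers, unlike WsDualHttp, which needs a separate inbound HTTP channel to the client:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class BusMessage
        {
            [DataMember] public string Body { get; set; }
        }

        public interface IClientCallback
        {
            [OperationContract(IsOneWay = true)]
            void Deliver(BusMessage message);   // server -> client push
        }

        [ServiceContract(CallbackContract = typeof(IClientCallback))]
        public interface IMessageBus
        {
            [OperationContract]
            void Subscribe();                   // server captures the callback channel here

            [OperationContract(IsOneWay = true)]
            void Send(BusMessage message);      // client -> server
        }

        // Inside the service implementation, per session:
        // var cb = OperationContext.Current.GetCallbackChannel<IClientCallback>();
        // cb.Deliver(msg);  // pushes messages as they are generated

    One-way operations over a single long-lived TCP session is about as close to the desired "write to a stream as messages are generated" model as WCF gets out of the box.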

    Read the article

  • how is data stored at bit level according to "Endianness" ?

    - by bakra
    I read about Endianness and understood squat... so I wrote this:
    main()
    {
        int k = 0xA5B9BF9F;
        BYTE *b = (BYTE*)&k;    //value at *b is 9F
        b++;                    //value at *b is BF
        b++;                    //value at *b is B9
        b++;                    //value at *b is A5
    }
    k was equal to "A5 B9 BF 9F" and the (byte) pointer "walk" output was "9F BF B9 A5", so I get it: bytes are stored backwards... ok. So now I thought, how is it stored at the BIT level... I mean, is "9F" (1001 1111) stored as "F9" (1111 1001)? So I wrote this:
    int _tmain(int argc, _TCHAR* argv[])
    {
        int k = 0xA5B9BF9F;
        void *ptr = &k;
        bool temp = TRUE;
        cout << "ready or not here I come \n";
        for(int i = 0; i < 32; i++)
        {
            temp = *( (bool*)ptr + i );
            if( temp )  cout << "1 ";
            if( !temp ) cout << "0 ";
            if( i==7 || i==15 || i==23 ) cout << " - ";
        }
    }
    I get some random output; even for numbers like "32" I don't get anything sensible. Why?
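
    For reference, a minimal sketch of what the second program was trying to do. The bug is that a bool* still advances one byte per increment (bits are not individually addressable), so the fix is to shift and mask within each byte:

        #include <iostream>
        using namespace std;

        int main()
        {
            unsigned int k = 0xA5B9BF9F;
            unsigned char *b = (unsigned char*)&k;

            for (int byte = 0; byte < (int)sizeof k; byte++)
            {
                for (int bit = 7; bit >= 0; bit--)   // MSB first within each byte
                    cout << ((b[byte] >> bit) & 1);
                cout << ' ';
            }
            cout << '\n';  // little-endian x86: 10011111 10111111 10111001 10100101
            return 0;
        }

    Within a byte nothing is "backwards": 0x9F really is 1001 1111. Endianness dictates only the order of bytes in memory, not the order of bits inside a byte.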

    Read the article

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, MB) can do. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster? My aging Core2Duo gives me 1.5 GB/s if I use 1 thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously). While 1.5 GB is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
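
    For the measurement half of the question, a minimal C benchmark sketch (POSIX clock_gettime; the buffer size is arbitrary but chosen to dwarf the caches). Results swing with buffer size, NUMA placement and compiler flags, so treat it as a rough probe:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        int main(void)
        {
            size_t size = 256 * 1024 * 1024;   /* 256 MB, far larger than any cache */
            int reps = 10;
            char *src = malloc(size), *dst = malloc(size);
            memset(src, 1, size);              /* fault the pages in before timing */
            memset(dst, 0, size);

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < reps; i++)
                memcpy(dst, src, size);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.2f GB/s\n", (double)size * reps / 1e9 / secs);
            free(src); free(dst);
            return 0;
        }

    Note that a copy moves each byte twice (read plus write), so the raw bus traffic is roughly double the figure printed here.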

    Read the article

  • Umbraco template issues

    - by bomortensen
    Hi fellow Umbraco users, I'm currently building my first Umbraco website, and since I'm completely new to Umbraco I've already run into a problem which I'm sure is pretty straightforward to solve. That said, I'm by no means a beginner when it comes to building sites that run on an (open source) CMS, as I've been using Joomla! since it was called Mambo. Anyway, the site I'm building is here: my site. What I want to do is to have some content in the white box that changes when you mouseover/hover one of the menu items. Also, that content has to stay "active" when you've clicked on a link (i.e. if you click on "Profile" I need to highlight the Profile menu item with the gray color, and the white box's content needs to be what relates to the Profile menu item). How do I go about this? What would be best practice when it comes to showing multiple pieces of content on a site? I've watched the video about multiple Content Place Holders, but I never really got it to work. I can't get a page to display in the NavigationPlaceHolder (the placeholder I put in the white box), but that's because the actual page is Frontpage.aspx and not WhateverIsInTheNavigationPlaceHolder.aspx. If I go to mysite.dk/WhateverIsInTheNavigationPlaceHolder.aspx it shows up fine. What have I missed here? :) Thanks in advance! If my question is not clear in some way, please tell me and I will try to explain it better. All the best, Bo

    Read the article

  • Simplepie - fetch feeds from database

    - by krike
    I want to fetch multiple feeds from the database and in so doing fetch all new content from those feeds. It works, but there is a problem and I have no idea what's causing it. This is the code:
    $feed_sql = mysqli_query($link, "SELECT feed from tutorial_feed WHERE approved=1");
    $feeds = array();
    $i = 0;
    while($feed_r = mysqli_fetch_object($feed_sql)):
        $feeds[$i] .= $feed_r->feed;
        $i++;
    endwhile;
    $feed = new SimplePie($feeds);
    $feed->handle_content_type();
    foreach($feed->get_items(0, 100) as $item):
        echo $item->get_permalink()."";
    endforeach;
    I first get
    Notice: Undefined offset: 0 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 1 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 2 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 3 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 4 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 5 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 6 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 7 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 8 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 9 in I:\wamp\www\cmstut\includes\cron.php on line 22
    Notice: Undefined offset: 10 in I:\wamp\www\cmstut\includes\cron.php on line 22
    and then it starts printing the permalinks to the new content based on the imported feeds. I know an undefined offset means the index does not exist, but I don't get why. Any help would be appreciated.
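
    A hedged reading of those notices: ".=" is a read-modify-write, so the first assignment to each empty slot reads an index that doesn't exist yet, and that read is what PHP complains about (eleven notices, one per feed row). Plain appending avoids the prior read entirely:

        $feeds = array();
        while ($feed_r = mysqli_fetch_object($feed_sql)) {
            $feeds[] = $feed_r->feed;   // append; no manual index, no prior read
        }
        $feed = new SimplePie($feeds);

    The rest of the code can stay as it is; the output was already correct because .= on a missing index still ends up storing the string, just noisily.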

    Read the article

  • SQL Server: Output an XML field as tabular data using a stored procedure

    - by Pawan
    I am using a table with an XML data field to store the audit trails of all other tables in the database. That means the same XML field holds various XML formats. For example, my table has two records with XML data like this:
    1st record:
    <client>
      <name>xyz</name>
      <ssn>432-54-4231</ssn>
    </client>
    2nd record:
    <emp>
      <name>abc</name>
      <sal>5000</sal>
    </emp>
    These are two sample formats and just two records. The table actually has many more XML formats in the same field and many records in each format. Now my problem is that upon query I need these XML formats to be converted into tabular result sets. What are the options for me? It would be a regular task to query this table and generate reports from it. I want to create a stored procedure to which I can pass whether I need to query "<emp>" or "<client>"; the stored procedure should then return tabular data.
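
    A hedged sketch of such a procedure (the table and column names are invented; nodes() and value() are SQL Server 2005+ xml-type methods):

        CREATE PROCEDURE dbo.GetAuditRows @root sysname AS
        BEGIN
            IF @root = 'client'
                SELECT x.n.value('(name/text())[1]', 'varchar(100)') AS name,
                       x.n.value('(ssn/text())[1]',  'varchar(11)')  AS ssn
                FROM   dbo.AuditTrail a
                CROSS APPLY a.XmlData.nodes('/client') AS x(n);
            ELSE IF @root = 'emp'
                SELECT x.n.value('(name/text())[1]', 'varchar(100)') AS name,
                       x.n.value('(sal/text())[1]',  'int')          AS sal
                FROM   dbo.AuditTrail a
                CROSS APPLY a.XmlData.nodes('/emp') AS x(n);
        END

    One branch per known format is crude but honest: because every format has different columns, each needs its own SELECT list, so the procedure grows by one branch per audited table.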

    Read the article

  • Encrypt a hex string in java.

    - by twintwins
    I would like to ask for suggestions about my problem. I need to encrypt a hexadecimal string, and I must not use the built-in functions of Java because they don't work on my server. In short, I have to hard-code an algorithm or find some other means of encrypting the message. Can anyone help me with this? Thanks a lot! Here is the code:
    public Encrypt(SecretKey key, String algorithm)
    {
        try {
            ecipher = Cipher.getInstance(algorithm);
            dcipher = Cipher.getInstance(algorithm);
            ecipher.init(Cipher.ENCRYPT_MODE, key);
            dcipher.init(Cipher.DECRYPT_MODE, key);
        } catch (NoSuchPaddingException e) {
            System.out.println("EXCEPTION: NoSuchPaddingException");
        } catch (NoSuchAlgorithmException e) {
            System.out.println("EXCEPTION: NoSuchAlgorithmException");
        } catch (InvalidKeyException e) {
            System.out.println("EXCEPTION: InvalidKeyException");
        }
    }

    public void useSecretKey(String secretString)
    {
        try {
            SecretKey desKey = KeyGenerator.getInstance("DES").generateKey();
            SecretKey blowfishKey = KeyGenerator.getInstance("Blowfish").generateKey();
            SecretKey desedeKey = KeyGenerator.getInstance("DESede").generateKey();

            Encrypt desEncrypter = new Encrypt(desKey, desKey.getAlgorithm());
            Encrypt blowfishEncrypter = new Encrypt(blowfishKey, blowfishKey.getAlgorithm());
            Encrypt desedeEncrypter = new Encrypt(desedeKey, desedeKey.getAlgorithm());

            desEncrypted = desEncrypter.encrypt(secretString);
            blowfishEncrypted = blowfishEncrypter.encrypt(secretString);
            desedeEncrypted = desedeEncrypter.encrypt(secretString);
        } catch (NoSuchAlgorithmException e) {}
    }
    Those are the methods I used. There is no problem when it runs as an application, but when I deploy it to my server (GlassFish) a NoSuchAlgorithmException occurs.
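
    Before hard-coding a cipher, it may be worth confirming what the server JVM actually offers; NoSuchAlgorithmException usually means the two JVMs have different security provider sets. A small diagnostic sketch:

        import java.security.Provider;
        import java.security.Security;

        public class ListProviders {
            public static void main(String[] args) {
                // Print every Cipher and KeyGenerator the JVM's providers expose
                for (Provider p : Security.getProviders()) {
                    System.out.println(p.getName() + " " + p.getVersion());
                    for (Provider.Service s : p.getServices()) {
                        if (s.getType().equals("Cipher") || s.getType().equals("KeyGenerator")) {
                            System.out.println("  " + s.getType() + "." + s.getAlgorithm());
                        }
                    }
                }
            }
        }

    Running this on the desktop and again inside GlassFish (e.g. from a test servlet) should show which of DES, Blowfish or DESede is missing on the server side.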

    Read the article

  • WCF Callback Contract InvalidOperationException: Collection has been modified

    - by mrlane
    We are using a WCF service with a callback contract. Communication is asynchronous. The service contract is defined as:
    [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(ISessionClient), SessionMode = SessionMode.Allowed)]
    public interface ISessionService
    With a method:
    [OperationContract(IsOneWay = true)]
    void Send(Message message);
    The callback contract is defined as:
    [ServiceContract]
    public interface ISessionClient
    With methods:
    [OperationContract(IsOneWay = true, AsyncPattern = true)]
    IAsyncResult BeginSend(Message message, AsyncCallback callback, object state);
    void EndSend(IAsyncResult result);
    The implementation of BeginSend and EndSend in the callback channel is as follows:
    public void Send(ActionMessage actionMessage)
    {
        Message message = Message.CreateMessage(_messageVersion, CommsSettings.SOAPActionReceive, actionMessage, _messageSerializer);
        lock (LO_backChannel)
        {
            try {
                _backChannel.BeginSend(message, OnSendComplete, null);
            } catch (Exception ex) {
                _hasFaulted = true;
            }
        }
    }

    private void OnSendComplete(IAsyncResult asyncResult)
    {
        lock (LO_backChannel)
        {
            try {
                _backChannel.EndSend(asyncResult);
            } catch (Exception ex) {
                _hasFaulted = true;
            }
        }
    }
    We are getting an InvalidOperationException: "Collection has been modified" on _backChannel.EndSend(asyncResult), seemingly at random, and we are really out of ideas about what is causing it. I understand what the exception means, and that concurrency issues are a common cause of such exceptions (hence the locks), but it really doesn't make any sense to me in this situation. The clients of our service are Silverlight 3.0 clients using PollingDuplexHttpBinding, which is the only binding available for Silverlight. We have been running fine for ages, but recently have been doing a lot of data binding, and this is when the issues started. Any help with this is appreciated, as I am personally stumped at this time.

    Read the article

  • ADO/SQL Server: What is the error code for "timeout expired"?

    - by Ian Boyd
    i'm trying to trap a "timeout expired" error from ADO. When a timeout happens, ADO returns:
    Number: 0x80040E31 (DB_E_ABORTLIMITREACHED in oledberr.h)
    SQLState: HYT00
    NativeError: 0
    The NativeError of zero makes sense, since the timeout is not a function of the database engine (i.e. SQL Server), but of ADO's internal timeout mechanism. The Number (i.e. the COM hresult) looks useful, but the definition of DB_E_ABORTLIMITREACHED in oledberr.h says: "Execution stopped because a resource limit was reached. No results were returned." This error could apply to things besides "timeout expired" (some potentially server-side), such as a governor that limits CPU usage, I/O reads/writes, or network bandwidth and stops a query. The final useful piece is SQLState, which is a database-independent error code system. Unfortunately the only reference for SQLState error codes i can find has no mention of HYT00. What to do? Note: i can't trust 0x80040E31 (DB_E_ABORTLIMITREACHED) to mean "timeout expired", any more than i could trust 0x80004005 (E_UNSPECIFIED_ERROR) to mean "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim". My pseudo-question becomes: does anyone have documentation on what the SQLState "HYT00" means? And my real question still remains: how can i specifically trap an ADO "timeout expired" exception thrown by ADO? Gotta love the questions where the developer is trying to "do the right thing", but nobody knows how to do the right thing. Also gotta love how googling for DB_E_ABORTLIMITREACHED puts this question at #9, with MSDN nowhere to be found. Update 3: From the OLE DB ICommand::Execute reference: "DB_E_ABORTLIMITREACHED Execution has been aborted because a resource limit has been reached. For example, a query timed out. No results have been returned." "For example", meaning not an exhaustive list.
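
    Lacking better documentation, one hedged approach (sketched in VB6-flavored ADO; adapt to your language) is to treat the pair SQLState = "HYT00" plus NativeError = 0 as the client-side timeout signature, since each half alone is ambiguous:

        On Error Resume Next
        conn.Execute sql
        If conn.Errors.Count > 0 Then
            Dim e As ADODB.Error
            Set e = conn.Errors(0)
            If e.SQLState = "HYT00" And e.NativeError = 0 Then
                ' ADO's own command timeout expired (not a server-side limit)
            End If
        End If

    This matches the observation in the question: HYT00 identifies the timeout condition, and NativeError = 0 rules out the server-side governors that share the same HRESULT.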

    Read the article

  • Saving a record in Authlogic table

    - by denniss
    I am using Authlogic to do my authentication. The current model that serves as the authentication model is the User model. I want to add a "belongs to" relationship to User, which means that I need a foreign key in the users table. Say the foreign key is called car_id in the User model. However, for some reason, when I do
    u = User.find(1)
    u.car_id = 1
    u.save!
    I get
    ActiveRecord::RecordInvalid: Validation failed: Password can't be blank
    My guess is that this has something to do with Authlogic. I do not have a validation on password in the User model. This is the migration for the users table:
    def self.up
      create_table :users do |t|
        t.string :email
        t.string :first_name
        t.string :last_name
        t.string :crypted_password
        t.string :password_salt
        t.string :persistence_token
        t.string :single_access_token
        t.string :perishable_token
        t.integer :login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
        t.integer :failed_login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
        t.datetime :last_request_at # optional, see Authlogic::Session::MagicColumns
        t.datetime :current_login_at # optional, see Authlogic::Session::MagicColumns
        t.datetime :last_login_at # optional, see Authlogic::Session::MagicColumns
        t.string :current_login_ip # optional, see Authlogic::Session::MagicColumns
        t.string :last_login_ip # optional, see Authlogic::Session::MagicColumns
        t.timestamps
      end
    end
    And later I added the car_id column to it:
    def self.up
      add_column :users, :car_id, :integer
    end
    Is there any way for me to turn off this validation?
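
    A hedged sketch (Authlogic 2.x configuration; verify the option name against your version): the failing validation is one of Authlogic's built-in password validations, which can be switched off through the acts_as_authentic configuration block:

        class User < ActiveRecord::Base
          acts_as_authentic do |c|
            # Disable Authlogic's automatic password-field validations entirely:
            c.validate_password_field = false
          end
        end

    If you would rather keep validation for genuine password changes, it may instead be worth checking why this record trips it: Authlogic's ignore_blank_passwords option (on by default) normally lets you save a user without touching the password at all.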

    Read the article

  • Question about the Cloneable interface and the exception that should be thrown

    - by Nazgulled
    Hi, the Java documentation says:
    "A class implements the Cloneable interface to indicate to the Object.clone() method that it is legal for that method to make a field-for-field copy of instances of that class. Invoking Object's clone method on an instance that does not implement the Cloneable interface results in the exception CloneNotSupportedException being thrown. By convention, classes that implement this interface should override Object.clone (which is protected) with a public method. See Object.clone() for details on overriding this method. Note that this interface does not contain the clone method. Therefore, it is not possible to clone an object merely by virtue of the fact that it implements this interface. Even if the clone method is invoked reflectively, there is no guarantee that it will succeed."
    And I have this UserProfile class:
    public class UserProfile implements Cloneable {
        private String name;
        private int ssn;
        private String address;

        public UserProfile(String name, int ssn, String address) {
            this.name = name;
            this.ssn = ssn;
            this.address = address;
        }

        public UserProfile(UserProfile user) {
            this.name = user.getName();
            this.ssn = user.getSSN();
            this.address = user.getAddress();
        }

        // get methods here...

        @Override
        public UserProfile clone() {
            return new UserProfile(this);
        }
    }
    And for testing purposes, I do this in main():
    UserProfile up1 = new UserProfile("User", 123, "Street");
    UserProfile up2 = up1.clone();
    So far, no problems compiling/running. Now, per my understanding of the documentation, removing implements Cloneable from the UserProfile class should make the up1.clone() call throw an exception, but it doesn't. I've read around here that the Cloneable interface is broken, but I don't really know what that means. Am I missing something?
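
    A short illustration of why no exception appears: CloneNotSupportedException comes only from Object.clone() itself, and the clone() above never calls it (it delegates to the copy constructor). A variant that really does depend on the Cloneable marker:

        @Override
        public UserProfile clone() {
            try {
                // Object.clone() makes the field-for-field copy, and throws
                // CloneNotSupportedException the moment 'implements Cloneable'
                // is removed from the class declaration.
                return (UserProfile) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e);
            }
        }

    This is also what "Cloneable is broken" refers to: the marker interface changes the behavior of a protected method on Object instead of declaring clone() itself, so copy-constructor-based code compiles and runs whether or not the marker is present.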

    Read the article

  • Is a confirmation screen necessary for an order form?

    - by abeger
    In a discussion about how to streamline an order form on our site, the idea of eliminating the confirmation screen came up. So, instead of filling out the form, clicking "Submit", seeing a summary on a confirmation screen and clicking "Confirm", the user would simply fill out the form, hit "Submit", and the order's done. The theory is that fewer clicks and fewer screens mean less time to order, and therefore the ordering experience is easier. The opposing opinion says that without the confirmation screen, user error increases and people just end up canceling/changing orders after the fact. I'm looking for more input from the SO community. Have you ever done this? How has it worked out, compared to a traditional confirmation screen setup? Are there examples of a true "one click and done" setup on the web (does Amazon's 1-click have a confirmation screen? I've never been courageous enough to try it)? EDIT: Just to clarify, when I say "confirmation screen", I mean a second step where the customer reviews the order before placing it. Even if we did do away with it, the user would still receive a message saying "your order has been placed".

    Read the article
