Search Results

Search found 4650 results on 186 pages for 'perfect hash'.

Page 114/186 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Follow up viewDidUnload vs. dealloc question...

    - by entaroadun
    Clarification question as a follow up to: http://stackoverflow.com/questions/2261972/what-exactly-must-i-do-in-viewdidunload http://stackoverflow.com/questions/1158788/when-should-i-release-objects-in-voidviewdidunload-rather-than-in-dealloc So let's say there's a low memory error, and the view is hidden, and viewDidUnload is called. We do the release and nil dance. Later the entire view stack is not needed, so dealloc is called. Since I already have the release and nil stuff in viewDidUnload, I don't have it in dealloc. Perfect. But if there's no low memory error, viewDidUnload is never called. dealloc is called and since I don't have the release and nil stuff, there's a memory leak. In other words, will dealloc ever be called without viewDidUnload being called first? And the practical follow up to that is, if I alloc and set something in viewDidLoad, and I release it and set to nil in viewDidUnload, do I leave it out of dealloc, or do I do a defensive nil check in dealloc and release/nil it if it's not nil?

    Read the article

  • Ruby on Rails Decryption

    - by user812120
    The following function works perfectly in PHP. How can it be translated to Ruby on Rails? Please note that both privateKey and iv are 32 characters long. mcrypt_decrypt(MCRYPT_RIJNDAEL_256, $privateKey, base64_decode($enc), MCRYPT_MODE_CBC, $iv) I tried the following in Ruby but got a bad decrypt error. cipher = OpenSSL::Cipher.new('aes-256-cbc') cipher.decrypt cipher.key = privateKey cipher.iv = iv decrypted = '' << cipher.update(encrypted) << cipher.final

    Read the article

  • svn over HTTP proxy

    - by small_jam
    Hi all. I'm on a laptop (Ubuntu) on a network that uses an HTTP proxy (only HTTP connections are allowed). When I use svn up with a URL like 'http://.....' everything is cool (the Google Chrome repository works perfectly), but right now I need to svn up from a server with an 'svn://....' URL and I get connection refused. I've set the proxy configuration in /etc/subversion/servers but it doesn't help. Does anyone have an opinion/solution? Thanks. Anton.

    Read the article

  • unwanted \ character

    - by marc-andre menard
    PHP code: <?php echo json_encode(glob("photos-".$_GET["folder"].'/*.jpg')); ?> It returns: ["photos-animaux\/ani-01.jpg","photos-animaux\/ani-02.jpg","photos-animaux\/ani-02b.jpg","photos-animaux\/ani-03.jpg","photos-animaux\/ani-04.jpg","photos-animaux\/ani-05.jpg","photos-animaux\/ani-06.jpg","photos-animaux\/ani-07.jpg","photos-animaux\/ani-08.jpg","photos-animaux\/ani-09.jpg","photos-animaux\/ani-10.jpg","photos-animaux\/ani-11.jpg","photos-animaux\/ani-12.jpg","photos-animaux\/ani-13.jpg","photos-animaux\/ani-14.jpg"] which is ALMOST perfect, except for the \ character... where does it come from? No idea. Help! Here is the jQuery code that calls it: $.get( 'photolister.php', {'folder' : $(this).attr('href')}, function(data){startSlideshow(data);console.log(data);} );

    Read the article

  • Chrome history problem

    - by Parhs
    $("#table_exams tbody tr").click(function (event) { window.location.href="#" +$(this).attr("exam_ID"); window.location.href="/medilab/prototypes/exams/edit?examId=" + $(this).attr("exam_ID") +"&referer=" + referer; row_select(this); }); $(document).keypress(function (event) { if(event.keyCode==13) $(row_selected).trigger("click"); }); I have a little problem with this only in chrome...When user goes back chrome ignores the last href hash that my script added..but when i do a doubleclick its ok... IE and Firefox work great...

    Read the article

  • Understanding memory and cpu speed

    - by tipu
    Firstly, I am working on a Windows XP 64 machine with 4 GB of RAM and a 2.29 GHz quad-core CPU. I am indexing 220,000 lines of text that are more or less the same length. These are divided into 15 equally sized files. File 1/15 takes 1 minute to index. As the script indexes more files, it seems to take much longer, with file 15/15 taking 40 minutes. My understanding is that the more I put in memory, the faster the script is. The dictionary is indexed in a hash, so fetch operations should be O(1). I am not sure where the script would be hanging the CPU. I have the script here.

    Read the article

  • smarter character replacement using ruby gsub and regexp

    - by agriciuc
    Hi guys! I'm trying to create permalink-like behavior for some article titles and I don't want to add a new DB field for the permalink. So I decided to write a helper that will convert my article title from: "O "focoasa" a pornit cruciada, împotriva barbatilor zgârciti" to "o-focoasa-a-pornit-cruciada-impotriva-barbatilor-zgarciti". While I figured out how to replace spaces with hyphens and remove other special characters (other than -) using: title.gsub(/\s/, "-").gsub(/[^\w-]/, '').downcase I am wondering if there is any other way to replace a character with a specific other character from only one .gsub method call, so I won't have to chain title.gsub("a", "a") methods for all the UTF-8 special characters of my localization. I was thinking of building a hash with all the special characters and their counterparts, but I haven't figured out yet how to use variables with regexps. What I was looking for is something like: title.gsub(/\s/, "-").gsub(*replace character goes here*).gsub(/[^\w-]/, '').downcase Thanks!
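
    The mapping-table idea the question is reaching for can be sketched as follows; the sketch is in Python rather than Ruby, and the character map and helper name are illustrative only:

      import re

      # Illustrative transliteration table; extend it with the locale's characters.
      CHAR_MAP = {"â": "a", "î": "i", "ă": "a", "ş": "s", "ţ": "t"}

      def permalink(title):
          # Replace every mapped character in a single pass via a callback,
          # then apply the same hyphen/strip/downcase steps as the original chain.
          text = re.sub("|".join(map(re.escape, CHAR_MAP)),
                        lambda m: CHAR_MAP[m.group(0)], title)
          text = re.sub(r"\s+", "-", text)
          return re.sub(r"[^\w-]", "", text).lower()

      print(permalink('O "focoasa" a pornit cruciada, împotriva barbatilor zgârciti'))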

    Read the article

  • Upload/Download images to FTP without bothering the user

    - by Dan B
    Hi, I know a lot of posts have been made in regards to FTP, but none have led me to what I need. I'm trying to upload a picture to a server (currently attempting FTP), but do it without requiring the user to be involved. I want to be able to seamlessly upload/download the image when a certain user action occurs, but I don't want to use a third-party app like AndFTP. The idea is that a user will upload a picture, and then another user will be able to grab that picture based on which user put it up. No user will know where it's going or where it came from, nor will they navigate the FTP. Alternatively, does anyone have thoughts on a better way to do that? I thought of using the imgur API, but it can't be used commercially. It would, however, be perfect for my purposes. Is there a similar open-source alternative? Any help is greatly appreciated. Dan
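
    The general flow being described — connect with credentials the user never sees, push or pull the file, disconnect — looks roughly like this sketch using Python's standard ftplib (the host, credentials, and paths are placeholders):

      from ftplib import FTP

      # Placeholder host and credentials; in a real app these would live in
      # the application's configuration and never be shown to the user.
      HOST, USER, PASSWORD = "ftp.example.com", "appuser", "app-password"

      def upload_image(local_path, remote_name):
          ftp = FTP(HOST)
          ftp.login(USER, PASSWORD)
          with open(local_path, "rb") as f:
              ftp.storbinary("STOR " + remote_name, f)
          ftp.quit()

      def download_image(remote_name, local_path):
          ftp = FTP(HOST)
          ftp.login(USER, PASSWORD)
          with open(local_path, "wb") as f:
              ftp.retrbinary("RETR " + remote_name, f.write)
          ftp.quit()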

    Read the article

  • What does a modern, standard Microsoft-based technology stack look like?

    - by Sean Owen
    Let's say I asked Microsoft to describe the perfect, modern, Microsoft-based technology stack to power a standard e-commerce web site, which perhaps has a simple 2-tier web/database architecture. What would it be like? Yes, I'm just looking for a list of product / technology names. For example, in the J2EE world, I might describe a stack that includes: J2EE 6 standard, JavaServer Faces, GlassFish 3, MySQL 5.1.x. I'm guessing this stack includes some combination of .NET, SQL Server, ASP.NET, IIS, etc. but I am not familiar with this world. Looking for ideas on the equivalent in Microsoft-land.

    Read the article

  • PHP Simple modification/correction

    - by Adrian M.
    Hello, I got this code from someone; it's almost perfect for creating a dynamic breadcrumb, but there is just a little glitch because it echoes two dividers before the breadcrumb: $crumbs = explode("/",$_SERVER["REQUEST_URI"]); foreach($crumbs as $crumb){ echo ucfirst(str_replace(array(".php","_"),array(""," "),'>' . $crumb)); } It echoes: "contentcommonfile"; what I want it to look like is "contentcommon1". I would also deeply appreciate it if someone could tell me how I can add links for all the items in the array except the last one (the file). Thank you so much everybody, this website really helped me a lot to learn PHP by example!
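
    The two issues described — the leading empty element that splitting a path such as "/content/common/file.php" produces, and linking every item except the last — can be sketched language-neutrally; here is a rough equivalent in Python (the path, divider, and label rules are illustrative):

      def breadcrumb(request_uri):
          # Splitting "/content/common/file.php" on "/" yields a leading empty
          # string; filtering it out avoids the extra divider at the front.
          crumbs = [c for c in request_uri.split("/") if c]
          parts, path = [], ""
          for i, crumb in enumerate(crumbs):
              path += "/" + crumb
              label = crumb.replace(".php", "").replace("_", " ").capitalize()
              if i < len(crumbs) - 1:
                  parts.append('<a href="%s">%s</a>' % (path, label))  # link all but the last
              else:
                  parts.append(label)                                  # last item stays plain text
          return " > ".join(parts)

      print(breadcrumb("/content/common/file.php"))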

    Read the article

  • git-svn on subset of large svn repo

    - by an146
    repo layout: a/1 a/2 a/3 ... b/1 b/2 ... c/1 c/2 ... git-svn works perfectly for me if I work on one SVN repo subdir. But right now I'm facing the need to work on several subdirs (like a/1, a/2, and b/1), and there's a lot of other stuff in the repo besides them. I've managed to write a regexp for this, but git-svn with --ignore-paths seems to check each file's name against this regexp, instead of skipping entire folders, so it's too slow. /* Probably I should file a bug report about this */ So -- any ideas on handling this? If some Mercurial SVN agent can do selective clones, that's OK too, but I'd rather stick with git. My other idea was some selective SVN proxy, but I haven't succeeded in googling anything like that. Thanks!

    Read the article

  • Automatic counter in Ruby for each?

    - by yar
    I know you Ruby people will laugh at my bad Ruby code: i=0 for blah in blahs puts i.to_s + " " + blah i+=1 end I want to use a for-each and a counter... is there a better way to do it? Note: I don't know if blahs is an array or a hash, but having to do blahs[i] wouldn't make it much sexier. Also, I'd like to know how to write i++ in Ruby. Edit: Technically, Matt's and Squeegy's answers came in first, but I'm giving best answer to paradoja to spread the points around a bit on SO. Also, his answer had the note about versions, which is still relevant (as long as my Ubuntu 8.04 is using Ruby 1.8.6). Edit: I should've used puts "#{i} #{blah}", which is a lot more succinct.

    Read the article

  • Sqlite: Selecting records spread over total records

    - by Martin
    I have a SQL/SQLite question. I need to write a query that selects some values from an SQLite database table. I always want at most 20 records returned. If more than 20 records match, I need to select 20 records that are spread over the full result set. It is also important that I always select the first and last value from the table when sorted on the date. I know how to accomplish this in code, but it would be perfect to have a SQLite query that can do the same. The query I'm using now is really simple and looks like this: "SELECT value,date,valueid FROM tblvalue WHERE tblvalue.deleted=0 ORDER BY DATE(date)" Hope I explained what I need, thanks for your help!
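
    A minimal sketch of the "spread" step done in application code with Python's sqlite3 module, using the query from the question (the even-spacing rule is an assumption about what "spread over the total records" should mean):

      import sqlite3

      MAX_ROWS = 20

      def spread_values(db_path):
          con = sqlite3.connect(db_path)
          rows = con.execute(
              "SELECT value, date, valueid FROM tblvalue "
              "WHERE tblvalue.deleted = 0 ORDER BY DATE(date)"
          ).fetchall()
          con.close()
          if len(rows) <= MAX_ROWS:
              return rows
          # Pick MAX_ROWS indices spread evenly over the result set,
          # always keeping the first and the last row.
          step = (len(rows) - 1) / (MAX_ROWS - 1)
          indices = sorted({round(i * step) for i in range(MAX_ROWS)})
          return [rows[i] for i in indices]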

    Read the article

  • Symfony routing with an API Web Service

    - by fesja
    Hi, I'm finishing the API of our web service. Now I'm thinking about how to make the route changes, so that if we decide to make a new version we don't break the first API. Right now: url: /api/:action param: { module: api, action: :action } requirements: sf_format: (?:xml|json) What I've thought: url: /api/v1/:module/:action param: { module: api1, action: :action } requirements: sf_format: (?:xml|json) url: /api/v2/:module/:action param: { module: api2, action: :action } requirements: sf_format: (?:xml|json) That's easy, but the perfect solution would be to have the following kind of route: # Automatically redirects to one module or another url: /api/v:version/:module/:action param: { module: api:version, action: :action } requirements: sf_format: (?:xml|json) Any ideas on how to do it? What do you recommend us to do? Thanks!

    Read the article

  • Good way to find duplicate files?

    - by OverTheRainbow
    Hello, I don't know enough about VB.NET (2008, Express Edition) yet, so I wanted to ask if there is a better way to find files with different names but the same contents, i.e. duplicates. In the following code, I use GetFiles() to retrieve all the files in a given directory, and for each file, use MD5 to hash its contents and check if this value already lives in a dictionary: if yes, it's a duplicate and I'll delete it; if not, I add the filename/hash value to the dictionary for later: 'Get all files from directory Dim currfile As String For Each currfile In Directory.GetFiles("C:\MyFiles\", "File.*") 'Check if hashing already found as value, ie. duplicate If StoreItem.ContainsValue(ReadFileMD5(currfile)) Then 'Delete duplicate 'This hashing not yet found in dictionary -> add it Else StoreItem.Add(currfile, ReadFileMD5(currfile)) End If Next Is this a good way to solve the issue of finding duplicates, or is there a better way I should know about? Thank you.
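
    For comparison, the same technique keyed the other way round — the hash as the dictionary key, so the lookup stays O(1) and each file is hashed only once — as a rough sketch in Python (the directory pattern is a placeholder):

      import glob
      import hashlib
      import os

      def remove_duplicates(pattern):
          seen = {}  # MD5 digest -> first file seen with that content
          for path in glob.glob(pattern):
              with open(path, "rb") as f:
                  digest = hashlib.md5(f.read()).hexdigest()
              if digest in seen:
                  os.remove(path)      # same contents as an earlier file
              else:
                  seen[digest] = path  # remember the first occurrence
          return seen

      remove_duplicates(r"C:\MyFiles\File.*")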

    Read the article

  • What is the worst programming mistake you have made?

    - by George Edison
    Most of us are not perfect. (Well, except Jon Skeet) Have you made a terrible mistake that you would like to share? The idea is that we could all learn from our mistakes, and by collecting them together here, we can avoid some common ones and discover some not-so-common ones we may have overlooked. Oh, and this question is CW, of course. Edit: This question is different from http://stackoverflow.com/questions/1928002/what-is-the-worst-programming-mistake-you-have-ever-seen because we are sharing our own mistakes. Edit again: And this one http://stackoverflow.com/questions/130965/what-is-the-worst-code-youve-ever-written is different too - it asks for code. My question does not have that restriction!

    Read the article

  • NHibernate, the Parallel Framework, and SQL Server

    - by andy
    Hey guys, we have a loop that: 1. Loops over several thousand XML files. Altogether we're parsing millions of "user" nodes. 2. In each iteration we parse a "user" XML and do custom deserialization. 3. Finally, in each iteration, we send our object to NHibernate for saving. We use: .SaveOrUpdateAndFlush(user); This is a lengthy process, and we thought it would be a perfect candidate for testing out the .NET 4.0 Parallel libraries. So we wrapped the loop in a: Parallel.ForEach(); After doing this, we started getting "random" Timeout Exceptions from SQL Server, and finally, after leaving it running all night, unhandled OutOfMemory exceptions. I haven't done deep debugging on this yet, but what do you guys think? Is this simply a limitation of SQL Server, or could it be our NHibernate setup, or what? Cheers, andy

    Read the article

  • FxCop CA2000 Warning in UserControls

    - by esjr
    Running FxCop on a WebProject that contains a UserControl will result in a CA2000 warning (Call System.IDisposable.Dispose on object) for every ServerControl (Label, TextBox, ...) in that UserControl. I understand why this would happen. Replacing the 'offending' ServerControls with a PlaceHolder and then adding the Controls in code (Using...End Using) might be a way around that, but it is not always an option. But if they are not 'kosher', why have ServerControls you can drop in your ascx/aspx in the first place? Am I missing something? If, like in my case, you inherit a sizeable collection of fairly complex UserControls, do I now add every 'offending' Control to the GlobalSuppressions file (that's a lot of mind-numbing right-clicking)? I do not want to suppress all CA2000 warnings, since it makes perfect sense to fix them, but not in the case of ServerControls in UserControls.

    Read the article

  • ArgumentOutOfRangeException at MySql execution. (MySqlConnector .NET)

    - by Lazlo
    I am getting this exception from a MySqlCommand.ExecuteNonQuery(): Index and length must refer to a location within the string. Parameter name: length The command text is as follows: INSERT INTO accounts (username, password, salt, pin, banned, staff, logged_in, points_a, points_b, points_c, birthday) VALUES ('adminb', 'aea785fbcac7f870769d30226ad55b1aab850fb0979ee00481a87bc846744a646a649d30bca5474b59e4292095c74fa47ae6b9b3a856beef332ff873474cc0d3', 'cb162ef55ff7c58c7cb9f2a580928679', '', '0, '0', '0', '0', '0', '0', '2010-04-18') Sorry for the long string, it is a SHA512 hash. I tried manually adding this data in the table from MySQL GUI tools, and it worked perfectly. I see no "out of range" problem in these strings. Does anybody see something wrong?

    Read the article

  • Space-based architecture?

    - by rcampbell
    One chapter in The Pragmatic Programmer recommends looking at a blackboard/space-based architecture plus a rules engine as a more flexible alternative to a traditional workflow system. The project I'm working on currently uses a workflow engine, but I'd like to evaluate alternatives. I really feel like an SBA would be a better solution to our business problems, but I'm worried about a total lack of community support/user base/vendors/options. JavaSpaces is dead, and the Jini spin-off Apache River seems to be on life support. SemiSpace looks perfect, but it's a one-man show. The only viable solution seems to be GigaSpaces. I'd like to hear your thoughts on space-based architecture and any experiences you've had with real-world implementations.

    Read the article

  • dictionary of lists of dictionaries in python

    - by Andy
    I'm a Perl scripter working in Python and need to know how to do the following Perl in Python: $Hash{$key1}[$index_value]{$key2} = $value; I have seen the Stack Overflow question here: List of dictionaries, in a dictionary - in Python. I still don't understand what self.rules is doing or whether it works for my situation. My data will be coming from files, and I will be using regexes to capture to temporary variables until the data is ready to store in the data structure. In case you need to ask: the order related to $index_value is important and I would like it maintained as an integer. Any suggestions are appreciated, or if you think I need to rethink data structures with Python, that would be helpful too.
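
    A minimal sketch of the equivalent structure using collections.defaultdict — an outer dict of lists whose elements are themselves dicts (the keys and values below are placeholders; growing the list on demand keeps $index_value as a real integer index):

      from collections import defaultdict

      # Perl: $Hash{$key1}[$index_value]{$key2} = $value;
      data = defaultdict(list)

      def store(key1, index_value, key2, value):
          row = data[key1]
          # Grow the list so that index_value is a valid integer index.
          while len(row) <= index_value:
              row.append({})
          row[index_value][key2] = value

      store("chr1", 3, "score", 42)
      print(data["chr1"][3]["score"])   # -> 42
      print(data["chr1"][0])            # earlier slots exist as empty dicts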

    Read the article

  • How to walk through two files simultaneously in Perl?

    - by Alex Reynolds
    I have two text files that contain columnar data of the variety position-value. Here is an example of the first file (file A): 100 1 101 1 102 0 103 2 104 1 ... Here is an example of the second file (B): 20 0 21 0 ... 100 2 101 1 192 3 193 1 ... Instead of reading one of the two files into a hash table, which is prohibitive due to memory constraints, what I would like to do is walk through the two files simultaneously, in a stepwise fashion. What this means is that I would like to stream through the lines of A and B and compare position values. If the two positions are equal, then I perform a calculation on the values associated with that position. Otherwise, if the positions are not equal, I move through the lines of file A or file B until the positions are equal (when I again perform my calculation) or I reach the EOF of both files. Is there a way to do this in Perl?
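
    The stepwise walk being described is a classic two-pointer merge; here is a rough sketch of it in Python rather than Perl (the file names and the calculation are placeholders):

      def merge_walk(path_a, path_b, calc):
          # Walk both position-sorted files in lockstep without loading either
          # into memory; call calc(pos, value_a, value_b) when positions match.
          with open(path_a) as fa, open(path_b) as fb:
              line_a, line_b = fa.readline(), fb.readline()
              while line_a and line_b:
                  pos_a, val_a = line_a.split()
                  pos_b, val_b = line_b.split()
                  pos_a, pos_b = int(pos_a), int(pos_b)
                  if pos_a == pos_b:
                      calc(pos_a, int(val_a), int(val_b))
                      line_a, line_b = fa.readline(), fb.readline()
                  elif pos_a < pos_b:
                      line_a = fa.readline()   # advance A until it catches up
                  else:
                      line_b = fb.readline()   # advance B until it catches up

      merge_walk("fileA.txt", "fileB.txt", lambda p, a, b: print(p, a * b))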

    Read the article

  • Reading and parsing email from Gmail using C#, C++ or Python

    - by jpnavarini
    I have to build a Windows application that, from time to time, accesses a Gmail account and checks if there is a new email. In case there is, it must read the email body and subject (a simple text email, without images or attachments). Please do not use paid libs, and in case any other libs are used, give the download path. And I need the email body and subject only. So if the long and complex message that comes from Gmail could be parsed into only two strings containing the subject and the body, it would be perfect. Finally, I only have to get the new messages that arrived since the last execution. So the read messages could be marked as "read" and only the new ones (marked as "new") are considered. The code can be written in Python or C++, but I prefer it in C#. Thank you for the help.
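
    Since Python is one of the accepted languages, here is a minimal sketch using the standard imaplib and email modules (the credentials are placeholders; searching for UNSEEN messages covers the "only new since the last run" requirement, because fetching a message marks it as read):

      import email
      import imaplib

      def fetch_new_messages(user, password):
          imap = imaplib.IMAP4_SSL("imap.gmail.com")
          imap.login(user, password)
          imap.select("INBOX")
          _, data = imap.search(None, "UNSEEN")   # only messages not yet read
          messages = []
          for num in data[0].split():
              _, msg_data = imap.fetch(num, "(RFC822)")
              msg = email.message_from_bytes(msg_data[0][1])
              if msg.is_multipart():
                  body = ""   # question says plain-text mail only
              else:
                  body = msg.get_payload(decode=True).decode(errors="replace")
              messages.append((msg["Subject"], body))
          imap.close()
          imap.logout()
          return messages

      for subject, body in fetch_new_messages("user@gmail.com", "app-password"):
          print(subject, body[:80])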

    Read the article

  • Javascript - Concatenate Multiple NodeLists Together

    - by Emtucifor
    I was originally asking for an elegant way to simulate the Array.concat() functionality in IE or older browsers, because it seemed that concat was not supported. Only, of course, it is, and the reason the object didn't support it is that it wasn't an array. Oops! getElementsByTagName returns a NodeList, not an array. The real question, then, is: what's a good way to get a single list of all the form elements in a document (input, select, textarea, button) so I can loop through them? An array isn't required... a single NodeList would be perfect, too. Note that I'm using IE6, as this is for a corporate intranet (soon IE8, though).

    Read the article

  • Are these tables too big for SQL Server or Oracle

    - by Jeffrey Cameron
    Hey all, I'm not much of a database guru, so I would like some advice. Background: We have 4 tables that are currently stored in Sybase IQ. We don't currently have any choice over this; we're basically stuck with what someone else decided for us. Sybase IQ is a column-oriented database that is perfect for a data warehouse. Unfortunately, my project needs to do a lot of transactional updating (we're more of an operational database), so I'm looking for more mainstream alternatives. Question: Given these tables' dimensions, would anyone consider SQL Server or Oracle to be a viable alternative? Table 1: 172 columns * 32 million rows. Table 2: 453 columns * 7 million rows. Table 3: 112 columns * 13 million rows. Table 4: 147 columns * 2.5 million rows. Given the size of the data, what are the things I should be concerned about in terms of database choice, server configuration, memory, platform, etc.?

    Read the article
