Search Results

Search found 144 results on 6 pages for 'uniqueness'.

Page 4 of 6

  • Compact representation of GUID/UUID?

    - by chakrit
    I need to generate a GUID and save it via a string representation. The string representation should be as short as possible, as it will be used as part of an already-long URL string. Right now, instead of using the normal abcd-efgh-... representation, I base64-encode the raw bytes, which results in a somewhat shorter string. But is it possible to make it even shorter? I'm OK with losing some degree of uniqueness and keeping a counter, but scanning all existing keys is not an option. Suggestions?
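
    For reference, a minimal Java sketch of the base64 approach described above (the class and method names here are illustrative, not from the question):

        import java.nio.ByteBuffer;
        import java.util.Base64;
        import java.util.UUID;

        public class CompactGuid {
            // Encode a UUID's 16 raw bytes as 22 URL-safe Base64 characters,
            // versus 36 characters for the canonical dashed form.
            public static String compact(UUID uuid) {
                ByteBuffer buf = ByteBuffer.allocate(16);
                buf.putLong(uuid.getMostSignificantBits());
                buf.putLong(uuid.getLeastSignificantBits());
                return Base64.getUrlEncoder().withoutPadding()
                             .encodeToString(buf.array());
            }

            public static void main(String[] args) {
                System.out.println(compact(UUID.randomUUID()));
            }
        }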

    Read the article

  • How are .NET 4 GUIDs generated?

    - by mafutrct
    I am aware of the multitude of questions here, as well as Raymond's excellent (as usual) post. However, since the algorithm for creating GUIDs was apparently changed, I found it hard to get my hands on any up-to-date information. MSDN seems to try to provide as little information as possible. What is known about how GUIDs are generated in .NET 4? What was changed, and how does it affect the security ("randomness") and integrity ("uniqueness")? One specific aspect I'm interested in: in v1, it seems all but impossible to generate the same GUID on a single machine twice, since a timestamp and a counter were involved. In v4, this is no longer the case (I was told), so the chance of getting the same GUID on a single machine has ... increased?

    Read the article

  • MySQL unique clustered constraint not constraining as expected

    - by igor
    I'm creating a table with (this is JDBC SQL):

        CREATE TABLE movies (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name CHAR(255) NOT NULL,
            year INT NOT NULL,
            inyear CHAR(10),
            CONSTRAINT UNIQUE CLUSTERED (name, year, inyear)
        );

    This creates a MySQL table with a clustered index whose "index kind" is "unique", spanning the three clustered columns. However, once I dump my data (without any exceptions thrown), I see that the uniqueness constraint has failed:

        SELECT * FROM movies
        WHERE name = 'Flawless' AND year = 2007 AND inyear IS NULL;

    gives:

        id, name, year, inyear
        162169, 'Flawless', 2007, NULL
        162170, 'Flawless', 2007, NULL

    Does anyone know what I'm doing wrong here?

    Read the article

  • Combining lists but getting unique members

    - by MC
    I have a bit of a special requirement when combining lists. I will try to illustrate with an example. Let's say I'm working with two lists of GamePlayer objects. GamePlayer has a property called LastGamePlayed, and a unique GamePlayer is identified through the GamePlayer.ID property. Now I'd like to combine listA and listB into one list, and if a given player is present in both lists I'd like to keep the value from listA. I can't just combine the lists and use a comparer, because my uniqueness is based on ID, and if my comparer checks ID I have no control over whether it picks the element from listA or listB. I need something like:

        for each player in listB {
            if not listA.Contains(player) {
                listFinal.Add(player)
            }
        }

    However, is there a more optimal way to do this than searching listA for each element in listB?
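
    One way to avoid the repeated linear search is a single pass through a map keyed on ID. A hedged Java sketch (GamePlayer, getID(), and the integer key type are assumptions based on the question, not real API):

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class CombineLists {
            // Combine in O(|A| + |B|): listB goes in first, then listA
            // overwrites on ID collisions, so listA's values win.
            static List<GamePlayer> combine(List<GamePlayer> listA,
                                            List<GamePlayer> listB) {
                Map<Integer, GamePlayer> byId = new LinkedHashMap<>();
                for (GamePlayer p : listB) byId.put(p.getID(), p);
                for (GamePlayer p : listA) byId.put(p.getID(), p);
                return new ArrayList<>(byId.values());
            }
        }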

    Read the article

  • CodeIgniter form validation problem

    - by ben robinson
    Please please please can someone help me.

        $this->load->library('form_validation');
        $this->load->helper('cookie');
        $data = array();
        if ($_POST) {
            // Set validation rules, including additional validation for uniqueness
            $this->form_validation->set_rules('yourname', 'Your Name', 'trim|required');
            $this->form_validation->set_rules('youremail', 'Your Email', 'trim|required|valid_email');
            $this->form_validation->set_rules('friendname', 'Friends Name', 'trim|required');
            $this->form_validation->set_rules('friendemail', 'Friends Email', 'trim|required|valid_email');
            // Run the validation and take action
            if ($this->form_validation->run()) {
                echo 'valid';
            }
        } else {
            echo 'problem';
        }

    Form validation is coming back with no errors. Can anyone see why?

    Read the article

  • How to generate a random alpha-numeric string in Java

    - by Todd
    I've been looking for a simple Java algorithm to generate a pseudo-random alphanumeric string. In my situation it would be used as a unique session/key identifier that would "likely" be unique over 500K+ generations (my needs don't really require anything much more sophisticated). Ideally, I would be able to specify a length depending on my uniqueness needs. For example, a generated string of length 12 might look something like "AEYGF7K0DM1X". Answers: I like @Apocalisp and @erickson's answers equally well. The only downside to @Apocalisp's answer is that it requires an Apache class. Thanks to both for the help!
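
    For reference, a dependency-free Java sketch along the lines the question asks for (the class name and alphabet are illustrative):

        import java.security.SecureRandom;

        public class RandomId {
            private static final String ALPHANUM =
                    "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
            private static final SecureRandom RND = new SecureRandom();

            // Returns a pseudo-random alphanumeric string of the given length,
            // e.g. next(12) -> "AEYGF7K0DM1X"-style identifiers.
            public static String next(int length) {
                StringBuilder sb = new StringBuilder(length);
                for (int i = 0; i < length; i++) {
                    sb.append(ALPHANUM.charAt(RND.nextInt(ALPHANUM.length())));
                }
                return sb.toString();
            }
        }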

    Read the article

  • basic unique ModelForm field for Google App Engine

    - by Alexander Vasiljev
    I do not care about concurrency issues. It is relatively easy to build a unique form field:

        from django import forms

        class UniqueUserEmailField(forms.CharField):
            def clean(self, value):
                value = super(UniqueUserEmailField, self).clean(value)
                self.check_uniqueness(value)
                return value  # clean() must return the cleaned value

            def check_uniqueness(self, value):
                same_user = users.User.all().filter('email', value).get()
                if same_user:
                    raise forms.ValidationError('%s already_registered' % value)

    so one can add users on the fly. Editing an existing user is tricky: this field would not allow saving a user that has another user's email, but at the same time it would not allow saving a user with his own, unchanged email. What code do you use to put a field with a uniqueness check into a ModelForm?

    Read the article

  • Removing duplicate SQL records to permit a unique key

    - by j pimmel
    I have a table ('sales') in a MySQL DB which rightfully should have had a unique constraint enforced to prevent duplicates. Removing the dupes first and then setting the constraint is proving a bit tricky. Table structure (simplified):

        id (unique, autoinc)
        product_id

    The goal is to enforce uniqueness for product_id. The de-duping policy I want to apply is to remove all duplicate records except the most recently created, i.e. the one with the highest id. Or, to put it another way, I would like to delete the duplicate records, excluding the ids matched by the following query:

        select id
        from sales s
        inner join (select product_id, max(id) as maxId
                    from sales
                    group by product_id
                    having count(product_id) > 1) groupedByProdId
                on s.product_id = groupedByProdId.product_id
               and s.id = groupedByProdId.maxId

    I've struggled with this on two fronts: writing the query that selects the correct records to delete, and then the restriction in MySQL that a subselect in the FROM clause of a DELETE cannot reference the same table from which data is being removed.

    Read the article

  • Google App Engine: updating an object from a servlet not working?

    - by Frank
    I use the following code to update an object from a servlet on Google App Engine:

        String Time_Stamp=Get_Date_Format(6),
               query="select from "+Contact_Info_Entry.class.getName()+
                     " where Contact_Id == '"+Contact_Id+"' order by Contact_Id desc";
        PersistenceManager pm=null;
        try
        {
          pm=PMF.get().getPersistenceManager();
          // note that this returns a list; there could be multiple matches, since the
          // datastore does not ensure uniqueness for non-primary-key fields
          List<Contact_Info_Entry> results=(List<Contact_Info_Entry>)pm.newQuery(query).execute();
          Contact_Info_Entry A_Contact_Entry=results.get(0);
          A_Contact_Entry.Extra_10=Time_Stamp;
          pm.makePersistent(A_Contact_Entry);
        }
        catch (Exception e)
        {
          Send_Email(Email_From,Email_To,"Check_License_Servlet Error [ "+Time_Stamp+" ]",
                     new Text(e.toString()+"\n"+Get_Stack_Trace(e)),null);
        }
        finally { pm.close(); }

    The value "[ 2010-05-13 Thu 15:58:31 ]" was in A_Contact_Entry.Extra_10, but it seems "pm.makePersistent(A_Contact_Entry);" was not executed: the object was not updated, and there was no error message. Why? How can I fix it?
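
    One thing the snippet leaves implicit is transaction handling. For comparison, a hedged sketch of the same update wrapped in an explicit JDO transaction (identifiers reused from the excerpt; whether this changes the observed behavior is not verified):

        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            pm.currentTransaction().begin();
            List<Contact_Info_Entry> results =
                    (List<Contact_Info_Entry>) pm.newQuery(query).execute();
            Contact_Info_Entry entry = results.get(0);
            entry.Extra_10 = Time_Stamp;  // tracked while the object is attached
            pm.currentTransaction().commit();
        } finally {
            if (pm.currentTransaction().isActive()) {
                pm.currentTransaction().rollback();
            }
            pm.close();
        }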

    Read the article

  • Why am I getting an ActiveRecord error when trying to work on arrays?

    - by keruilin
    I have the following association in my User model:

        has_and_belongs_to_many :friends, :class_name => 'User', :foreign_key => 'friend_id'

    and the following uniqueness constraint on my users_users join table:

        UNIQUE KEY `no_duplicate_friends` (`user_id`,`friend_id`)

    In my code I retrieve a user's friends with friends = user.friends, where friends is an array. I have a scenario where I want to add a user, with all of that user's friends, to the friends array, e.g.:

        friends << user_with_all_those_homies

    However, I get the following error:

        ActiveRecord::StatementInvalid: Mysql::Error: Duplicate entry '18-18' for key 'no_duplicate_friends':
        INSERT INTO `users_users` (`friend_id`, `user_id`) VALUES (18, 18)

    What gives?

    Read the article

  • Finding unique elements in a string array in C

    - by LuckySlevin
    Hi, C bothers me with its handling of strings. I have pseudocode like this in mind:

        char *data[20];
        char *tmp;
        int i, j;

        for (i = 0; i < 20; i++) {
            tmp = data[i];
            for (j = 1; j < 20; j++) {
                if (strcmp(tmp, data[j])) {
                    // then, having checked uniqueness, store them elsewhere
                }
            }
        }

    But when I coded this the results were bad (I handled all the memory stuff, little things, etc.). The problem is in the second loop, obviously :D. But I cannot think of a solution. How do I find the unique strings in an array? Example: if abc def abe abc def deg are entered, the unique ones abc def abe deg should be found.

    Read the article

  • Problem with updating multiple rows which conflict with a unique index

    - by GUZ
    I am using Microsoft SQL Server and I have a master-detail scenario where I need to store the order of the details. So in the Detail table I have ID, MasterID, Position and some other columns, and there is a unique index on (MasterID, Position). It works OK except in one case: when I have some existing details and I change their order, for example when I swap the detail at position 3 with the detail at position 2. When I save the detail now at position 2 (which in the database still has Position equal to 3), SQL Server protests because of the index's uniqueness constraint. How do I solve this problem in a reasonable way? Thank you in advance. Lukasz Glaz

    Read the article

  • Best data structure for a frequently queried list of objects

    - by panzerschreck
    Hello, I have a list of objects, say List<Entity> lstEntities. The Entity class has an equals method based on a few attributes (a business rule) to differentiate one Entity object from another. The task we usually carry out on this list is to remove all the duplicates, something like this:

        List<Entity> noDuplicates = new ArrayList<Entity>();
        for (Entity entity : lstEntities) {
            int indexOf = noDuplicates.indexOf(entity);
            if (indexOf >= 0) {
                noDuplicates.get(indexOf).merge(entity);
            } else {
                noDuplicates.add(entity);
            }
        }

    Now, the problem I have been observing is that this part of the code slows down considerably as soon as the list has more than 10000 objects. I understand ArrayList is doing an O(n) search. Is there a faster alternative? Using a HashMap is not an option, because the entity's uniqueness is built upon 4 of its attributes together and it would be tedious to put the key itself into the map. Will a sorted set help with faster querying?
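
    If Entity's hashCode() is (or can be made) consistent with its four-attribute equals(), the entity can serve as its own map key, so no separate key class is needed. A hedged sketch of that variant, using java.util.LinkedHashMap to keep insertion order:

        // Assumes Entity.hashCode() agrees with its equals(); duplicate
        // lookups then cost O(1) instead of O(n) per element.
        Map<Entity, Entity> seen = new LinkedHashMap<Entity, Entity>();
        for (Entity entity : lstEntities) {
            Entity existing = seen.get(entity);
            if (existing != null) {
                existing.merge(entity);     // merge duplicates as before
            } else {
                seen.put(entity, entity);   // the entity is its own key
            }
        }
        List<Entity> noDuplicates = new ArrayList<Entity>(seen.values());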

    Read the article

  • Can I use a single MySQL query to select distinct rows and then non-distinct rows if a limit hasn't

    - by Matt Rix
    I hope I'm explaining this properly; my knowledge of MySQL is quite limited. Let's say I have a table with rows that have name and shape fields. I'd like to select a bunch of rows from the table, but return all of the rows with unique shape field values first. If I have fewer than a certain number of rows, let's say 7, then I'd like to fill the remaining result rows with non-unique shape rows. The best way I can word it is that they're "ordered by uniqueness, and then by some other value". So I don't want:

        square, square, circle, circle, rectangle, square, triangle

    I'd like to have:

        square, circle, rectangle, triangle, square, square, circle

    Is this possible using a single SQL query? I'm using MySQL with PHP, if that makes any difference. Thanks!

    Read the article

  • UIDocumentInteractionController change title of presented view

    - by whatdoesitallmean
    I am using the UIDocumentInteractionController to display PDF files. My files are stored in the filesystem using encoded filenames that are not user-friendly. I do have access to a friendlier name for each file, but the file is not stored under that name (for uniqueness reasons). When I load a document into the UIDocumentInteractionController, the view displays the unfriendly 'real' filename in the title bar. Is there any way to change the displayed title as presented by the UIDocumentInteractionController? Thanks.

    Read the article

  • Can I add and remove elements of an enumeration at runtime in Java?

    - by Brabster
    Is it possible to add and remove elements from an enum in Java at runtime? For example, could I read in the labels and constructor arguments of an enum from a file? @saua, it's just a question of whether it can be done, out of interest really. I was hoping there'd be some neat way of altering the running bytecode, maybe using BCEL or something. I've also followed up with this question because I realised I wasn't totally sure when an enum should be used. I'm pretty convinced that the right answer would be to use a collection that ensures uniqueness instead of an enum if I want to be able to alter the contents safely at runtime.
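
    A hedged sketch of the collection-based alternative mentioned at the end (the file name and labels are made up for illustration):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.LinkedHashSet;
        import java.util.Set;

        public class RuntimeLabels {
            public static void main(String[] args) throws IOException {
                // A LinkedHashSet enforces uniqueness, keeps insertion order,
                // and, unlike an enum, can be altered while running.
                Set<String> labels = new LinkedHashSet<>(
                        Files.readAllLines(Paths.get("labels.txt")));
                labels.add("NEW_LABEL");
                labels.remove("OLD_LABEL");
                System.out.println(labels);
            }
        }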

    Read the article

  • How do I ensure data consistency in this concurrent situation?

    - by MalcomTucker
    The problem is this: I have multiple competing threads (100+) that need to access one database table. Each thread will pass in a String name. Where that name exists in the table, the database should return the id for the row; where the name doesn't already exist, the name should be inserted and the id returned. There can only ever be one instance of a name in the database, i.e. name must be unique. How do I ensure that thread one doesn't insert name1 at the same time thread two also tries to insert name1? In other words, how do I guarantee the uniqueness of name in a concurrent environment? This also needs to be as efficient as possible, as it has the potential to be a serious bottleneck. I am using MySQL and Java. Thanks
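
    One widely used pattern is to let a UNIQUE index on name arbitrate the race, resolving insert-or-fetch in a single statement. A hedged JDBC sketch (the table and column names are invented; it assumes a MySQL table with an AUTO_INCREMENT primary key id and a UNIQUE index on name):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class NameIds {
            // MySQL's LAST_INSERT_ID(expr) trick makes the duplicate case
            // report the existing row's id as the "generated key".
            static long idForName(Connection conn, String name) throws SQLException {
                String sql = "INSERT INTO names (name) VALUES (?) "
                           + "ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id)";
                try (PreparedStatement ps =
                         conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                    try (ResultSet rs = ps.getGeneratedKeys()) {
                        rs.next();
                        return rs.getLong(1);
                    }
                }
            }
        }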

    Read the article

  • Shorter GUIDs than hashing a user id?

    - by Alex Mcp
    I'm wondering how Instapaper (a bookmarklet that saves text) might generate the URLs for its bookmarklet. Mine has a script src similar to www.instapaper.com/j/AnJHrfoDTRia. The required qualities of these URLs are that they never collide and are not really guessable (so other people can't save to your account). I know a simple approach would be to MD5 the user's email address (presumed to have been checked for uniqueness on signup), but then I'd end up with a super long string. This isn't a huge issue, but I'm wondering what techniques there are for shorter GUIDs that won't collide too often (that is obviously the tradeoff, but the 12 characters above are pretty short in my opinion).
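
    A hedged Java sketch of the hash-then-truncate idea: hash the (unique) email and keep only the first 9 bytes, which Base64-encode to exactly 12 URL-safe characters like the example above. Truncation trades away collision resistance for length, so collisions would still need to be handled at signup; the method name is illustrative.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.util.Arrays;
        import java.util.Base64;

        public class ShortTokens {
            static String shortToken(String email) throws NoSuchAlgorithmException {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                        .digest(email.getBytes(StandardCharsets.UTF_8));
                // 9 bytes -> 12 Base64 characters (9 * 8 / 6 = 12)
                return Base64.getUrlEncoder().withoutPadding()
                        .encodeToString(Arrays.copyOf(digest, 9));
            }
        }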

    Read the article

  • SQL: Optimize intensive SELECTs on DateTime fields

    - by Fedyashev Nikita
    I have an application for scheduling certain events, and all these events must be reviewed after each scheduled time. So basically we have 3 tables:

        items(id, name)
        scheduled_items(id, item_id, execute_at - datetime) - the item_id column has an index option.
        reviewed_items(id, item_id, created_at - datetime) - the item_id column has an index option.

    The core function of the application is "give me any items (which are not yet reviewed) for the current moment". How can I optimize this solution for speed (because it is a core business feature, not a micro-optimization)? I suppose that adding an index to the datetime fields doesn't make any sense, because the cardinality/uniqueness of those fields is very high and an index won't give any(?) speed-up. Is that correct? What would you recommend? Should I try NoSQL? -- mysql -V 5.075. I use caching where it makes sense.

    Read the article

  • Strange issue with 74.125.79.118

    - by Domenic
    I'm facing a strange issue on a Linux server. After frequent crashes, analysis found that the server is being brought down by a huge number of connections to the IP 74.125.79.118, departing from PHP scripts of the hosted web sites. After an in-depth analysis of the files, I found no malware infections. IP 74.125.79.118 belongs to Google. After a Google search I realized that connections to this IP are generated by YouTube videos embedded in web sites, among other Google features like safe search. But I don't understand how this type of behavior can lead to the collapse of the server, and the uniqueness of the situation leads me to think it is far from being attributable only to Google and YouTube. Also, I've found that blocking connections from eth0 to 74.125.79.118:80 doesn't solve the issue, but if I stop DNS traffic from eth0 to the internet, the connections to 74.125.79.118 stop. I'm really confused about this. Any suggestions? Cheers.

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction> I am programming something I have never done before, and there might already be tools out there to do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch, even if there is a tool for this.

    Anyway, I am working on a Ruby on Rails app and need a certain feature. Basically, I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also tracked across different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of its different versions as objects in Ruby, and then serialize them, preferably in human-readable form (so I'll most likely use YAML for this), for editing if needed due to corruption by a software bug or a mistake made by an admin doing some version editing.

    So at first I just dove head first into this feature, only to find that generating a diff patch efficiently is more difficult than I thought. So I did some research and came across some ideas, some of which I have implemented already and some of which I have not. However, it all pretty much revolves around the longest common subsequence problem, as you would already know if you have done anything with diff or diff-like features, and optimizing the function that solves it.

    Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line, as in most longest common subsequence algorithms I have seen examples of, I increment when I have a non-matching line, so as to calculate edit distance instead of longest common subsequence. Although, as far as I can tell, the two approaches are essentially two sides of the same coin, so either could be used to derive an answer. It then back-traces through the comparison matrix and notes when there was an incrementation and in which adjacent cell (west, northwest, or north) to determine that line's diff entry, and assumes all other lines to be unchanged.

    Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started worrying about optimizing at least enough so that a spammer who somehow knew how I implemented the version control system, and knew my worst-case-scenario entry, still wouldn't be able to hit the server that badly. After some searching and reading of research papers and articles on the internet, I've come across several ideas that seem decent, but they all have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them below with their known pros and cons.
    </introduction>

    <optimizations>

    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, then truncate the unchanged lines at the beginning and end of each section, and solve the edit distance of each subsequence separately.
       Pro: Changes the time growth as the changed area gets bigger from quadratic to something closer to linear.
       Con: Figuring out where to split already seems to require solving edit distance, except now you don't care how it changed. It would be fine if this were solvable by a process closer to solving Hamming distance, but a single insertion would throw it off.

    2. Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness, then solve the edit distance comparing the hash integers instead of the sequence elements themselves.
       Pro: Comparing two integers is faster than comparing two strings, so a slight performance gain is received on every comparison, which can add up to a lot overall.
       Con: The hash function takes time to convert all the sequence elements and may end up costing more time than the integer comparisons gain back. You could use the built-in hash function for a string, but that will not guarantee uniqueness.

    3. Use lazy evaluation to only calculate the three center-most diagonals of the comparison matrix, calculating additional diagonals only as needed, and also use this approach to possibly remove the need in some comparisons to compare all three adjacent cells, as described here.
       Pro: Can turn an algorithm that always takes O(n * m) time into one where that is only the worst case, the best case becomes practically linear, and the average case falls somewhere between the two.
       Con: It is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time working out how to convert it into Ruby based on how it is described at the site linked to above.

    4. Make a C module that does the hard work at the native level, with a Ruby wrapper so Ruby can make all the calls to it that it needs.
       Pro: I have to imagine that evaluating something like this in C could be a LOT faster.
       Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.

    5. This one is an optimization for after the solving of edit distance: store additional combined diffs alongside the ones produced by each version, forming a delta-tree data structure with the most recently made diff as the root node, so that getting to any version takes a worst-case time of O(log n) instead of O(n).
       Pro: Would make going back to an old version a lot faster.
       Con: Every new commit would give the delta-tree a new root node, costing time to reorganize the tree for an operation that is carried out far more often than reverting to a version, which is unlikely to be an old one anyway.

    </optimizations>

    So are these things worth the effort?
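
    For concreteness, a hedged Java sketch of the comparison-matrix edit distance described in the introduction, operating on arrays of lines (the asker's implementation is in Ruby; this is just the textbook dynamic program, not their code):

        public class LineDiff {
            // Wagner-Fischer edit distance over lines; d[i][j] is the cost of
            // turning the first i lines of a into the first j lines of b.
            static int editDistance(String[] a, String[] b) {
                int[][] d = new int[a.length + 1][b.length + 1];
                for (int i = 0; i <= a.length; i++) d[i][0] = i;
                for (int j = 0; j <= b.length; j++) d[0][j] = j;
                for (int i = 1; i <= a.length; i++) {
                    for (int j = 1; j <= b.length; j++) {
                        int change = a[i - 1].equals(b[j - 1]) ? 0 : 1;
                        d[i][j] = Math.min(Math.min(
                                d[i - 1][j] + 1,            // delete (north)
                                d[i][j - 1] + 1),           // insert (west)
                                d[i - 1][j - 1] + change);  // keep/change (northwest)
                    }
                }
                return d[a.length][b.length];  // back-trace d to recover the diff
            }
        }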

    Read the article

  • forEach and Facelets - a bugfarm just waiting for harvest

    - by Duncan Mills
    An issue that I've encountered before and saw again today seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages). The incident I saw today can be seen as a report on the ADF EMG bugzilla (Issue 53) and in a blog posting by Ulrich Gerkmann-Bartels, who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gotcha can be. On the surface, the problem is squarely the fault of MDS, but underneath, MDS is in fact innocent.

    To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code:

        <af:forEach var="item" items="#{itemList.items}">
          <af:commandLink id="cl1" text="#{item.label}"
                          action="#{item.doAction}" partialSubmit="true"/>
        </af:forEach>

    Looks innocent enough, right? We see a bunch of links printed out; great. The issue here, though, is the id attribute. Logically you can kind of see the problem: the forEach loop is creating (presumably) multiple instances of the commandLink, but only one id is specified - cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing?

    The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much. However (you could see this coming), the same is not true when running with Facelets (this is under 11.1.2.n). In that case, what you put for the id is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime, and subtle chaos can ensue. The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes, I've wasted hours on just such an issue.

    The Solution

    Once you're aware of this one it's really simple to fix; there are two options:

    1. Remove the id attribute altogether on components that will cause some kind of submission within the forEach loop, and let JSF do the right thing in generating them. Then you'll be assured of uniqueness.

    2. Use the var attribute of the loop to generate a unique id for each child instance, for example in the above case: <af:commandLink id="cl1_#{item.index}" ... />.

    So, one to watch out for in your upgrades to JSF 2 and one, perhaps, for your coding standards today to prepare you for it. For completeness, here's the reference to the underlying JSF issue that's at the heart of this: JAVASERVERFACES-1527

    Read the article

  • Silverlight and encryption: how to store/generate the key/IV pair?

    - by cmaduro
    I have a Silverlight app that connects to a PHP webservice. I want to encrypt the communication between the webservice and the Silverlight client, and I'm not relying on SSL. I'm encrypting/decrypting the POST string myself using an AES 256-bit key and IV. The big questions then are: How do I generate a random, unique key/IV pair in PHP? And how do I share this key/IV pair between the webservice and the Silverlight client in a secure way? It seems impossible without some kind of hard-coded key or IV on the client, which would compromise security. This is a public website; there are no logins, just the requirement of secure communication.

    I can hard-code the seed for the key/IV (which is hashed with SHA256 with a timestamp salt and then assigned as the key or IV) in the PHP source code; that's on the server, so it is pretty safe. However, on the client the seed for the key/IV pair would be visible if it were hard-coded. Furthermore, using a timestamp as the basis for uniqueness/randomness is definitely not OK, since timestamps are predictable, though they do provide a common factor between the C# code and the PHP code.

    The only other option I can think of would be to have a third service involved that provides the key/IV to the Silverlight client as well as to the PHP webservice. This of course starts the cycle anew, with the question of how to store the credentials for accessing the key/IV distribution service on the Silverlight client.

    Sounds like the solution is then asymmetric encryption, since sensitive data will be viewed only on the administrative back end of the website. Unfortunately, Silverlight has no asymmetric encryption classes. The solution? Roll my own Diffie-Hellman key exchange and plug that key into AES256!

    Read the article

  • Question about SQL Server HierarchyID depth-first performance

    - by AndalusianCat
    I am trying to implement hierarchyID in a table (dbo.[Message]) containing roughly 50,000 rows (it will grow substantially in the future). However, it takes 30-40 seconds to retrieve about 25 results. The root node is a filler in order to provide uniqueness; therefore every subsequent row is a child of that dummy row. I need to be able to traverse the table depth-first, and have made the hierarchyID column (dbo.[Message].MessageID) the clustering primary key. I have also added a computed smallint (dbo.[Message].Hierarchy) which stores the level of the node.

    Usage: a .NET application passes a hierarchyID value into the database, and I want to retrieve all (if any) children AND parents of that node (besides the root, as it is filler). A simplified version of the query I am using:

        @MessageID hierarchyID /* passed in from the application */

        SELECT m.MessageID, m.MessageComment
        FROM dbo.[Message] AS m
        WHERE m.MessageID.IsDescendantOf(@MessageID.GetAncestor(@MessageID.GetLevel() - 1)) = 1
        ORDER BY m.MessageID

    From what I understand, the index should be detected automatically without a hint. From searching forums I have seen people use index hints, at least in the case of breadth-first indexes, as apparently CLR calls may be opaque to the query optimizer. I have spent the past few days trying to find a solution for this issue, but to no avail. I would greatly appreciate any assistance, and as this is my first post, I apologize in advance if this would be considered a 'noobish' question. I have read the MS documentation and searched countless forums, but have not come across a succinct description of this specific issue.

    Read the article

  • Building a simple Reddit scraper

    - by Bazant Fundator
    Let's say that I would like to make a collection of images from Reddit for my own amusement. I have run the code in my development environment, and it hasn't gone past the first page of posts (anything beyond requires the "after" string from the JSON). Additionally, when I turn on the validation, the whole loop breaks if an item doesn't pass it, not just the current iteration. I would be glad if you helped me understand the mistakes I made.

        class Link
          include Mongoid::Document
          include Mongoid::Timestamps
          field :author, type: String
          field :url, type: String
          validates :url, uniqueness: true # no duplicate links
        end

        def fetch(count, after)
          count_s = count.to_s # convert count to string
          link = "http://reddit.com/r/aww/.json?count=" + count_s + "&after=" + after # so it can be used there
          res = HTTParty.get(link) # GET req. to the reddit server
          json = JSON.parse(res.body) # parse the response
          if json['kind'] == "Listing" then # check if the retrieved item is a Listing
            for i in 1...(count) do # for each list item
              datum = json['data']['children'][i]['data'] # i-th element's properties
              if datum['domain'].in?(["imgur.com", "i.imgur.com"]) then # fetch only imgur links
                Link.create!(author: datum['author'], url: datum['url']) # save to db
              end
            end
            count += 25
            fetch(count, json['data']['after']) # right kind of object retrieved, move on to the next page
          end
        end

        fetch(25, " ") # run it

    Read the article
