Search Results

Search found 946 results on 38 pages for 'surrogate pairs'.

Page 22 of 38

  • C++: A variation of priority queue

    - by Helltone
    I need some kind of priority queue to store pairs <key, value>. Values are unique, but keys aren't. I will be performing the following operations (most common first): random insertion; retrieving (and removing) all elements with the least key; random removal (by value). I can't use std::priority_queue because it only supports removing the head. For now, I'm using an unsorted std::list. Insertion is performed by just pushing new elements to the back (O(1)). Operation 2 sorts the list with std::sort (O(N log N)) before performing the actual retrieval. Removal, however, is O(N), which is a bit expensive. Any idea of a better data structure?
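
    The question is about C++ (one route there might be a std::multimap keyed on key plus a map from value to iterator), but purely as a sketch of one workable design, here is the idea in Python: a heap of candidate keys with lazy deletion, a key-to-values index for popping a whole key at once, and a value-to-key index for removal by unique value. The class and method names are invented for illustration.

        import heapq

        class PairQueue:
            """Pairs <key, value> with unique values and duplicate keys."""
            def __init__(self):
                self._heap = []     # candidate keys; may hold stale entries
                self._by_key = {}   # key -> set of live values
                self._key_of = {}   # value -> its key

            def insert(self, key, value):            # O(log n)
                if not self._by_key.get(key):
                    heapq.heappush(self._heap, key)
                self._by_key.setdefault(key, set()).add(value)
                self._key_of[value] = key

            def remove(self, value):                 # O(1) amortised
                key = self._key_of.pop(value)
                self._by_key[key].discard(value)     # heap entry cleaned up lazily

            def pop_min(self):
                """Retrieve and remove all elements with the least key."""
                while self._heap:
                    key = heapq.heappop(self._heap)
                    values = self._by_key.pop(key, None)
                    if values:                       # skip keys emptied by remove()
                        for v in values:
                            del self._key_of[v]
                        return key, values
                return None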

    Read the article

  • Equivalence Classes

    - by orcik
    I need to write a program for equivalence classes that produces this output:

        (equiv '((a b) (a c) (d e) (e f) (c g) (g h)))  =>  ((a b c g h) (d e f))
        (equiv '((a b) (c d) (e f) (f g) (a e)))        =>  ((a b e f g) (c d))

    Basically, a set is a list in which the order doesn't matter, but elements don't appear more than once. The function should accept a list of pairs (elements which are related according to some equivalence relation) and return a set of equivalence classes, without using iteration or assignment statements (e.g. do, set!, etc.). However, set utilities such as set-intersection, set-union, a function which eliminates duplicates in a list, and the built-in functions union, intersection, and remove-duplicates are allowed. Thanks a lot! By the way, it's not a homework question. A friend of mine needs this piece of code to solve similar questions.
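
    The question wants Scheme without iteration or assignment, so the following does not qualify as an answer; it is only a sketch, in Python, of the underlying merging idea (a small union-find), with invented names:

        def equiv(pairs):
            """Merge related pairs into equivalence classes."""
            parent = {}

            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x

            def union(a, b):
                parent[find(a)] = find(b)

            for a, b in pairs:
                union(a, b)

            classes = {}
            for x in list(parent):
                classes.setdefault(find(x), []).append(x)
            return list(classes.values())

        # equiv([("a","b"),("a","c"),("d","e"),("e","f"),("c","g"),("g","h")])
        # -> [["a","b","c","g","h"], ["d","e","f"]]  (ordering may differ)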

    Read the article

  • List available languages for PyGTK UI strings

    - by detly
    I'm cleaning up some localisation and translation settings in our PyGTK application. The app is only intended to be used under GNU/Linux systems. One of the features we want is for users to select the language used for the application (some prefer their native language, some prefer English for consistency, some like French because it sounds romantic, etc.). For this to work, I need to actually show a combo box with the various languages available. How can I get this list? In fact, I need a list of pairs of the language code ("en", "ru", etc.) and the language name in the native language ("English (US)", "Русский"). If I had to implement a brute-force method, I'd do something like: look in the system locale dir (e.g. "/usr/share/locale") for all language code dirs (e.g. "en/") containing the relative path "LC_MESSAGES/OurAppName.mo". Is there a more programmatic way?
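
    For what it's worth, the brute-force scan is only a few lines; a minimal sketch, assuming the standard /usr/share/locale layout and the gettext domain OurAppName from the question. Mapping each code to its native name still needs a lookup, e.g. the Babel library's Locale.parse(code).get_display_name(code), which is an extra dependency:

        import os

        LOCALE_DIR = "/usr/share/locale"   # standard GNU/Linux locale dir
        DOMAIN = "OurAppName"              # gettext domain from the question

        def available_languages(locale_dir=LOCALE_DIR, domain=DOMAIN):
            """Language codes for which a compiled catalog exists."""
            codes = []
            for code in sorted(os.listdir(locale_dir)):
                mo = os.path.join(locale_dir, code, "LC_MESSAGES", domain + ".mo")
                if os.path.exists(mo):
                    codes.append(code)
            return codes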

    Read the article

  • How can I remove sensitive data from the debug_backtrace function?

    - by RenderIn
    I am using print_r(debug_backtrace(), true) to retrieve a string representation of the debug backtrace. This works fine, as print_r handles recursion. When I tried to recursively iterate through the debug_backtrace() return array before turning it into a string, it ran into recursion and never ended. Is there some simple way I can remove certain sensitive key/value pairs from the backtrace array? Perhaps some way to turn the array into a string using print_r, then back into an array with the recursive locations changed to the string RECURSION, which I could then iterate through. I don't want to execute regular expressions on the string representation if possible.
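
    The question is about PHP, but the cycle problem is language-independent: track the identity of every container already visited and substitute a RECURSION marker instead of descending again. A sketch of that idea in Python, with hypothetical key names; the same visited-set approach ports to a PHP array walk:

        SENSITIVE = {"password", "secret", "token"}   # hypothetical key names

        def scrub(node, seen=None):
            """Redact sensitive keys; visited containers become 'RECURSION'."""
            if seen is None:
                seen = set()
            if isinstance(node, dict):
                if id(node) in seen:
                    return "RECURSION"
                seen.add(id(node))
                return {k: ("***" if k in SENSITIVE else scrub(v, seen))
                        for k, v in node.items()}
            if isinstance(node, list):
                if id(node) in seen:
                    return "RECURSION"
                seen.add(id(node))
                return [scrub(v, seen) for v in node]
            return node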

    Read the article

  • postgres store with composite value type, or a better way of attributing an inverted index

    - by Hassan Syed
    I can't seem to figure out the syntax for populating an hstore with a value of composite type -- note: I do not want to convert a record to an hstore. select hstore('hello => ROW(1,2)'); I know it's something simple; however, Google is not my friend today. Use case: a custom inverted index. The data is modelling an inverted index of lexemes; the composite data types are various probabilities related to the lexemes, which I will use to implement document clustering. Does anyone know a better way of doing this? I'm open to using an external system if it allows attaching attributes to key-posting pairs in the inverted index. I'd use something external if it had solid support for what I am trying to do. I suspect that sticking 3-10k lexemes per tuple and then doing batch processing on them is going to be nasty, as the whole hstore will have to be parsed and converted.

    Read the article

  • implementing feature structures: what data type to use?

    - by Dervin Thunk
    Hello. In simplistic terms, a feature structure is an unordered list of attribute-value pairs: [number:sg, person:3 | _ ], which can be embedded: [cat:np, agr:[number:sg, person:3 | _ ] | _ ]. It can subindex stuff and share the value: [number:[1], person:3 | _ ], where [1] is another feature structure (that is, it allows reentrancy). My question is: what data structure do people think this should be implemented with, for later access to values, to perform unification between two feature structures, to "type" them, etc.? There is a full book on this, but it's in Lisp, which simplifies list handling. So, my choices are: a hash of lists, a list of lists, or a trie. What do people think about this?
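
    Whichever container is chosen, the reentrancy requirement mostly dictates sharing references rather than copying values. A minimal sketch of that point with Python dicts (the subj feature is invented for illustration):

        # The shared substructure plays the role of the [1] tag.
        agr = {"number": "sg", "person": 3}

        fs = {
            "cat": "np",
            "agr": agr,             # [1]
            "subj": {"agr": agr},   # re-entrant: the same object, not a copy
        }

        fs["agr"]["number"] = "pl"  # updating via one path...
        assert fs["subj"]["agr"]["number"] == "pl"   # ...is visible via the other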

    Read the article

  • DEADLOCK_WRAP error when using Berkeley Db in python (bsddb)

    - by JiminyCricket
    I am using a Berkeley DB to store a huge list of key-value pairs, but for some reason when I try to access some of the data later I get this error:

        try:
            key = 'scrape011201-590652'
            contenttext = contentdict[key]
        except:
            # print the error

        <type 'exceptions.KeyError'> 'scrape011201-590652' in contenttext = contentdict[key]
          File "/usr/lib64/python2.5/bsddb/__init__.py", line 223, in __getitem__
            return _DeadlockWrap(lambda: self.db[key]) # self.db[key]
          File "/usr/lib64/python2.5/bsddb/dbutils.py", line 62, in DeadlockWrap
            return function(*_args, **_kwargs)
          File "/usr/lib64/python2.5/bsddb/__init__.py", line 223, in <lambda>
            return _DeadlockWrap(lambda: self.db[key]) # self.db[key]

    I am not sure what DeadlockWrap is, but there isn't any other program or process accessing the Berkeley DB or writing to it (as far as I know), so I'm not sure how we could get a deadlock, if that's what it refers to. Is it possible that I am trying to access the data too rapidly? I have this function call in a loop, so something like:

        for i in hugelist:
            # try to get a value from the berkdb
            # do something with it

    I am running this with multiple datasets and this error only occurs with one of them, the largest one, not the others.
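
    Note that the wrapped exception is a plain KeyError, i.e. the key simply isn't in the database at that point. A hedged sketch of guarding the lookup (Python 2.x bsddb; the filename and list contents are stand-ins):

        import bsddb

        contentdict = bsddb.hashopen("content.db", "r")  # hypothetical file
        hugelist = ["scrape011201-590652"]               # stand-in for the real list

        for key in hugelist:
            if contentdict.has_key(key):     # guard instead of raising KeyError
                contenttext = contentdict[key]
                # ... do something with contenttext ...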

    Read the article

  • C++ assignment question [closed]

    - by Adam Joof
    (Bubble Sort) In the bubble sort algorithm, smaller values gradually "bubble" their way upward to the top of the array like air bubbles rising in water, while the larger values sink to the bottom. The bubble sort makes several passes through the array. On each pass, successive pairs of elements are compared. If a pair is in increasing order (or the values are identical), we leave the values as they are. If a pair is in decreasing order, their values are swapped in the array. Write a program that sorts an array of 10 integers using bubble sort.
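
    The assignment asks for C++; as a language-neutral sketch of the algorithm described above (repeated passes of pairwise compare-and-swap), here it is in Python on an array of 10 integers:

        def bubble_sort(a):
            """Repeated passes of pairwise compare-and-swap."""
            n = len(a)
            for p in range(n - 1):              # at most n-1 passes
                swapped = False
                for i in range(n - 1 - p):      # compare successive pairs
                    if a[i] > a[i + 1]:         # decreasing order -> swap
                        a[i], a[i + 1] = a[i + 1], a[i]
                        swapped = True
                if not swapped:                 # no swaps: already sorted
                    break

        values = [10, 4, 7, 1, 9, 3, 8, 2, 6, 5]   # an array of 10 integers
        bubble_sort(values)
        print(values)                              # [1, 2, ..., 10]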

    Read the article

  • Why does LINQ-to-SQL Paging fail inside a function?

    - by ssg
    Here I have an arbitrary IEnumerable<T>, and I'd like to page it using a generic helper function instead of writing Skip/Take pairs every time. Here is my function:

        IEnumerable<T> GetPagedResults<T>(IEnumerable<T> query, int pageIndex, int pageSize)
        {
            return query.Skip((pageIndex - 1) * pageSize).Take(pageSize);
        }

    And my code is:

        result = GetPagedResults(query, 1, 10).ToList();

    This produces a SELECT statement without the TOP 10 keyword. But this code below produces the SELECT with it:

        result = query.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToList();

    What am I doing wrong in the function?

    Read the article

  • Duplicate partitioning key performance impact

    - by Anshul
    I've read in some posts that having a duplicate partitioning key can have a performance impact. I have two tables like:

        CREATE TABLE "Test1" (
            key text,
            column1 text,
            value text,
            PRIMARY KEY (key, column1)
        )

        CREATE TABLE "Test2" (
            key text,
            name text,
            age text,
            ...
            PRIMARY KEY (key, name, age)
        )

    In Test1, column1 will contain a column name and value will contain its corresponding value. The main advantage of Test1 is that I can add any number of column/value pairs to it without altering the table, by just providing the same partitioning key each time. Now my question is: how will each of these table schemas impact the read/write performance if I have millions of rows, and the number of columns can be up to 50 in each row? How will it impact the compaction/repair time if I'm writing duplicate entries frequently?

    Read the article

  • Creating a triple store query using SQL - how to find all triples that have a common predicate and object

    - by Ankur
    I have a database that acts like a triple store, except that it is just a simple MySQL database. I want to select all triples that have a common predicate and object. (Info about RDF and triples.) I can't seem to work out the SQL. If I had just a single predicate and object to match, I would do: select TRIPLE from TRIPLES where PREDICATE="predicateName" and OBJECT="objectName" But if I have a list (HashMap) of many pairs of (predicateName, objectName), I am not sure what I need to do. Please let me know if I need to provide more info; I am not sure that I have made this quite clear, but I am wary of providing too much info and confusing the issue.
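
    One standard shape for this is an OR of per-pair conditions, built with placeholders. A sketch in Python, assuming a DB-API cursor (e.g. MySQLdb/PyMySQL) and the TRIPLES table from the question; the function name is invented:

        def select_triples(cursor, pairs):
            """Fetch triples matching any (predicate, object) pair."""
            clause = " OR ".join(["(PREDICATE = %s AND OBJECT = %s)"] * len(pairs))
            sql = "SELECT TRIPLE FROM TRIPLES WHERE " + clause
            params = [v for pair in pairs for v in pair]   # flatten the pairs
            cursor.execute(sql, params)
            return cursor.fetchall()

        # select_triples(cur, [("predicateName", "objectName"),
        #                      ("otherPredicate", "otherObject")])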

    Read the article

  • Git Pull works; Git push fails

    - by Michael
    I thought I set up my key pairs correctly -- I can do git pulls and git commits. But when I do a git push, it counts objects, decompresses, then says: fatal: the remote end hung up unexpectedly. What's the issue here? I'm a superuser, so it's not a folder read/write permission problem -- it must be the way I set up the encryption key pair... how do I debug this, since git pull works?

    Read the article

  • Matching process, issue with query

    - by Blerta Blerta
    I have this code, which helps me match two different tables. Now, each of these tables has an epos_id and an rbpos_id! I have another table which has pairs of rbpos_id and epos_id, something like:

        id | epos_id | rbpos_id
        1  | a3566   | 465jd
        2  | hkiyb   | rbposi

    When I join this other table, I need to check this condition: I mean, the matching should be done only if the epos_id and rbpos_id of the join I'm doing are the ones paired together, i.e. belong to the same row. Here is my current query... Thanks!

        SELECT retailer.date, retailer.time, retailer.location,
               retailer.user_id, imovo.mobile_number
        FROM retailer
        LEFT JOIN imovo
            ON addtime(retailer.time, '0:0:50') > imovo.time
            AND retailer.time < imovo.time
            AND retailer.date = imovo.date

    Read the article

  • How do I get Mathematica to thread a 2-variable function over two lists, using functional programming?

    - by Leah Wrenn Berman
    Let's say I have a function f[x_, y_] and two lists l1, l2. I'd like to evaluate f[x,y] where x runs over the list l1 and y runs over the list l2, and I'd like to do it without having to make all pairs of the form {l1[[i]], l2[[j]]}. (Motivation: I'm trying to implement some basic Haskell programs in Mathematica. In particular, I'd like to be able to code the Haskell program

        isMatroid :: [[Int]] -> Bool
        isMatroid b = and [or [sort (union (xs \\ [x]) [y]) `elem` b | y <- ys]
                              | xs <- b, ys <- b, x <- xs]

    I think I can do the rest of it if I can figure out the original question, but I'd like the code to be Haskell-like. Any suggestions for implementing Haskell-like code in Mathematica would be appreciated.)

    Read the article

  • Parsing a dynamic value with Lift-JSON

    - by Surya Suravarapu
    Let me explain this question with an example. If I have JSON like the following:

        {"person1": {"name": "Name One", "address": {"street": "Some Street", "city": "Some City"}},
         "person2": {"name": "Name Two", "address": {"street": "Some Other Street", "city": "Some Other City"}}}

    [There is no restriction on the number of persons; the input JSON can have many more.] I could extract this JSON into a Persons object by doing var persons = parse(res).extract[T] Here are the related case classes:

        case class Address(street: String, city: String)
        case class Person(name: String, address: Address, children: List[Child])
        case class Persons(person1: Person, person2: Person)

    Question: the above scenario works perfectly fine. However, the need is for the keys in the key/value pairs to be dynamic. So in the example JSON provided, person1 and person2 could be anything; I need to read them dynamically. What's the best possible structure for the Persons class to account for that dynamic nature?

    Read the article

  • Is there an equivalent for the Zip function in Clojure Core or Contrib?

    - by John Kane
    In Clojure, I want to combine two lists to give a list of pairs:

        > (zip '(1 2 3) '(4 5 6))
        ((1 4) (2 5) (3 6))

    In Haskell or Ruby the function is called zip. Implementing it is not difficult, but I wanted to make sure I wasn't missing a function in Core or Contrib. There is a zip namespace in Core, but it is described as providing access to the Zipper functional technique, which does not appear to be what I am after. Is there an equivalent function for combining two or more lists, in this way, in Core? If there is not, is it because there is an idiomatic approach that renders the function unneeded?

    Read the article

  • resizing arrays when close to memory capacity

    - by user548928
    So I am implementing my own hashtable in Java, since the built-in Hashtable has ridiculous memory overhead per entry. I'm making an open-addressed table with a variant of quadratic hashing, which is backed internally by two arrays, one for keys and one for values. I don't have the ability to resize, though. The obvious way to do it is to create larger arrays and then hash all of the (key, value) pairs into the new arrays from the old ones. This falls apart, though, when my old arrays take up over 50% of my current memory, since I can't fit both the old and new arrays in memory at the same time. Is there any way to resize my hashtable in this situation? Edit: the info I got for current hashtable memory overheads is from here: How much memory does a Hashtable use? Also, for my current application, my values are ints, so rather than store references to Integers, I have an array of ints as my values.
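
    One common idea, offered as a sketch rather than a drop-in answer: split the table into several independent segments by a few bits of the hash and grow only one segment at a time, so the transient old-plus-new overhead during a resize is bounded by one segment instead of the whole table. Illustrated in Python, with plain dicts standing in for the open-addressed arrays:

        # Sketch only: dicts stand in for the open-addressed key/value
        # arrays; a real version would keep parallel arrays per segment
        # and rehash just one segment when it fills up.
        NUM_SEGMENTS = 16   # peak resize overhead ~ table_size / 16

        class SegmentedTable:
            def __init__(self):
                self.segments = [dict() for _ in range(NUM_SEGMENTS)]

            def _segment(self, key):
                return self.segments[hash(key) % NUM_SEGMENTS]

            def put(self, key, value):
                self._segment(key)[key] = value

            def get(self, key):
                return self._segment(key).get(key)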

    Read the article

  • Order database results by bayesian rating

    - by One Trick Pony
    I'm not sure this is even possible, but I need a confirmation before doing it the "ugly" way :) So, the "results" are posts inside a database which are stored like this: the posts table, which contains all the important stuff, like the ID, the title, the content; the post meta table, which contains additional post data, like the rating (this_rating) and the number of votes (this_num_votes). This data is stored in pairs; the table has 3 columns: post ID / key / value. It's basically the WordPress table structure. What I want is to pull out the highest-rated posts, sorted based on this formula: br = ((avg_num_votes * avg_rating) + (this_num_votes * this_rating)) / (avg_num_votes + this_num_votes) which I stole from here. avg_num_votes and avg_rating are known variables (they get updated on each vote), so they don't need to be calculated. Can this be done with a MySQL query? Or do I need to get all the posts and do the sorting with PHP?
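
    For reference, the formula transcribed as a function (Python purely for illustration; in SQL the same expression would sit in an ORDER BY over the joined meta values):

        def bayesian_rating(avg_num_votes, avg_rating, this_num_votes, this_rating):
            # the formula quoted in the question
            return ((avg_num_votes * avg_rating) + (this_num_votes * this_rating)) \
                   / (avg_num_votes + this_num_votes)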

    Read the article

  • Revisions: algorithm and data structure

    - by SODA
    Hi, I need ideas for structuring and processing data with revisions. For example, I have a database of objects (e.g. cars). Each object has a number of properties, which can be arbitrary, so there's no set schema to describe these objects. These objects are probably saved as key-value pairs. Now I need to change a property of an object. I don't want to completely rewrite it -- I want to be able to go back and see the history of changes to these properties. That's why I want to add the new property value and keep the old one (so I guess a timestamp would do the job of telling which value is the latest). At the same time, I want to be able to get info about any object in a snap, with only the latest version of each of the properties. Any ideas what would be the best approach? At least please point me in the right direction. Thanks!
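
    A minimal sketch of the shape this suggests: every write appends a (timestamp, value) version per property, and a snapshot keeps only the newest version of each. Names are invented for illustration:

        import time
        from collections import defaultdict

        class VersionedObject:
            """Each property keeps its full version history."""
            def __init__(self):
                self.history = defaultdict(list)   # property -> [(ts, value), ...]

            def set(self, prop, value, ts=None):
                self.history[prop].append(
                    (ts if ts is not None else time.time(), value))

            def snapshot(self):
                """Latest value of every property, 'in a snap'."""
                return {p: max(vs, key=lambda tv: tv[0])[1]
                        for p, vs in self.history.items()}

            def versions(self, prop):
                """Full change history of one property."""
                return sorted(self.history[prop], key=lambda tv: tv[0])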

    Read the article

  • Implement dictionary using java

    - by ahmad
    Task: Dictionary ADT. The dictionary ADT models a searchable collection of key-element entries. Multiple items with the same key are allowed. Applications: word-definition pairs. Dictionary ADT methods: find(k): if the dictionary has an entry with key k, returns it, else returns null; findAll(k): returns an iterator of all entries with key k; insert(k, o): inserts and returns the entry (k, o); remove(e): removes the entry e from the dictionary; size(), isEmpty().

        Operation       | Output      | Dictionary
        ----------------+-------------+--------------------------------
        insert(5,A)     | (5,A)       | (5,A)
        insert(7,B)     | (7,B)       | (5,A),(7,B)
        insert(2,C)     | (2,C)       | (5,A),(7,B),(2,C)
        insert(8,D)     | (8,D)       | (5,A),(7,B),(2,C),(8,D)
        insert(2,E)     | (2,E)       | (5,A),(7,B),(2,C),(8,D),(2,E)
        find(7)         | (7,B)       | (5,A),(7,B),(2,C),(8,D),(2,E)
        find(4)         | null        | (5,A),(7,B),(2,C),(8,D),(2,E)
        find(2)         | (2,C)       | (5,A),(7,B),(2,C),(8,D),(2,E)
        findAll(2)      | (2,C),(2,E) | (5,A),(7,B),(2,C),(8,D),(2,E)
        size()          | 5           | (5,A),(7,B),(2,C),(8,D),(2,E)
        remove(find(5)) | (5,A)       | (7,B),(2,C),(8,D),(2,E)
        find(5)         | null        | (7,B),(2,C),(8,D),(2,E)

    Detailed explanation: no. Specific requirements: please get it done, I need HELP!!!
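
    The assignment is Java, but as a sketch of the ADT's contract (duplicate keys allowed, find returns the first match, findAll returns all of them), here is the behaviour in Python:

        class Dictionary:
            """Entries are (key, value) pairs; duplicate keys are allowed."""
            def __init__(self):
                self.entries = []                 # list of (key, value) tuples

            def insert(self, k, o):
                self.entries.append((k, o))
                return (k, o)

            def find(self, k):
                for entry in self.entries:
                    if entry[0] == k:
                        return entry              # first match, as in the trace
                return None                       # "null"

            def find_all(self, k):
                return iter(e for e in self.entries if e[0] == k)

            def remove(self, e):
                self.entries.remove(e)

            def size(self):
                return len(self.entries)

            def is_empty(self):
                return not self.entries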

    Read the article

  • Drive space hungry NoSQL's databases

    - by forum_inquisitor
    I've tested NoSQL databases like CouchDB, MongoDB and Cassandra, and observed a tendency to absorb a very large amount of drive space relative to the inserted key-value pairs. Comparing CouchDB and MySQL as schemaless stores, CouchDB consumes much more drive space than MySQL. I know that key-value DBs do versioning by default, have long UUIDs, and need key optimisation -- the comparison was between about 15 million rows in MySQL and 1-5 million documents in the listed NoSQL DBs. My question is: is there any NoSQL database with good compaction/compression of data, so that I can have a NoSQL database with a size closer to 5GB than 50GB?

    Read the article

  • Code Igniter - URL Rewrite

    - by André Alçada Padez
    Hey, I'm trying to get a project to work, but I am having trouble with the rewrite module. I'm running WAMP on Windows XP. I changed httpd.conf to change the root of localhost to: DocumentRoot "C:/wamp/www/project/docroot/" My .htaccess has:

        RewriteEngine On
        RewriteBase /

    My Apache has the rewrite module activated. My base_url() in config.php is 'http://localhost/'. In routes.php I have:

        $route['default_controller'] = "home";
        $route['our-recipes'] = "recipes";

    and more pairs. When I point the browser to http://localhost/ I get the homepage of my site, but when I click on any internal link, like the one to 'our-recipes', it loads but I get the same homepage, with the new URL in the location bar. If I try to access 'http://localhost/recipes' I get the same result. This is my folder structure: [screenshot]. Can anyone please solve this for me??

    Read the article

  • How to roll my own index in c#?

    - by bill seacham
    I need a faster way to create an index file. The application generates pairs of items to be indexed. I currently add each pair, as it is generated, to a sorted dictionary and then write it out to a disk file. This works well until the number of items added exceeds one million, at which time it slows to a point that is unacceptable. There can be as many as three million data items to be indexed. I prefer to avoid a database because I do not want to significantly increase the size of the deployment package, which is now less than half a megabyte. I tried Access, but it is even slower than the sorted dictionary -- if it had an efficient bulk-load utility that might work, but I cannot find such a tool for Access. Is there a better way to roll my own index?
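
    The question is C#, but the classic technique here is an external merge sort: sort bounded chunks in memory, spill each to disk, then stream-merge the sorted runs, so no single huge sorted dictionary is ever held. A sketch of that idea in Python; file names, the tab-separated format, and the chunk size are arbitrary choices (keys/values are assumed to contain no tabs or newlines):

        import heapq, os, tempfile

        def _spill(sorted_chunk):
            """Write one sorted run to a temp file; return its path."""
            f = tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".run")
            for key, value in sorted_chunk:
                f.write(key + "\t" + value + "\n")
            f.close()
            return f.name

        def build_index(pairs, out_path="index.txt", chunk_size=100000):
            """pairs: iterable of (key, value) strings, in arbitrary order."""
            runs, chunk = [], []
            for pair in pairs:
                chunk.append(pair)
                if len(chunk) >= chunk_size:        # spill a sorted run
                    runs.append(_spill(sorted(chunk)))
                    chunk = []
            if chunk:
                runs.append(_spill(sorted(chunk)))
            files = [open(r) for r in runs]
            streams = [(tuple(line.rstrip("\n").split("\t")) for line in f)
                       for f in files]
            with open(out_path, "w") as out:        # k-way merge of the runs
                for key, value in heapq.merge(*streams):
                    out.write(key + "\t" + value + "\n")
            for f in files:
                f.close()
                os.unlink(f.name)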

    Read the article

  • Split a map using Groovy

    - by Tihom
    I want to split up a map into an array of maps. For example, if there is a map with 25 key/value pairs, I want an array of maps with no more than 10 elements in each map. How would I do this in Groovy? I have a solution which I am not excited about; is there a better Groovy version?

        static def splitMap(m, count) {
            if (!m) return
            def keys = m.keySet().toList()
            def result = []
            def num = Math.ceil(m?.size() / count)
            (1..num).each {
                def min = (it - 1) * count
                def max = it * count > keys.size() ? keys.size() - 1 : it * count - 1
                result[it - 1] = [:]
                keys[min..max].each { k ->
                    result[it - 1][k] = m[k]
                }
            }
            result
        }

    m is the map. count is the max number of elements within each map.
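
    For comparison, the chunking itself is small when the entry set is consumed in fixed-size slices from one iterator; a sketch in Python (the same idea translates to Groovy by walking the entry set in batches):

        from itertools import islice

        def split_map(m, count):
            """Split dict m into a list of dicts of at most `count` entries."""
            it = iter(m.items())
            chunks = []
            while True:
                chunk = dict(islice(it, count))   # take up to `count` entries
                if not chunk:
                    return chunks
                chunks.append(chunk)

        # split_map({i: i * i for i in range(25)}, 10) -> 3 dicts: 10 + 10 + 5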

    Read the article

  • Efficient most common suffix algorithm?

    - by taw
    I have a few GBs' worth of strings, and for every prefix I want to find the 10 most common suffixes. Is there an efficient algorithm for that? An obvious solution would be: store a sorted list of <string, count> pairs; identify by binary search the extent for the prefix we're searching; find the 10 highest counts in this extent. Possibly precompute it for all short prefixes, so it doesn't ever need to look at a large portion of the data. I'm not sure if that would actually be efficient at all. Is there a better way I overlooked? Answers must be real-time, but it can take as much preprocessing as necessary.
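
    As a baseline, the "obvious solution" described above fits in a few lines. A sketch with invented names, assuming the pairs are pre-sorted by string and the key list is built once up front; the "\uffff" sentinel marks the end of the prefix extent:

        import bisect
        from heapq import nlargest

        # items: list of (string, count) pairs sorted by string;
        # keys: the strings alone, precomputed once for binary search.
        def top_suffixes(items, keys, prefix, k=10):
            lo = bisect.bisect_left(keys, prefix)
            hi = bisect.bisect_left(keys, prefix + "\uffff")  # end of extent
            counts = {}
            for s, c in items[lo:hi]:
                suffix = s[len(prefix):]
                counts[suffix] = counts.get(suffix, 0) + c
            return nlargest(k, counts.items(), key=lambda kv: kv[1])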

    Read the article
