Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.

Page 155 of 256

  • Office jukebox systems

    - by Jona
    We're looking for a good office jukebox solution where staff can select songs via a web interface to be played over the central set of speakers. Must-haves:
      - Web interface
      - RSS or an easily scraped display of the currently playing songs
      - Ability to play MP3s and manage an ordered playlist
      - Good cataloguing of media
      - Multiple OSs supported as clients - Windows, Mac, Fedora Linux (probably satisfied by virtue of the web interface)
    We have tried XBMC, which worked well as a proof of concept, but its web interface is too immature and too buggy for a reliable multi-user solution; I believe the same will be true of Boxee. Nice-to-haves:
      - Ability to play music videos on a monitor
      - Ability to listen to radio streams, specifically Shoutcast and the BBC
    Running on Linux is a nice-to-have, but Windows solutions that work well would certainly be considered. I am aware of question 61404 and don't believe this to be a duplicate, given the specific requirements above.

    Read the article

  • What free OS should I use on my VPS?

    - by earlz
    Hello, I looked around a bit and didn't see any duplicate of this, so my question is: which free (open source) OS do you use on servers, and why do you use that OS? Background: I have a VPS at Linode, and there is a broad range of options for which OS I can put on it, including both 32- and 64-bit OSs. I just use it to run my small blog and for hosting random files; it's very low traffic. I have been using 64-bit Arch Linux on the VPS, and though I love the OS for general usage, for a server the constant breakage is troublesome. So I'm considering trying something new and am looking for suggestions.

    Read the article

  • Algorithm question: fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema, let us call it the test record), I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that is a duplicate of one already in the corpus. There is a reasonable chance the test record is a dupe, and a reasonable chance it is not.

    The records are about 12,000 bytes wide and the total count is about 150,000. There are 110 columns in the table schema, and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry-specific numbers. In both the corpus and the test record it is entered by hand and is semi-structured within an individual field.

    You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number, I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or once; the same goes for any other field. This makes weighting at the field level impractical, so I need a more fine-grained approach to get decent matching.

    My initial plan was to create a hash of hashes, the top level keyed by field name. For each field I would select all of the information from the corpus, clean it up, tokenize the sanitized data, and hash the tokens at the second level, with tokens as keys and frequencies as values. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record.

    My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records; f(t), the frequency with which a token t appears in the corpus; the probability o that a record is an original and not a duplicate; and the probability p that the test record really is a record x, given that the test and x contain the same t in the same field? And what about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is one, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors?

    Barring that, has anyone got a way to do this? I'm especially keen on suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow, thanks in advance for any replies you may see fit to give.
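
    A minimal sketch of the frequency-as-weight idea in Python, assuming the corpus fits in memory and using an IDF-style weight (log of corpus size over token frequency) per field; the tokenization and scoring here are illustrative assumptions rather than the precise probabilistic relationship asked about:

        import math
        from collections import defaultdict

        def build_token_weights(corpus, fields):
            """corpus: list of dicts keyed by field name. Returns {field: {token: weight}}."""
            n = float(len(corpus))
            counts = {f: defaultdict(int) for f in fields}
            for record in corpus:
                for f in fields:
                    for token in str(record.get(f, "")).lower().split():
                        counts[f][token] += 1
            # Rarer tokens carry more weight: weight = log(n / frequency).
            return {f: {t: math.log(n / c) for t, c in counts[f].items()}
                    for f in fields}

        def score(test_record, candidate, weights, fields):
            """Sum the weights of tokens the test record and a candidate share, field by field."""
            total = 0.0
            for f in fields:
                test_tokens = set(str(test_record.get(f, "")).lower().split())
                cand_tokens = set(str(candidate.get(f, "")).lower().split())
                for t in test_tokens & cand_tokens:
                    total += weights[f].get(t, 0.0)
            return total

    Ranking the corpus by this score and normalising by the test record's own total token weight gives a rough confidence figure; it is the "better than a completely arbitrary hack" end of the spectrum, not a principled answer to the probability question.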

    Read the article

  • AES acceleration for Java

    - by chris_l
    I want to encrypt/decrypt lots of small (2-10 kB) pieces of data. The performance is OK for now: on a Core 2 Duo I get about 90 MB/s of AES-256 (when using two threads), but I may need to improve that in the future - or at least reduce the impact on the CPU. Is it possible to use dedicated AES encryption hardware with Java (using JCE, or maybe a different API)? Would Java take advantage of special CPU features (SSE5?!) if I get a better CPU? Or are there faster JCE providers? (I tried SunJCE and Bouncy Castle - no big difference.) Other possibilities?

    Read the article

  • Good software to record desktop/screen operations [closed]

    - by juanmaflyer
    Possible Duplicate: What is the best software for desktop recording? I would like to record some tasks in a program (I want to make a video in which I explain how to use a particular program), and I haven't found good software for it yet. I have tried CamStudio, but when I hit the record button the computer starts working very slowly, even when recording a small area, so I can't imagine how slowly it would run when recording the full screen. Can you recommend good, fast software for recording the screen? Thanks a lot. Juan

    Read the article

  • Apply SetEnvIf after Apache RewriteRule

    - by coneybeare
    I have a working Apache rewrite rule:

        RewriteCond %{HTTP_HOST} ^.*foo.com
        RewriteRule (.*) http://bar.com$1 [R=301,QSA,L]

    and some working dontlog SetEnvIfs:

        SetEnvIf Request_URI "^/server-status$" dontlog
        SetEnvIf Request_URI "^/home/ping$" dontlog
        SetEnvIf Request_URI "^/haproxy-status$" dontlog
        SetEnvIf User-Agent ".*internal dummy connection.*" dontlog
        CustomLog /var/log/apache2/access.log combined env=!dontlog

    but I can't figure out how to stop the RewriteRule from logging a duplicate line. foo.com and bar.com are both on the same machine. I would expect this rule to work, but it did not:

        SetEnvIf Host "foo.com" dontlog

    I still get duplicates in the Apache log:

        10.250.18.97 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3"
        68.194.30.42 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3"

    where 10.250.18.97 is the server's IP. How can I prevent that RewriteRule from logging?

    Read the article

  • StringBuilder/StringBuffer vs. "+" Operator

    - by matt.seil
    I'm reading "Better, Faster, Lighter Java" (by Bruce Tate and Justin Gehtland) and am familiar with the readability requirements in agile type teams, such as what Robert Martin discusses in his clean coding books. On the team I'm on now, I've been told explicitly not to use the "+" operator because it creates extra (and unnecessary) string objects during runtime. But this article: http://www.ibm.com/developerworks/java/library/j-jtp01274.html Written back in '04 talks about how object allocation is about 10 machine instructions. (essentially free) It also talks about how the GC also helps to reduce costs in this environment. What is the actual performance tradeoffs between using "+," "StringBuilder," or "StringBuffer?" (In my case it is StringBuffer only as we are limited to Java 1.4.2.) StringBuffer to me results in ugly, less readable code, as a couple of examples in Tate's book demonstrates. And StringBuffer is thread-synchronized which seems to have its own costs that outweigh the "danger" in using the "+" operator. Thoughts/Opinions?

    Read the article

  • How can I combine non-identical disks efficiently?

    - by Odys
    I have several non-identical disks of various capacities that I want to combine (somehow). Since there are no duplicate models, I can't set up RAID across any of them. Is there a way to use them efficiently while staying safe? What I have in mind is software that would use them as if they were RAID 5 or something similar. I really don't care about maximum speed; in the end I want as few logical drives as possible. Also, I don't mind spending some money on hardware if needed.

    Read the article

  • .NET Deployment of Interface/Classes for Command Pattern Question

    - by Jonno
    In theory I would like to produce two projects: 1) ASP.NET (Server A) and 2) the DAL (Server B). I would like to use command objects to communicate with the DAL: ASP.NET instantiates a command class, e.g. CmdGetAllUsers, which implements the IMyCommand interface, and sends it to the DAL (using ASMX or WCF). My question is: would the class definition of CmdGetAllUsers need to exist on the DAL server, or would having the interface definition be enough? My goal is to reduce the need to redeploy the DAL code and keep it as a fairly simple pass-through layer. Many thanks for your time.

    Read the article

  • Parse and charset: why doesn't my script work?

    - by Rebol Tutorial
    I want to extract the attribute1 and attribute3 values only. I don't understand why charset doesn't seem to work in my case to "skip" any other attributes (attribute3 is not extracted as I would like):

        content: {<tag attribute1="valueattribute1" attribute2="valueattribute2" attribute3="valueattribute3"> </tag>
        <tag attribute2="valueattribute21" attribute1="valueattribute11" > </tag>
        }
        attribute1: [{attribute1="} copy valueattribute1 to {"} thru {"}]
        attribute3: [{attribute3="} copy valueattribute3 to {"} thru {"}]
        spacer: charset reduce [tab newline #" "]
        letter: complement spacer
        to-space: [some letter | end]
        attributes-rule: [
            (valueattribute1: none valueattribute3: none)
            [attribute1 | none] to-space [attribute3 | none]
            (print valueattribute1 print valueattribute3)
            | [attribute3 | none] to-space [attribute1 | none]
            (print valueattribute3 print valueattribute1 valueattribute1: none valueattribute3: none)
            | none
        ]
        rule: [any [to {<tag } thru {<tag } attributes-rule {>} to {</tag>} thru {</tag>}] to end]
        parse content rule

    The output is:

        >> parse content rule
        valueattribute1
        none
        == true

    Read the article

  • Uploadify and Image Compression

    - by Ilya Biryukov
    Hi, I am using Uploadify on one of my client's web sites to let them upload a large number of pictures at once to their photo gallery. Lately I am seeing issues: they tend to upload large photographs (3 MB and above). Is it possible to compress the images (reduce their size) on the client side instead of on the server (the way Facebook does it)? I know I could easily do it on the server, but I am working on another project right now where I am expecting a large flow of photo uploads, and it would require a significant amount of CPU time to process them all. So I thought I'd ask about client-side processing. Thanks.

    Read the article

  • Google App Engine Database Index

    - by fjsj
    I need to store an undirected graph in a Google App Engine database. For optimization purposes, I am thinking of using database indexes. Using Google App Engine, is there any way to define the columns of a database table to create its index? I will need some optimization, since my app uses the stored undirected graph for content-based filtering in item recommendation, and the recommender algorithm also updates the weights of some of the graph's edges. If it is not possible to use database indexes, please suggest another method to reduce query time for the graph table. I believe my algorithm does more data-retrieval operations on the graph table than write operations. PS: I am using Python.
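
    A minimal sketch of how this is usually expressed with App Engine's Python datastore API (google.appengine.ext.db), assuming the graph is stored as one entity per edge; the Edge model and property names are illustrative assumptions. Single-property indexes are built automatically for indexed properties, and queries that combine filters or sort orders need a composite index declared in index.yaml rather than on the "table" itself:

        from google.appengine.ext import db

        class Edge(db.Model):
            # Hypothetical edge entity: one entity per undirected edge.
            node_a = db.StringProperty(required=True)              # indexed by default
            node_b = db.StringProperty(required=True)              # indexed by default
            weight = db.FloatProperty(default=1.0, indexed=False)  # never filtered on, so skip its index

        def neighbours(node_id, limit=100):
            # Uses the automatic single-property index on node_a; an undirected edge stored
            # once also needs the mirror query on node_b (or store each edge twice).
            return Edge.all().filter('node_a =', node_id).fetch(limit)

    Marking frequently rewritten but never-queried properties (such as the edge weight here) as indexed=False also makes the weight updates cheaper, since the datastore does not have to rewrite index rows for them.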

    Read the article

  • Take Complete Image of CRM Server Application

    - by nicorellius
    I have heard of snapshots and ghost images for this kind of thing, but I have never used such a tool to actually clone a hard drive. I think Norton Partition Magic can do something like this as well, but I haven't tried it. So my question is this: how can I duplicate a CRM server application exactly, so that I can transfer it to another system? I have a CRM server running LAMP (Linux, Apache, MySQL, and PHP), and I urgently need to transfer the data to another system without installing and configuring the dependencies and then doing the same for the software itself. Has anyone done this, or does anyone know how to do it?

    Read the article

  • PHP APC - Why is loading cached array op codes slow?

    - by Aaron Kreider
    I'm using APC to reduce the loading time of my PHP files. My files load very fast, except for one file in which I define more than 100 arrays: this 270 kB file takes 200 ms to load. The rest of the files are full of objects, methods, and functions. I'm wondering: does opcode caching not work as well for arrays? My APC cache should be big enough to handle all of my classes; currently 40% of the cache is free, and my hit rate is 99%. My settings:

        apc.shm_size = 32M
        apc.max_file_size = 1M
        apc.shm_segments = 1

    This is APC 3.1.6, with PHP 5.2, Apache 2, and Windows Vista.

    Read the article

  • How can I query only __key__ on a Google App Engine PolyModel child?

    - by Gabriel
    So the situation is: I want to optimize my code for counting records. I have a parent Model class Base, a PolyModel class Entry, and a child class of Entry, Article. How can I query just the Article keys, so that I reduce the query load but still get the Article count? My first thought was to use:

        q = db.GqlQuery("SELECT __key__ from Article where base = :1", i_base)

    but it turns out GqlQuery doesn't like that, because Article entities are actually stored in a table called Entry. Would it be possible to query the class attribute? Something like:

        q = db.GqlQuery("select __key__ from Entry where base = :1 and :2 in class", i_base, 'Article')

    Neither of these works. It turns out the answer is even easier, but I am going to finish this question because I looked everywhere for it:

        q = db.GqlQuery("select __key__ from Entry where base = :1 and class = :2", i_base, 'Article')
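
    For the same count-only goal, the old db API also has a keys-only query mode, which avoids hand-written GQL; a small sketch, assuming the Article PolyModel subclass from the question (querying through the subclass adds the class filter automatically):

        def count_articles(i_base, limit=1000):
            # Keys-only query against the PolyModel child: only keys are fetched,
            # which is cheaper than retrieving full Entry entities.
            q = Article.all(keys_only=True).filter('base =', i_base)
            return q.count(limit)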

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working in two ways. One way is slow but uses a limited amount of memory; the second is much faster but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1
        --------------------------------------------

    Which method should I use, and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because, even though it uses more memory, it takes a third of the time and would therefore reduce the number of concurrent requests.
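
    A back-of-envelope way to weigh that tradeoff, using the per-image averages above and Little's law (concurrent requests roughly equal arrival rate times service time); the 50 requests-per-second rate is purely an illustrative assumption:

        # Rough sizing sketch; the arrival rate is an assumption, the per-request
        # figures are the averaged benchmark numbers from the question.
        rate = 50.0                                    # image requests per second (assumed)
        methods = {"One": (0.328, 540036), "Two": (0.101, 3269600)}
        for name, (seconds, peak_bytes) in methods.items():
            concurrent = rate * seconds                # Little's law: L = lambda * W
            peak_mb = concurrent * peak_bytes / 1e6    # worst-case simultaneous memory
            print("%s: ~%.1f concurrent, ~%.1f MB peak" % (name, concurrent, peak_mb))

    At that rate the fast method holds about 5 requests in flight (~16.5 MB) against about 16 for the slow one (~8.9 MB), so the lower concurrency claws back part of the 6x per-request memory cost; whether the remaining difference matters depends on the memory budget of the rest of the PHP pool.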

    Read the article

  • How can I programmatically construct the object reference?

    - by Bryan
    Let's just say that I have three textboxes: TextBox1, TextBox2, TextBox3. Normally, if I wanted to change the text, for example, I would write TextBox1.Text = "Whatever" and so on. For what I'm doing right now, I would like something like (TextBox & "i").Text. That obviously isn't the syntax I need to use; I'm just using it as an example of what I need to do. So how can I do something like this? The main reason is to reduce code with a loop. Please keep in mind that I'm not actually changing the text of the textboxes; I'm simply using that as an example to get the point across.
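
    The question is about VB.NET/WinForms, but the underlying idea - building the member name at runtime and looking the member up by that name - can be sketched in Python for illustration (the Form class and textbox values below are hypothetical stand-ins):

        class Form:
            def __init__(self):
                # Stand-ins for TextBox1..TextBox3; in WinForms these would be controls.
                self.TextBox1 = "first"
                self.TextBox2 = "second"
                self.TextBox3 = "third"

        form = Form()
        for i in range(1, 4):
            name = "TextBox%d" % i                # construct the reference name at runtime
            print(name, getattr(form, name))      # look the attribute up by that name

    In WinForms the analogous loop usually indexes the form's Controls collection by the control's name rather than using reflection.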

    Read the article

  • Which programs are using my internet connection? [closed]

    - by Eray Alakese
    Possible Duplicate: Monitor all and any internet traffic from my home PC - what should I use? My internet connection has a 4 GB quota, but the quota runs out very fast. I set up NetLimiter to see which programs are using my internet connection, but there isn't any weird program there - only firefox.exe. So how can I see all the programs that are using my connection and eating into my bandwidth quota? For example, is there a CMD command for this, like netstat?

    Read the article

  • Self-Configuring Classes W/ Command Line Args: Pattern or Anti-Pattern?

    - by dsimcha
    I've got a program where a lot of classes have really complicated configuration requirements. I've adopted the pattern of decentralizing the configuration: each class takes the command-line/configuration-file arguments in its constructor, parses them, and does whatever it needs with them. (These are very coarse-grained classes that are only instantiated a few times, so there is absolutely no performance issue here.) This avoids shotgun surgery to plumb new options through all the levels they would otherwise need to be passed through, and it avoids specifying each configuration option in multiple places (where it's parsed and where it's used). What are the advantages and disadvantages of this style of programming? It seems to reduce separation of concerns, in that every class is now doing configuration work, and to make programs less self-documenting, because which parameters a class takes becomes less explicit. On the other hand, it seems to increase encapsulation, in that it makes each class more self-contained: no other part of the program needs to know exactly which configuration parameters a class needs.
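
    A minimal sketch of the pattern in Python (the question itself is language-agnostic; the class names and options are illustrative assumptions): each coarse-grained class parses only the options it owns out of the shared argument list in its constructor, so adding an option touches exactly one class.

        import argparse
        import sys

        class Downloader:
            """Pulls only its own options out of the shared argv, ignoring the rest."""
            def __init__(self, argv):
                p = argparse.ArgumentParser(add_help=False)
                p.add_argument('--download-threads', type=int, default=4)
                p.add_argument('--download-timeout', type=float, default=30.0)
                opts, _rest = p.parse_known_args(argv)   # unknown flags belong to other classes
                self.threads = opts.download_threads
                self.timeout = opts.download_timeout

        class Indexer:
            def __init__(self, argv):
                p = argparse.ArgumentParser(add_help=False)
                p.add_argument('--index-dir', default='./index')
                opts, _rest = p.parse_known_args(argv)
                self.index_dir = opts.index_dir

        if __name__ == '__main__':
            argv = sys.argv[1:]
            downloader = Downloader(argv)   # every constructor sees the full argument list
            indexer = Indexer(argv)

    The drawbacks raised in the question are visible here too: no single place documents the full option set, and nothing stops two classes from silently claiming the same flag.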

    Read the article

  • Using Different Mappings for Uppercase and Lowercase of the Same Key

    - by cosmic.osmo
    I'm trying to use AutoHotkey to map some key combinations in a way that respects upper and lower case, but I cannot get it to work. For example, I want:

        AppsKey + L          types "a"
        AppsKey + Shift + L  types "b"

    A. I've tried these, but both combinations only give "b" ("+" appears to be the symbol for Shift):

        AppsKey & l::Send a
        AppsKey & +l::Send b

    B. I've tried this, but it won't compile and gives an "invalid hotkey" error:

        AppsKey & l::Send a
        AppsKey & Shift & l::Send b

    C. I've tried this, but it won't compile and gives a "duplicate hotkey" error (which makes sense, as hotkey definitions appear to be case-insensitive):

        AppsKey & l::Send a
        AppsKey & L::Send b

    Is this type of mapping possible in AutoHotkey? What am I missing to make it work?

    Read the article

  • Transfer Win8 user settings between profiles [closed]

    - by GlennFerrieLive
    Possible Duplicate: How do I sync grouped Windows Store apps between devices? Is there a way for me to copy/save/transfer my "start menu" configuration, meaning the grouping and ordering of the elements on the Start screen, between user profiles? Is it in the registry? I am open to manual or "coded" suggestions. Update: I'd like to veto closing this as a duplicate. I am aware of the "roaming" profile behavior; I want to copy my configuration between profiles on the same machine - different profile, different person. I like the way my Start screen is set up and want to set my wife up with the same layout.

    Read the article

  • top-k selection/merge

    - by tcurdt
    I have n sorted lists. These lists are quite long (300,000+ tuples). Selecting the top 10 of an individual list is of course trivial - they are right at the head of the list. Where it gets more interesting is when I want the top 10 across all the sorted lists. The question is whether there is an algorithm that calculates the combined top 10 in the correct order while cutting off the long tails of the lists; the goal is to reduce the required space. And if there is: how does one find the limit at which it is safe to cut? Note: the actual counts are not important, only the order is.
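
    A minimal sketch of the usual k-way-merge answer in Python, assuming each list is already sorted in the order of interest. Cutting each list at k is safe: the element at position k+1 of any single list has k better elements ahead of it in that same list, so it can never appear in the combined top k.

        import heapq
        from itertools import islice

        def top_k(sorted_lists, k=10):
            """Combined top-k of several individually sorted lists, preserving order."""
            trimmed = (lst[:k] for lst in sorted_lists)    # the safe cut: keep only k per list
            return list(islice(heapq.merge(*trimmed), k))  # lazy k-way merge, stop after k items

        # Example with ascending order:
        # top_k([[1, 4, 9], [2, 3, 20], [5, 6, 7]], k=3)  ->  [1, 2, 3]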

    Read the article

  • Avoiding dog-piling or thundering herd in a memcached expiration scenario

    - by Quintin Par
    I have the result of a query that is very expensive: it is the join of several tables plus a map-reduce job. The result is cached in memcached for 15 minutes; once the cache expires, the queries are obviously run again and the cache is warmed. But at the point of expiration the thundering-herd problem can kick in. One way to fix this, which I do right now, is to run a scheduled task that kicks in at the 14th minute, but that looks very suboptimal to me. Another approach I like is nginx's "proxy_cache_use_stale updating;" mechanism: the web server continues to deliver the stale cache while a single thread kicks in the moment expiration happens and updates the cache. Has someone applied this to a memcached scenario, even though I understand it would have to be a client-side strategy? If it matters, I use Django.
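
    A minimal sketch of the stale-while-revalidate idea done client-side with Django's cache API; the key names, timings, and compute callable are illustrative assumptions. The value is stored with a hard memcached TTL longer than the logical 15 minutes plus a soft-expiry timestamp; the first caller to notice the soft expiry takes a short-lived lock (cache.add is atomic) and recomputes, while everyone else keeps serving the stale value.

        import time
        from django.core.cache import cache

        SOFT_TTL = 15 * 60   # logical freshness window (seconds)
        HARD_TTL = 20 * 60   # memcached keeps the stale value around a bit longer
        LOCK_TTL = 60        # at most one worker recomputes at a time

        def get_report(compute):
            entry = cache.get('report')
            if entry is not None:
                value, soft_expiry = entry
                # Fresh value, or someone else already holds the refresh lock: serve what we have.
                if time.time() < soft_expiry or not cache.add('report:lock', 1, LOCK_TTL):
                    return value
            # Cold cache, or we won the lock: recompute and repopulate.
            value = compute()   # the expensive join / map-reduce result
            cache.set('report', (value, time.time() + SOFT_TTL), HARD_TTL)
            cache.delete('report:lock')
            return value

    With this in place the scheduled 14th-minute job becomes unnecessary; only a completely cold cache still exposes a brief herd, which the same cache.add lock can be extended to cover.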

    Read the article

  • Win 7: Are motherboard / chipset drivers part of Windows Update?

    - by Horst Walter
    Does Windows 7 Update automatically update chipset and motherboard drivers? I am basically asking about mainstream boards and processors (ASUS, Intel, AMD, ...), so the question should be read "in general"; certainly there are exotic systems that are not updated automatically. This is a duplicate of the dangling question "Will Windows 7 Update Find Chipset Drivers?" (one answer, no accepted answer, and a comment that directly questions the answer). I am interested in this topic as well, so I dare to ask the question again.

    Read the article
