Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.

Page 40/256

  • SQL Server Express Failed to Install

    - by JasCav
    I am attempting to install SQL Server Express (as part of the Visual Studio 2010 Professional installation), but it is failing. I am receiving this error log:

        [06/22/11,16:31:39] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/22/11,16:31:40] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/22/11,16:32:12] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/22/11,16:32:12] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/22/11,16:32:12] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/22/11,17:07:55] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/22/11,17:07:55] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,10:39:33] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/23/11,10:39:33] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,10:40:22] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/23/11,10:40:22] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/23/11,10:40:22] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/23/11,10:53:48] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/23/11,10:53:48] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,13:19:26] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/23/11,13:19:26] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,16:47:36] Microsoft SQL Server 2008 Express Service Pack 1 (x64): [2] Error code -2068643839 for this component is not recognized.
        [06/23/11,16:47:36] Microsoft SQL Server 2008 Express Service Pack 1 (x64): [2] Component Microsoft SQL Server 2008 Express Service Pack 1 (x64) returned an unexpected value.
        ***EndOfSession***

    I'm reading various articles which point towards something being wrong in the registry, but I can't find anything specific. Any suggestions for what I can do to fix this?

    Read the article

  • Synaptic returns error

    - by donvoldy666
    I get the following error message when I run Synaptic, and I can't install any programs:

        SystemError: W:Ignoring file 'getdeb.list.bck' in directory '/etc/apt/sources.list.d/' as it has an invalid filename extension,
        W:Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages),
        W:Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages),
        W:Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages),
        W:Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages),
        W:Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages),
        W:Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages),
        W:Duplicate sources.list entry http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/ppa.launchpad.net_ubuntu-mozilla-security_ppa_ubuntu_dists_precise_main_binary-i386_Packages),
        E:Encountered a section with no Package: header,
        E:Problem with MergeList /var/lib/apt/lists/packages.rssowl.org_ubuntu_dists_precise_main_i18n_Translation-en,
        E:The package lists or status file could not be parsed or opened.

    I'm new to Linux, so please help me out.

    Read the article

  • How to reduce the size (disk space) of Windows 8?

    - by humanityANDpeace
    This question is about what I can do to reduce the size that Windows 8 uses.

    Background

    For example: at present, with only one program installed (MS Access 2007), about 15 GB of my hard disk space is used. I have little space (it's a 17 GB partition on an SSD disk). I would like solutions such as:

    Removing files that are not really needed (drivers for hardware not actually in the system)
    Removing help files that are not really needed (documentation)
    pagefile.sys (assuming I have 4 GB of RAM and no real need for swapping)
    hiberfil.sys (used for hibernate and sleep; I need that, though I would regain about 4 GB of space)

    Ideally I would like to delete mostly files that I am most likely never to need, though I have no good idea where to start there. Since my hardware setup will not change, I would be willing to delete all the drivers that Windows 8 carries for hardware I do not have. The question is about ways to reduce the space that Windows 8 uses.

    Read the article

  • How to use map/reduce to handle more than 10000 unique keys for grouping in MongoDB?

    - by Magnus Johansson
    I am using MongoDB v1.4 and the mongodb-csharp driver, and I am trying to group on a data store that has more than 10000 keys, so I get this error:

        assertion: group() can't handle more than 10000 unique keys

    using C# code like this:

        Document query = new Document().Append("group", new Document()
            .Append("key", new Document().Append("myfieldname", true))
            .Append("$reduce", new CodeWScope("function(obj,prev) { prev.count++; }"))
            .Append("initial", new Document().Append("count", 0))
            .Append("ns", "myitems"));

    I read that I should use map/reduce, but I can't figure out how. Can somebody please shed some light on how to use map/reduce? Or is there any other way to get around this limitation? Thanks.
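
    For orientation, here is a minimal sketch of the map/reduce equivalent of that group() call, written with PyMongo for brevity rather than the mongodb-csharp driver. The myitems collection and myfieldname field come from the question; the database and output-collection names are made up, and the legacy map_reduce helper (removed in PyMongo 4) is assumed.

        # Sketch: the same count-per-key aggregation via map/reduce, which is
        # not subject to group()'s 10000-unique-key limit. Uses the legacy
        # PyMongo map_reduce helper (removed in PyMongo 4); the database and
        # output-collection names are hypothetical.
        from pymongo import MongoClient
        from bson.code import Code

        db = MongoClient()["mydb"]

        # Emit one (key, 1) pair per document; the server groups emits by key.
        mapper = Code("function() { emit(this.myfieldname, 1); }")

        # Sum the partial counts for each distinct key.
        reducer = Code("function(key, values) {"
                       "  var total = 0;"
                       "  for (var i = 0; i < values.length; i++) total += values[i];"
                       "  return total;"
                       "}")

        result = db.myitems.map_reduce(mapper, reducer, "myitems_counts")
        for doc in result.find():
            print(doc["_id"], doc["value"])   # key and its document count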

    Read the article

  • In MongoDB, how can I replicate this simple query using map/reduce in ruby?

    - by Matthew Rathbone
    Hi, so using the regular MongoDB library in Ruby, I have the following query to find the average filesize across a set of 5001 documents:

        avg = 0
        total = collection.count()
        Rails.logger.info "#{total} asset creation stats in the system"
        collection.find().each { |row| avg += (row["filesize"] * (1 / total.to_f)) if row["filesize"] }

    It's pretty simple, so I'm trying to do the same using map/reduce as a learning exercise. This is what I came up with:

        map = 'function() { emit("filesizes", {size: this.filesize, num: 1}); }'
        reduce = 'function(k, vals) {
          var result = {size: 0, num: 0};
          for (var x in vals) {
            var new_total = result.num + vals[x].num;
            result.num = new_total;
            result.size = result.size + (vals[x].size * (vals[x].num / new_total));
          }
          return result;
        }'
        @results = collection.map_reduce(map, reduce)

    However, the two queries come back with two different results! What am I doing wrong?
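
    For what it's worth, the likely culprit is in the reduce function: when it folds in a new partial result it scales the incoming average by vals[x].num / new_total, but never rescales the running average result.size by result.num / new_total, and reduce must also tolerate being re-run over partial results. A sketch of one fix, keeping a running sum instead of a running average, shown here with PyMongo; the filesize field follows the question, while the collection and output names are made up:

        # Sketch of a fix: keep a running SUM rather than a running average,
        # so reduce stays associative and safe to re-run over partial results.
        # PyMongo is used for brevity; names other than 'filesize' are invented.
        from pymongo import MongoClient
        from bson.code import Code

        coll = MongoClient()["mydb"]["assets"]   # hypothetical names

        mapper = Code("function() {"
                      "  if (this.filesize)"
                      "    emit('filesizes', {sum: this.filesize, num: 1});"
                      "}")

        # Reduce may be called many times on partial results, so it must
        # return the same shape it receives and must not lose information;
        # summing satisfies both, unlike the incremental average above.
        reducer = Code("function(key, vals) {"
                       "  var r = {sum: 0, num: 0};"
                       "  vals.forEach(function(v) { r.sum += v.sum; r.num += v.num; });"
                       "  return r;"
                       "}")

        out = coll.map_reduce(mapper, reducer, "avg_out")
        doc = out.find_one()
        print(doc["value"]["sum"] / doc["value"]["num"])   # the average filesize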

    Read the article

  • Large enterprise application - clients wish to use duplicate e-mail addresses?

    - by Alex Key
    I'd like to know people's opinions, reactions to clients, and technical workarounds (if applicable) for the issue of an enterprise application where a client wishes to use duplicate e-mail addresses. To clarify, by duplicate e-mail addresses I mean multiple users within the same client system having the same e-mail address: so not just using generic e-mail addresses, but using the e-mail address of another user. e.g.:

    Bob Jenkins: [email protected]
    James Jeffery: [email protected]

    Context

    To give this some further context: in the e-learning sector it is common that although all staff in an organisation must complete e-learning, they may not have their own e-mail address, so they choose to use their manager's e-mail address. Albeit against good practice on public sites, it's a requirement we've seen over and over again where an organisation is split between office-based staff and, for example, staff in a warehouse.

    Where the problem lies

    Mr Steak, good point: the problem lies in password resets, and perhaps in situations where semi-personal information could be sent (not confidential enough to worry about the insecurities of e-mail), perhaps reminders for specific system actions, which would be confusing for the unintended party to see (if misreading the e-mail's intended recipient).

    Possible solutions

    The system knowing the difference between a "for the attention of" e-mail and one direct to the person, and including this in the body text
    Using alternative communication such as SMS
    Simply not having e-mails sent to people who are not the intended recipient
    Providing an e-mail service ourselves (not really viable for a corporate IT dept)

    Thoughts?

    Read the article

  • What algorithms can I use to detect if articles or posts are duplicates?

    - by michael
    I'm trying to detect if an article or forum post is a duplicate entry within the database. I've given this some thought and come to the conclusion that someone who duplicates content will do so using one of these three methods (in descending difficulty to detect):

    simple copy-paste of the whole text
    copy and paste of parts of the text, merged with their own
    copying an article from an external site and masquerading as their own

    Prepping Text For Analysis

    Basically, I strip any anomalies; the goal is to make the text as "pure" as possible. For more accurate results, the text is "standardized" by:

    Stripping duplicate white space and trimming leading and trailing white space.
    Standardizing newlines to \n.
    Removing HTML tags.
    Stripping URLs, using the RegEx known as the Daring Fireball pattern.
    I use BB code in my application, so that goes too.
    Converting accented and foreign (besides English) characters to their non-accented form.

    I store information about each article in (1) a statistics table and (2) a keywords table.

    (1) Statistics Table

    The following statistics are stored about the textual content (much like this post):

    text length
    letter count
    word count
    sentence count
    average words per sentence
    automated readability index
    Gunning fog score

    For European languages, Coleman-Liau and the Automated Readability Index should be used, as they do not use syllable counting and so should produce a reasonably accurate score.

    (2) Keywords Table

    The keywords are generated by excluding a huge list of stop words (common words), e.g., 'the', 'a', 'of', 'to', etc.

    Sample Data

    text_length, 3963
    letter_count, 3052
    word_count, 684
    sentence_count, 33
    word_per_sentence, 21
    gunning_fog, 11.5
    auto_read_index, 9.9
    keyword 1, killed
    keyword 2, officers
    keyword 3, police

    It should be noted that once an article gets updated, all of the above statistics are regenerated and could be completely different values.

    How could I use the above information to detect whether an article that's being published for the first time already exists within the database?

    I'm aware that anything I design will not be perfect, the biggest risks being that (1) content that is not a duplicate will be flagged as duplicate and (2) the system allows duplicate content through. So the algorithm should generate a risk-assessment number from 0 (no duplicate risk) through 5 (possible duplicate) to 10 (duplicate). Anything above 5 means there is a good possibility that the content is a duplicate; in this case the content could be flagged, linked to the articles that are possible duplicates, and a human could decide whether to delete or allow it.

    As I said before, I'm storing keywords for the whole article; however, I wonder if I could do the same on a per-paragraph basis. That would also mean further separating my data in the DB, but it would also make it easier to detect case (2) above. I'm thinking of a weighted average between the statistics, but in what order, and what would the consequences be...
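
    To make the "weighted average between the statistics" idea concrete, here is a minimal sketch of one way to combine the stored statistics and keywords into the 0 to 10 risk score described above. The field names follow the sample data; the weights and the 0.4/0.6 split are illustrative assumptions, not a tuned design:

        # Sketch: combine the stored statistics and keywords into a 0-10
        # duplicate-risk score. Weights and thresholds are illustrative.

        def stat_similarity(a, b):
            # Average per-field closeness of two statistics dicts,
            # where each field contributes 1 - |a - b| / max(a, b).
            fields = ["text_length", "letter_count", "word_count",
                      "sentence_count", "word_per_sentence",
                      "gunning_fog", "auto_read_index"]
            scores = []
            for f in fields:
                hi = max(a[f], b[f])
                scores.append(1.0 if hi == 0 else 1.0 - abs(a[f] - b[f]) / hi)
            return sum(scores) / len(scores)

        def keyword_overlap(a, b):
            # Jaccard overlap of two keyword sets, 0.0 to 1.0.
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if (a | b) else 0.0

        def duplicate_risk(stats_a, kw_a, stats_b, kw_b):
            # Keywords get more weight than raw statistics, since statistics
            # alone collide easily on articles of similar length and style.
            combined = (0.4 * stat_similarity(stats_a, stats_b)
                        + 0.6 * keyword_overlap(kw_a, kw_b))
            return round(10 * combined, 1)   # 0 = no risk, 10 = duplicate

        old = {"text_length": 3963, "letter_count": 3052, "word_count": 684,
               "sentence_count": 33, "word_per_sentence": 21,
               "gunning_fog": 11.5, "auto_read_index": 9.9}
        new = {"text_length": 4010, "letter_count": 3100, "word_count": 690,
               "sentence_count": 34, "word_per_sentence": 20,
               "gunning_fog": 11.2, "auto_read_index": 9.7}
        print(duplicate_risk(old, ["killed", "officers", "police"],
                             new, ["killed", "officers", "police"]))  # ~9.9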

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – MapReduce.

    What is MapReduce?

    MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() procedure and a Reduce() procedure: Map() performs filtering and sorting operations on the data, whereas Reduce() performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow.

    Advantages of MapReduce Procedures

    The MapReduce framework usually contains distributed servers and runs various tasks in parallel. Various components manage the communication between the nodes holding the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. If any node fails during this process, the framework maintains high availability: the other available nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data across thousands of processing machines.

    How does the MapReduce Framework work?

    A typical MapReduce deployment contains petabytes of data and thousands of nodes. Here is a basic explanation of the MapReduce procedures across this massive commodity of servers.

    Map() Procedure: There is always a master node in this infrastructure which takes the input. Right after taking the input, the master node divides it into smaller sub-inputs, or sub-problems. These sub-problems are distributed to worker nodes. A worker node processes them and does the necessary analysis. Once a worker node completes its sub-problem, it returns the result to the master node.

    Reduce() Procedure: All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects these answers and aggregates them into the answer to the original big problem.

    The MapReduce framework runs the Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel, and once each worker node has completed its task it sends the result back to the master node to be compiled into a single answer. This procedure can be very effective when it is implemented on a very large amount of data (Big Data).
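
    To make the Map()/Reduce() division concrete, here is a minimal single-machine sketch in Python (no Hadoop, no cluster, purely an illustration of the shape): mappers emit key/value pairs, a shuffle step groups them by key, and reducers summarize each group.

        # Minimal single-process sketch of the MapReduce shape:
        # map -> shuffle -> reduce, here counting words.
        from collections import defaultdict

        def map_phase(record):
            # Map(): filter/transform one input record into (key, value) pairs.
            for word in record.split():
                yield (word.lower(), 1)

        def shuffle(pairs):
            # Shuffle: group all emitted values by key (the framework's job).
            groups = defaultdict(list)
            for key, value in pairs:
                groups[key].append(value)
            return groups

        def reduce_phase(key, values):
            # Reduce(): summarize one key's group; here, a simple count.
            return key, sum(values)

        records = ["Big Data Big Cluster", "Data moves to compute"]
        pairs = [p for r in records for p in map_phase(r)]
        for key, values in shuffle(pairs).items():
            print(reduce_phase(key, values))   # e.g. ('big', 2), ('data', 2), ...

    In SQL terms this particular example is roughly SELECT word, COUNT(*) FROM records GROUP BY word, which is the same equivalence summarized in a single statement below.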
    The MapReduce framework has five different steps:

    Preparing the Map() input
    Executing the user-provided Map() code
    Shuffling the Map output to the Reduce processors
    Executing the user-provided Reduce() code
    Producing the final output

    Here is the dataflow of the MapReduce framework:

    Input Reader
    Map Function
    Partition Function
    Compare Function
    Reduce Function
    Output Writer

    In a future blog post of this 21-day series we will explore the various components of MapReduce in detail.

    MapReduce in a Single Statement: MapReduce is the equivalent of the SELECT and GROUP BY of a relational database, applied to a very large database.

    Tomorrow: In tomorrow's blog post we will discuss the Buzz Word – HDFS.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Does relying heavily on IntelliSense and documentation while coding make you a bad programmer? [duplicate]

    - by sharp12345
    This question already has an answer here: Forgetting basic language functions due to use of IDE, over reliance? [duplicate] (4 answers)

    Is a programmer required to learn and memorize all syntax, or is it OK to keep some documentation handy? Would it affect the way that managers look at coders? What are the downsides of depending on IntelliSense, auto-complete technologies, and PDF documentation?

    Read the article

  • How do I remove this duplicate sources.list entry?

    - by blade19899
    Sometimes, when I run sudo apt-get update or install an app with sudo apt-get install, I get the error below at the bottom of the output in the gnome-terminal, and I can't remove it. I searched for it in the software sources and in Y PPA Manager, and I am not sure I want to mess with the sources list since I am a bit of a noob.

        Duplicate sources.list entry http://archive.canonical.com/ubuntu/ oneiric/partner amd64 Packages
        (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_oneiric_partner_binary-amd64_Packages)

    Read the article

  • How to remove the unwanted entries from the boot menu? [duplicate]

    - by Sen
    Possible Duplicate: Is there a way to remove/hide old kernel versions?

    When my system boots up, a big list of some six options is shown besides the Windows OS option. They look like:

        Ubuntu 10.04 - linux kernel 2.6.32-25
        Ubuntu 10.04 - linux kernel 2.6.32-25 (recovery)
        Ubuntu 10.04 - linux kernel 2.6.32-26
        Ubuntu 10.04 - linux kernel 2.6.32-26 (recovery)
        ...etc
        Memory Test
        Windows XP Professional

    How do I remove the unwanted entries from this list?

    Read the article

  • How do I share files between my Ubuntu host and XP Pro virtual machine? [duplicate]

    - by jake
    Possible Duplicate: Shared folders in XP virtualbox guest

    I'm using VirtualBox 4.1.8 to run Windows XP Pro. I have ironed out all the other kinks on my own, but I still can't access a file share. The sole purpose of running a VM is to get iTunes for my iPhone, but I can't get to my music files from my guest OS. I'm also new to Ubuntu, so I'm not sure if I have done all the updates right or not; any help would be welcome.

    Read the article

  • Where can I find free simple 3D models? [duplicate]

    - by fibo-Nacci
    This question is an exact duplicate of: What are good sites that provide free media resources for hobby game development? [closed]

    I'm learning OpenGL. Unfortunately I can't create 3D models, but I would like to write some really simple games to improve my programming skills. I need some really basic .obj files, each with one BMP or JPEG texture. Where can I download some for free? Thanks in advance.

    Read the article

  • Is the structure of my site's navigation (via price/service tables) considered 'Duplicate Content' by Google?

    - by James Gadsby
    As I'm building my business website, I'm using service/price tables at the bottom of each service page to demonstrate my other offerings to customers and potential clients. Of course, given that there are 7 or 8 service pages, each with (according to Google) the same service descriptions below the original content for that service, would this count as duplicate content? If so, what could I do about it?

    Read the article

  • How can I filter a report with duplicate fields in related records?

    - by Graham Jones
    I have a report where I need to filter out records where there is a duplicate contract number within the same station but with a different date. It is not considered a duplicate value because of the different date. I then need to summarize the costs and count the contracts, but even if I suppress the "duplicate" fields, their values are still summarized. I want to select the record with the most current date.

        Station  Trans-DT   Cost  Contract-No
        8        5/11/2010  10    5008
        8        5/12/2010  15    5008
        9        5/11/2010  12    5012
        9        5/15/2010  50    5012

    Read the article

  • How to reduce the number of threads running at any instant in a Jetty server?

    - by Thirst for Excellence
    I would like to reduce the number of live threads on the server, in order to reduce the bandwidth consumed by data transfer from my application to clients (data is pulled at application launch time). I have made the settings below; are these settings enough to reduce the bandwidth consumption on the Jetty server? Please help me, anyone.

    1) In jetty.xml:

        <Set name="ThreadPool">
          <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
            <Set name="minThreads">1</Set>
            <Set name="maxThreads">50</Set>
          </New>
        </Set>

    2) In services-config.xml:

        <channel-definition id="my-longpolling-amf" class="mx.messaging.channels.AMFChannel">
          <endpoint url="http://MyIp:8400/blazeds/messagebroker/amflongpolling"
                    class="flex.messaging.endpoints.AMFEndpoint"/>
          <properties>
            <polling-enabled>true</polling-enabled>
            <polling-interval-seconds>1</polling-interval-seconds>
            <wait-interval-millis>60000</wait-interval-millis>
            <client-wait-interval-millis>1</client-wait-interval-millis>
            <max-waiting-poll-requests>50</max-waiting-poll-requests>
          </properties>
        </channel-definition>

    Read the article

  • How do I reduce the size of the mlocate database?

    - by MountainX
    I'm out of space on /var:

        25G 25G 0 100% /var

    It looks like mlocate.db is the problem:

        # find . -printf '%s %p\n' | sort -nr | head
        13140140032 ./lib/mlocate/mlocate.db.cgLMAM
        12409839616 ./lib/mlocate/mlocate.db.MqGeqe

        # cat /etc/updatedb.conf
        PRUNE_BIND_MOUNTS="yes"
        PRUNENAMES=".git .bzr .hg .svn"
        PRUNEPATHS="/tmp /var/spool /media"
        PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf"

    I don't see anything else to prune. So how can I fix this? Thanks

    Read the article

  • How can I reduce draw calls when using glBufferSubData and DYNAMIC_DRAW?

    - by Kronos
    At first I had a problem where I had about 150 rectangles rendered every tick. I only used STATIC_DRAW and glBufferData. I added support for DYNAMIC_DRAW and glBufferSubData and now I have a very good result... but the number of draw calls (glDrawArrays) is the same. Best practices from the Mozilla Dev website say it should be reduced, but how? Every rectangle has a method render() in which I do the following (shortened):

        _gl.bindBuffer(WebGL.ARRAY_BUFFER, vertexBuffer);
        _gl.enableVertexAttribArray(a_position);
        _gl.vertexAttribPointer(a_position, 2, WebGL.FLOAT, false, 0, 0);
        _gl.bufferSubData(WebGL.ARRAY_BUFFER, 0, vertices);
        _gl.bindBuffer(WebGL.ARRAY_BUFFER, texCoordBuffer);
        _gl.enableVertexAttribArray(a_texCoordLocation);
        _gl.vertexAttribPointer(a_texCoordLocation, 2, WebGL.FLOAT, false, 0, 0);
        _gl.bufferSubData(WebGL.ARRAY_BUFFER, 0, texVertices);
        _gl.uniform2fv(_utranslation, _translation);
        _gl.uniform2fv(_urotation, _rotation);
        _gl.uniform2f(_location, Dart2D.WIDTH, Dart2D.HEIGHT);
        _gl.drawArrays(WebGL.TRIANGLES, 0, 6);

    So every rectangle calls drawArrays in every frame...

    Read the article

  • Can prefixing a dash reduce the search engine rating?

    - by LeoMaheo
    Hi anyone! If I prefix a dash to GUIDs in my URLs on my Web site, in this manner:

        example.com/some/folders/-35x2ne5r579n32/page-name

    will my SEO rating be affected?

    Background: On my site, people can look up pages by GUID and by path. For example, both example.com/forum/-3v32nirn32/eat-animals-without-friends and example.com/forum/eat-animals-without-friends could map to the same page. To indicate that 3v32nirn32 is a GUID and not a page name, I thought I could prefix a - and then my webapp would understand. But I wouldn't want my search engine rating to drop. And prefixing a dash in this manner seems weird, so perhaps Googlebot lowers my rating. Hence my question: do you know if my search engine rating might drop? (Today or in the future?)

    (I could also e.g. prefix id-, so the URL becomes example.com/forum/id-3v32nirn32, but then people cannot create pages that start with the word "id".)

    (I think I don't want URLs like this one: example.com/id/some-guid.)

    Kind regards, Magnus

    Read the article

  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask if there was an implementation already available -- even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter, just that only two are returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]
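
    In case it helps as a starting point, here is a minimal greedy sketch in Python (an illustration, not a known reference implementation): take the first uncovered set pixel, grow its horizontal run, then extend that run downward while the span stays solid. On the plus-shaped example above it returns exactly the two overlapping rectangles.

        # Sketch: decompose a bitmap mask into covering rectangles (overlap
        # allowed). Greedy and not guaranteed minimal in general, but far
        # better than returning 1x1 rectangles per set pixel.

        def mask_to_rects(mask):
            h, w = len(mask), len(mask[0])
            covered = [[False] * w for _ in range(h)]
            rects = []                      # (x, y, w, h)
            for y in range(h):
                for x in range(w):
                    if mask[y][x] and not covered[y][x]:
                        # Grow the horizontal run rightward from (x, y).
                        x0, x1 = x, x
                        while x1 + 1 < w and mask[y][x1 + 1]:
                            x1 += 1
                        # Extend the full span downward while it stays solid.
                        y1 = y
                        while y1 + 1 < h and all(mask[y1 + 1][i] for i in range(x0, x1 + 1)):
                            y1 += 1
                        for yy in range(y, y1 + 1):
                            for xx in range(x0, x1 + 1):
                                covered[yy][xx] = True
                        rects.append((x0, y, x1 - x0 + 1, y1 - y + 1))
            return rects

        # The plus shape from the question; expect [(3, 1, 2, 6), (1, 3, 6, 2)].
        plus = [[0] * 8 for _ in range(8)]
        for yy in range(1, 7):
            plus[yy][3] = plus[yy][4] = 1          # vertical bar
        for xx in range(1, 7):
            plus[3][xx] = plus[4][xx] = 1          # horizontal bar
        print(mask_to_rects(plus))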

    Read the article

  • How can I reduce lagging with GUI/GPU stuff -- make Unity run smaller, quicker, faster?

    - by chris
    Finally installed Ubuntu 12.04 on my HP Pavilion 2000. I have all of my apps on and loaded and am happy thus far. ONE ISSUE -- I'm experiencing a small amount of GUI/GPU-style lagging when I go to open menus, move windows, etc. What settings can I disable to allow it to run sharply and quickly, even if it means sacrificing some of the graphics? I have already installed preload. I just want the OS to run sharply and quickly with menu refreshes, window moves, etc. I do not mind sacrificing graphics. Someone mentioned to me that I have to install video drivers, but the two that come up in System Settings under drivers won't let me install them.

    ALSO: I am driving a second 19" monitor -- would that make a difference performance-wise as well? Thanks in advance. Chris

    Read the article

  • How can I reduce the number of spammers registering with my phpBB site?

    - by Jayapal Chandran
    I have a site which runs phpBB. On this site I have:

    enabled user authentication through email when registering
    enabled captcha

    However, I still get spam users every 20 to 30 minutes. Is there anything I can do to prevent this with the ucp.php file? I have already loaded a large list of IP addresses, yet spam users keep registering all the time. One thing I can do is check the bounced mail to find the username: I can pipe bounced mails to a PHP script and immediately delete that user, but I have not got any bounce-backs from Hotmail or some other email providers. So this will catch a certain percentage of spam users, but there is still a huge amount of spamming. What else can I do to prevent spammers abusing my phpBB site?

    Read the article
