Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.


  • Supporting Large Scale Team Development

    With a large-scale development of a database application, the task of supporting a large number of development and test databases and keeping them up to date with different builds can soon become ridiculously complex and costly. Grant Fritchey demonstrates a novel solution that can reduce the storage requirements enormously and allow individual developers to work on their own version, using a full set of data.


  • Steltix (NL) is live on Oracle Sales Cloud

    - by Richard Lefebvre
    Steltix (NL) uses Oracle Sales Cloud (Oracle Fusion CRM in the Oracle Cloud) to improve the business performance of customers, to reduce costs, and to minimize risks. If you read Dutch, I encourage you to read the press release here!


  • CheckBox Web Control Basics in ASP.NET 3.5

    In this second part of a series on basic ways to gather user input in ASP.NET 3.5, we'll cover checkbox web controls. Checkboxes are an appropriate choice in situations where a radio button won't work -- for example, where you want to let a user select more than one choice among a group of options....


  • Windows 7 Trial for Businesses Gets Extension

    Those still on the fence about trying Windows 7 now have more time to give the new operating system a shot, as Microsoft has just announced plans to extend its 90-day Windows Enterprise Trial program to December 31, 2010....


  • Generating Debt Leads by Using Mobile Marketing

    If you are a debt leads generation company using these methods of generating leads, you should know that the fewer fields the consumer has to fill out, the higher the lead conversion. Every additional field reduces the chance of the user clicking the submit button.


  • The Rise Of CSS In Web Design

    Style sheet languages were introduced to the market because of their capability to reduce the problems usually associated with the use of tables in website design. One of the most popular s... [Author: Margarette Mcbride - Web Design and Development - May 04, 2010]


  • Outsource SEO - A Strong Business Case

    Outsourcing became quite popular in the 1990s as companies raced to reduce costs by moving non-essential functions out of the corporate cost structure. One of the main methods for doing this was to outsource. The basic business case for moving any function to a subcontractor was quite simple. Subcontractors that focus on only one thing have probably developed a deeper technical understanding of the process and are more effective. Economies of scale allow the outsourcer to provide the same (or higher-quality) service at a lower price.


  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB.

        => ctrl all show config detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: 5001438006FD9A50
           Cache Serial Number: PAAVP9VYFB8Y
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev C
           Firmware Version: 3.66
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 3 secs
           Surface Scan Mode: Idle
           Queue Depth: Automatic
           Monitor and Performance Delay: 60 min
           Elevator Sort: Enabled
           Degraded Performance Optimization: Disabled
           Inconsistency Repair Policy: Disabled
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 15 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Enabled
           Total Cache Size: 512 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True

           Array: A
              Interface Type: SAS
              Unused Space: 412476 MB
              Status: OK

              Logical Drive: 1
                 Size: 72.0 GB
                 Fault Tolerance: RAID 1+0
                 Heads: 255
                 Sectors Per Track: 32
                 Cylinders: 18504
                 Strip Size: 256 KB
                 Status: OK
                 Array Accelerator: Enabled
                 Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3
                 Disk Name: /dev/cciss/c0d0
                 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB
                 OS Status: LOCKED
                 Logical Drive Label: AE438D6A5001438006FD9A50BE0A
                 Mirror Group 0:
                    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
                    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
                 Mirror Group 1:
                    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
                    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

           SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
              Device Number: 250
              Firmware Version: RevC
              WWID: 5001438006FD9A5F
              Vendor ID: PMCSIERA
              Model: SRC 8x6G
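
    For reference, a minimal sketch of the hpacucli command shape involved. The size value is illustrative (hpacucli takes megabytes), and whether the controller accepts a reduction at all is firmware-dependent: extension is the documented direction, while shrinking is exactly the open question here. Back up before experimenting.

        # Hypothetical shrink attempt -- the controller may simply refuse:
        hpacucli ctrl slot=0 logicaldrive 1 modify size=40960   # ~40 GB, in MB

        # Extension (the direction known to work) uses the same verb:
        hpacucli ctrl slot=0 logicaldrive 1 modify size=max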


  • Tuning Windows 7 for use in a VM

    - by intuited
    I'm running Windows 7 in a VirtualBox Virtual Machine, and would like to make it run in a more streamlined fashion. I'll be using the install primarily for testing web apps, and have no need for it to run quickly. I would like it to run with minimal memory requirements, and with minimal changes to its virtual hard drive's contents. Changes to the hard drive contents, for example the paging file, result in larger snapshot sizes. Another recent post of mine seems to be related to this issue, but does not directly address issues with Windows. One concern that I have is that Windows seems to be using 17% of its paging file even with over 900MB of memory marked "Standby" or "Free". My uneducated guess is that this is being used to store indexes or some other data that helps to speed up the system but is not really necessary. I'm also wondering if it's normal for Windows to use over 500 MB of "In Use" memory with no apps running. Will this amount decrease if I reduce the amount of "installed" memory in the VM? What steps can I take to reduce the system's memory footprint without incurring an increase in paging file usage?
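
    The host-side half of the memory question is straightforward to test; a minimal sketch, assuming a VM named "Win7" (a placeholder) that is powered off. The flags are standard VBoxManage options; the values are illustrative.

        # Shrink the "installed" RAM that Windows sees (in MB):
        VBoxManage modifyvm "Win7" --memory 1024

        # Trim video memory if Aero/3D acceleration isn't needed:
        VBoxManage modifyvm "Win7" --vram 16

    Whether the "In Use" figure drops proportionally is the poster's open question; moving or disabling the paging file is a setting inside the guest, not a VirtualBox one.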


  • Debugging "clogged" TCP connections

    - by Nikratio
    I'm having trouble with an internet connection that seems to randomly "freeze" arbitrary TCP connections. The connections stay established, but no data is coming through. When this happens, netstat still shows the connection status as ESTABLISHED on both the local computer:

        Proto Recv-Q Send-Q Local Address        Foreign Address       State        PID/Program name  Timer
        tcp        0     53 192.168.0.10:41129   173.255.235.238:143   ESTABLISHED  8219/gnutls-cli    on (79.31/13/0)

    ..and the remote server:

        Proto Recv-Q Send-Q Local Address        Foreign Address       State        PID/Program name  Timer
        tcp        0      0 173.255.235.238:143  68.5.174.98:41129     ESTABLISHED  5303/imapd         off (0.00/0/0)

    However, it seems that no data at all is transferred. If I run strace on the local and remote process, both just show a repeating sequence of select calls (with different fds, of course), e.g.

        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)

    The internet connection overall does not seem affected; I can still establish new connections to the same service on the same server without any problems. However, the affected local applications seem to be unaware of the problem and just hang. When I look at a packet capture of this connection on the client side, the last thing that happens is that the client transmits some data, then nothing happens for about 1100 seconds, and then several TCP retransmission requests go out, with intervals increasing from 4 seconds to 130 seconds. No activity is captured after that. After about 10 minutes, the connection on the remote end disappears from the netstat output (I wasn't able to catch any intermediate state), but it still stays ESTABLISHED on the local end. Finally, after some more minutes, the local application aborts with a timeout and disappears from the local netstat output as well. Does anyone have a suggestion of how I could debug this further to find out where the problem lies and how to fix it? Additionally, and/or as a temporary workaround: is there some way to globally reduce the timeout on client and/or server to reduce the time before the local application aborts?
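
    For the workaround half of the question, a sketch of the Linux-side knobs involved, with illustrative values. These settings are global, so they affect every TCP connection on the box.

        # Give up on un-ACKed data after fewer retransmissions
        # (default 15, roughly 13-30 minutes of backoff; 8 is on the
        # order of 100 seconds):
        sysctl -w net.ipv4.tcp_retries2=8

        # Probe idle connections so dead ones are detected sooner --
        # keepalives only apply to sockets that set SO_KEEPALIVE:
        sysctl -w net.ipv4.tcp_keepalive_time=300
        sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sysctl -w net.ipv4.tcp_keepalive_probes=5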


  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including application, administration and system tabs) that is taking too much time (for me) to load: about 2.5 seconds. Of course, this time is taken only on first start; after it has loaded, subsequent opens take less than 0.2 seconds. The menu was taking even longer before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all). I have a "normal" knowledge of programming, and I think that the process of starting the menu for the first time has some kind of "cache function" that tries to find which installed apps need to be placed in the menu and shown to the user. But I haven't found this function, so I can't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting Alt-F2 also fires this "cache function", because after waiting for it to load, opening the menu took less than 0.2 seconds. So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but most of my icons are already at 25x25 size. Any other ideas? Maybe loading it in a separate process, or including it in startup... I don't know. Ps: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!


  • Backing up large network (~200 clients) -- Enough Bandwidth?

    - by mtkoan
    My company wants to institute a backup plan for all of the clients on our network, which is about 200. We back up our servers and SQL databases regularly, but it's been our policy not to back up individuals. What is most critical for people is their Documents and PST files in Outlook. PST files can be very large, and most people's are ~1-1.5 GB around here. So with PST files alone, that is 200-300 GB of data needing to be transferred daily to a server for backup. Or compressing first, then transferring -- but many of the machines are VERY old and such a task would grind their computer to a halt. Isn't this the reason networks use things like VMware -- to reduce network traffic and streamline backups? Or is this only to reduce hardware costs? Would this much network traffic every day drastically slow down our network? Enough to the point we'd have to mandate it to be done at night only? Or could we stagger them throughout the day? Really appreciate any input, thank you.
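
    As a rough sanity check on the bandwidth question, a back-of-the-envelope sketch taking ~250 GB/day as the midpoint of the 200-300 GB estimate (protocol overhead and compression ignored):

        # 250 GB/day = 250 * 8 * 1000 Mbit = 2,000,000 Mbit
        echo "spread over 24 h:  $((250 * 8 * 1000 / 86400)) Mbit/s"   # ~23
        echo "8 h night window: $((250 * 8 * 1000 / 28800)) Mbit/s"   # ~69

    On that estimate, an overnight-only window would come close to saturating a 100 Mbit network but is comfortable on gigabit; staggering clients through the day brings the sustained rate down toward the ~23 Mbit/s figure.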


  • Looking for issue tracker software for residential property management

    - by Rob
    This question is about a computer-software application (as per SU guidelines) for centrally tracking issues concerning the management of a residential block of flats (apartments, as they say in the US and France). Issues are incidents -- and the resultant unplanned maintenance to address them -- plus planned one-off maintenance and regular planned routine maintenance. I live in a block of flats and, along with other residents, am looking to more closely watch over issues with the communal, shared areas of the premises (corridors, courtyards, stairs, lifts, lights, trash/bin shed, bike stands, parking areas etc.) and their maintenance, currently done by a property management company. Our own homes are our own affair internally; it's the outside communal areas that I am interested in. The aim is to control costs and possibly reduce them, by proactively managing the property using historical data to predict issues and also by scrutinising maintenance charges against such data to ensure that the costs are as expected. Trending could also be established, whereby recurrences of things can be detected and pre-empted to reduce costs. As a software professional, I'm aware of Bugzilla and Eventum, free tools for software issue tracking which could be customised to fit this application, but I wondered if there was something more appropriate. It might be useful for such software to be on a web server, with secure access, so that residents can log in and view the issues.


  • Relayout LVM Disk

    - by Tom
    I have an Ubuntu 11.10 system with two 500GB disks. The partition tables look like this: /dev/sda1 primary 465.52GB /dev/sda2 extended 243.17MB -> /dev/sda5 logical 243.14MB /dev/sdb1 primary 465.76GB sda1 and sdb1 are in a single LVM physical volume group containing a single logical volume containing a single logical filesystem which is mounted as /. sda5 is mounted as /boot. The problem comes when I want to upgrade to Ubuntu 12.04, which requires at least 247MB free on /boot. So I need to reduce the size of sda1 so that I can increase the size of sda2 and sda5. How the heck do I do that? I can find how to shrink the logical volume group, but I'm not at all clear on how to clear out the end part of sda1 so that I can reduce the physical volume group. Does pvresize just deal with this automagically? Or is that wild wishful thinking? I guess the alternatives are to back everything up onto something or other and recreate the thing from scratch or find out whether GRUB2 supports using LVM for /boot.
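
    A minimal sketch of the usual sequence, assuming an ext3/ext4 root and that everything runs from a live CD (/ cannot be shrunk while mounted). Volume-group and LV names are placeholders, and the extent range in step 2 is invented -- `pvs -v --segments` shows the real layout. Back up first.

        # 1. Shrink the filesystem and the logical volume together:
        lvresize --resizefs -L -1G /dev/<vg>/<root-lv>

        # 2. If allocated extents sit at the tail of sda1, relocate them
        #    (extent numbers here are placeholders):
        pvmove --alloc anywhere /dev/sda1:119000-119236

        # 3. Now the physical volume itself can be shrunk:
        pvresize --setphysicalvolumesize 464G /dev/sda1

        # 4. Finally shrink the sda1 partition and grow sda2/sda5 with
        #    parted/fdisk, keeping the partition at least as large as
        #    the resized PV.

    So pvresize does handle things "automagically" only in the sense that it refuses to shrink past allocated extents rather than corrupting them; it will not relocate extents for you -- that is pvmove's job.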


  • Exchange 2010 SP2 database size

    - by Chad
    I have a single Exchange 2010 SP2 environment with 3 DB stores. I am trying to reduce the sizes by moving the mailboxes to a spare DB and then deleting the empty database. I cleaned up the users' mailboxes to reduce the sizes, set the retention periods to 1 day each, and waited several days before moving mailboxes. The databases are backing up fine and clearing log files, but when I move the mailboxes I noticed they were taking a long time, even though some were less than 100MB. When I checked the new database size, it seems like the original mailbox size might be moving (1GB instead of 100MB). Exchange is showing the expected smaller mailbox sizes when I run get-mailbox statistics against the DB. So if I have 5 mailboxes of 100MB each, the database shows something like 3GB instead of around 500MB, and no whitespace. I keep waiting, thinking maybe the retention period has not expired yet, but it has been much longer than 1 day already. I am setting them both to 0 today to see if that works. What am I missing to get the combined mailbox sizes to match the DB size minus whitespace?


  • Connections to IIS sometimes get stuck in CLOSE_WAIT state

    - by randomhuman
    Our application includes an ASP.Net web service that only needs to deal with a handful of clients. As such, the 10 incoming connection limit of Windows XP Pro is generally not a problem. However, on one particular server, connections are occasionally becoming stuck in the CLOSE_WAIT state. These connections build up over time, and eventually new client connections are refused because the maximum number of connections is used up. From my googling, it sounds like a failure of the web service to properly close the connection can cause this problem, but as it works just fine on hundreds of other Windows XP Pro machines, I can't see it being a bug in our code. It also ran fine on the affected machine until some shenanigans on the part of the end user (I think they set about deleting duplicate files in order to reduce their disk usage, but they did not exactly come clean about it). What could the user have changed to introduce this problem? Is there any way I can force connections that are in CLOSE_WAIT to time out rather than letting them hang around? I have seen suggestions to reduce TcpTimedWaitDelay, but that only relates to the TIME_WAIT state, and changing it did not have any effect.


  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold: to reduce storage utilization, and to reduce the bandwidth needed to sync the library with cloud storage. Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports 1.00 dedup. If I copy all of the files (make exact duplicates of the three), dedup climbs, so I know that it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; but perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job, is appreciated.
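
    For what it's worth, ZFS can estimate dedup potential without the copy-and-compare experiment; a short sketch (pool and dataset names are placeholders):

        # Simulate dedup on existing data: prints a histogram of
        # duplicate blocks and an estimated ratio, without enabling it.
        zdb -S tank

        # The real thing only applies to data written after the switch:
        zfs set dedup=on tank/flac
        zpool list tank    # the DEDUP column shows the achieved ratio

    The poster's alignment hunch is the usual explanation: ZFS dedup matches whole records, so two FLAC files whose audio frames are offset by even a few bytes of differing header will share no identical blocks.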


  • Article search engine in php

    - by Jason
    Hello, I am using Sphinx as a search engine on my website. It's working perfectly and I have no complaints about it. The only thing it lacks is that it does not allow me to search with queries longer than 15 words. I know in reality people don't use more than 3-4 words, but I want to use it for finding duplicate content. I was wondering if there is any alternative solution to Sphinx for coping with duplicate content. My main articles table is InnoDB, but I am also caching articles into a MyISAM table for full-text searching; however, when I search an article it takes ages to perform one search. It's not a query problem; I think MySQL's full-text searching facility is the bottleneck. Thanks, Jason
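
    For reference, the MyISAM full-text route the poster is caching for looks roughly like this. Table and column names are invented for illustration, and note that natural-language mode ignores words appearing in more than 50% of rows, which matters when hunting duplicates:

        # One-time: add a full-text index to the MyISAM cache table.
        mysql articles -e "ALTER TABLE article_cache ADD FULLTEXT (body);"

        # Rank existing rows against a chunk of the suspect article:
        mysql articles -e "
            SELECT id,
                   MATCH (body) AGAINST ('opening sentences of the new article') AS score
            FROM article_cache
            WHERE MATCH (body) AGAINST ('opening sentences of the new article')
            ORDER BY score DESC
            LIMIT 10;"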


  • DNN redirect Loop

    - by JAllen
    I am trying to duplicate an existing DNN portal for testing purposes by creating a copy of the database and duplicating the .NET files into a new folder. After I copied the site, changed the web.config to point to the new site, and changed the alias in the database, I am getting this error: "This webpage has a redirect loop. The webpage at http://xxx.us/xxx/default.aspx has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer."


  • Firebird Insert Distinct Data Using ZeosLib and Delphi

    - by Brad
    I'm using Zeos 7 and Delphi 2009 and want to check whether a value is already in the database under a specific field before I post the data to the database. Example: field KEYWORD has the values Cheese, Mouse, Trap, and tblkeywordKEYWORD.Value = Cheese. What is wrong with the following? And is there a better way?

        zQueryKeyword.SQL.Add('IF NOT EXISTS(Select KEYWORD from KEYWORDLIST =''' +
          tblkeywordKEYWORD.Value + ''')INSERT into KEYWORDLIST(KEYWORD) VALUES (''' +
          tblkeywordKEYWORD.Value + '''))');
        zQueryKeyword.ExecSql;

    I tried using the unique constraint in IBExpert, but it gives the following error:

        Invalid insert or update value(s): object columns are constrained -
        no 2 table rows can have duplicate column values.
        attempt to store duplicate value (visible to active transactions)
        in unique index "UNQ1_KEYWORDLIST".

    Thanks for any help. -Brad


  • Why touching "d_name" makes calls to readdir() fail?

    - by Sarah Mani
    Hi, I'm trying to write a little helper for Windows which eventually will accept a file extension as an argument and return the number of files of that kind in the current directory. To do so, I'm reading the file entries in the directories, and after getting the extension I'd like to convert it to lowercase to compare it with the yet-to-be-added specified argument. When converting the extension to lowercase, I found that touching even a duplicate string of the d_name variable causes strange behaviour: it is as if no more calls to readdir() were made. Here is the code I'm using right now (the commented code is preliminary), with outputs for a given directory:

        #include <ctype.h>
        #include <dirent.h>
        #include <stdio.h>
        #include <string.h>

        char * strrch(char *string, size_t elements, char character)
        {
            char *reverse = string + elements;

            while (--reverse != string)
                if (*reverse == character)
                    return reverse;

            return NULL;
        }

        void test(char *string)
        {
            // Even being a duplicate will make it fail:
            char *str = strdup(string);
            printf("Strings: %s %s\n", string, str);
            *str = 'a';
            printf("Strings: %s %s\n", string, str);

            //unsigned short int i = 0;
            //for (; str[i] != '\0', str++; i++)
            //    str[i] = tolower((unsigned char) str[i]);
            //puts(str);
        }

        int main(int argc, char **argv)
        {
            DIR *directory;
            struct dirent *element;

            if (directory = opendir(".")) {
                while (element = readdir(directory))
                    test(strrch(element->d_name, element->d_namlen, '.'));

                closedir(directory);
                puts(NULL);
            } else
                puts("Couldn't open the directory.\n");
        }

    Output without modifying the duplicate (the assignment and the second printf call commented out):

        Strings: (null) (null)
        Strings: . .
        Strings: .exe .exe
        Strings: .pdf .pdf
        Strings: .c .c
        Strings: .ini .ini
        Strings: .pdf .pdf
        Strings: .pdf .pdf
        Strings: .pdf .pdf
        Strings: .flac .flac
        Strings: .FLAC .FLAC
        Strings: .lnk .lnk
        Strings: .URL .URL

    Output of the same directory (with the code above, with the 2 printfs):

        Strings: (null) (null)

    Is there anything wrong? Is it a compiler issue? I'm using GCC 4.4.3 in Windows (MinGW) right now. Thank you very much for your help. By the way, is there any other way to work with files and directories in a Windows environment not using the POSIX functions?


  • Windows xp printer queue issues

    - by BubbaWI
    I have a C# application (.NET 3.5 SP1) that prints serial numbers for products at a manufacturing company. I am using ActiveReports 3 (ver. 5.2.385.2) to print these labels. However, when the printer (Zebra TLP 3842) has an issue, it seems like the printer queue reprints the print job. This causes duplicate serial numbers to be printed. Does anyone know how to ensure that this will not happen? I would rather have a serial number fail to print than have duplicate serial numbers.

