Search Results

Search found 4167 results on 167 pages for 'rman 11gr2 duplicate'.


  • Git: Find duplicate blobs (files) in this tree

    - by Readonly
    This is sort of a follow-up to this question. If there are multiple blobs with the same contents, they are only stored once in the git repository because their SHA-1s will be identical. How would one go about finding all duplicate files for a given tree? Would you have to walk the tree and look for duplicate hashes, or does git provide backlinks from each blob to all files in a tree that reference it? (A sketch of the tree-walking approach follows below.)

    Read the article
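
    Git keeps no blob-to-path backlinks, so walking the tree is the way to go. A minimal Python sketch, assuming the git CLI is on PATH and the script runs inside the repository (HEAD is the tree being scanned):

        import subprocess
        from collections import defaultdict

        # List every object in HEAD recursively: "<mode> <type> <sha>\t<path>"
        out = subprocess.check_output(["git", "ls-tree", "-r", "HEAD"], text=True)

        paths_by_sha = defaultdict(list)
        for line in out.splitlines():
            meta, path = line.split("\t", 1)
            mode, obj_type, sha = meta.split()
            if obj_type == "blob":
                paths_by_sha[sha].append(path)

        # Any SHA-1 mapped to more than one path is a set of duplicate files.
        for sha, paths in paths_by_sha.items():
            if len(paths) > 1:
                print(sha, *paths)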

  • Duplicate IP address detection with multiple NICs

    - by sfink
    I am using arping -D to detect duplicate IP addresses within a network when setting up servers. (The network is controlled by someone else, and we have had many issues with IP allocation in the past.) It works fine as long as my host has a single NIC on a given VLAN, but when my host has more than one (I have one with 9 NICs on one VLAN and 1 on the other), arping -D always returns false collisions. The problem is that all 9 of my NICs respond to an ARP request for any of the IPs on those NICs. (These are real physical NICs, not aliases or anything.) I send out one ARP request packet and get 9 is-at ARP replies, one for each MAC address. I could implement my own solution by sniffing packets and checking for any replies with a MAC address other than the local NICs', but it seems like there ought to be an easier way. (A sketch of that fallback follows below.)

    Read the article
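
    A sketch of the asker's own fallback idea in Python, assuming the third-party scapy package and root privileges: probe for an IP and discard is-at replies whose source MAC belongs to one of the local NICs.

        from scapy.all import ARP, Ether, srp, get_if_hwaddr, get_if_list

        def foreign_claimants(ip, iface, timeout=2):
            # MACs of all local interfaces; replies from these are self-replies.
            local_macs = {get_if_hwaddr(i).lower() for i in get_if_list()}
            answered, _ = srp(
                Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                iface=iface, timeout=timeout, verbose=False,
            )
            # A reply from a non-local MAC means another host claims the IP.
            return [r.hwsrc for _, r in answered if r.hwsrc.lower() not in local_macs]

        print(foreign_claimants("10.0.0.5", "eth0"))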

  • I need to duplicate software packages installed on one HPUX system to a test system

    - by oneodd1
    I am very inexperienced with HP-UX and need to duplicate a production server for a test environment. I have 11.11 on the prod server and have completed the base install on the test server. What I need is a way to add the installed packages from the prod server to the test server; unfortunately I haven't the foggiest idea how to do this. I've thought of using an Ignite-UX backup and restore, but don't have matching tape types between the two. The other thought was using swlist to gather the installed packages and then going to the website to download and install them on the test environment. Has anyone successfully done something like this? Pointers? (A sketch of the swlist route follows below.)

    Read the article
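
    A rough Python sketch of the swlist comparison, purely as a starting point: the swlist flag and the first-field parse are assumptions to verify against swlist(1M), and ssh/remsh access to the prod box is presumed.

        import subprocess

        def installed_products(host=None):
            cmd = ["swlist", "-l", "product"]      # assumed flag: list at product level
            if host:
                cmd = ["ssh", host] + cmd           # assumes ssh/remsh access to the host
            out = subprocess.check_output(cmd, text=True)
            products = set()
            for line in out.splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    products.add(line.split()[0])   # assumed: first field is the product tag
            return products

        # Products on prod that the freshly installed test box is missing:
        missing = installed_products("prodserver") - installed_products()
        print("\n".join(sorted(missing)))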

  • SBS 2008 Duplicate Files - what can I delete

    - by Stu
    I'm trying to look after my company's SBS 2008 server. It keeps running out of space on the C: (OS) drive, so Exchange stops when it gets down to 4 GB of free space. From what I can tell after the last time I had a consultant in to fix it, there should comfortably be about 20 GB free. Running a duplicate files report gives me a huge list of duplicated files and identifies just how much space they are hogging. But my question is, which ones can I delete to recover the space? Can I tell, from what's shown on the report, which files are redundant and can safely be deleted?

    Read the article

  • puppet duplicate resources and virtual resources

    - by user45097
    Overview: Hi, just started using Puppet and have been unable to suss something out.

    Problem: Because of normalization, when I add 2 classes to a node with packages that have the same dependencies, it fails. In simple terms I have duplicate resources - in this case the package libssl. Note: packages are being held to prevent the latest packages being installed.

    Question: What's the best-practice way to get round this?

        class ssh {
          package { 'openssh-server':
            ensure  => installed,
            require => Package['libssl'],
          }
          package { 'libssl':
            ensure => installed,
          }
        }

        class apache {
          package { 'apache':
            ensure  => installed,
            require => Package['libssl'],
          }
          package { 'libssl':
            ensure => installed,
          }
        }

        node server {
          include ssh
          include apache
        }

    Read the article

  • Finder Sidebar Icons - How do I duplicate?

    - by Wilco
    I've noticed that some system directories, when dragged to the Finder's sidebar, utilize special small-scale icons not visible in any other place. Even when looking at one of these folders in a Finder window using the smallest possible icon size, these "special" icons don't appear (so it's not just the small version of the folder's icon). So my question is, where is this information stored? If I wanted to duplicate this behavior for an arbitrary folder, where would I need to look? I like to replace my home directory with a symlink to a location on another partition, but when I do this, I lose this sidebar icon behavior. I would love to get this back if I can.

    Read the article

  • Unable to delete duplicate pagefile.sys

    - by user128364
    I have one SQL server that contains two drives, both on SAN. The C drive contains the OS and the V drive contains pagefile.sys. I rebooted my server and both drives were unmapped. Somehow one drive (the OS drive) came back and Windows created a pagefile.sys on the C drive; I then rebooted the server and remapped the V drive. Now I have a pagefile.sys on both the C and V drives, but under the advanced settings only the V drive is checked as the pagefile location. How do I delete the duplicate pagefile on the C drive? When I try to delete it, it tells me "the program is being used". (A read-only inspection sketch follows below.)

    Read the article
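
    Before deleting anything, it can help to see which pagefiles Windows is actually configured to open at boot. A read-only Python sketch (the key and value names are the standard Memory Management settings; run it on the affected server):

        import winreg

        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
        )
        paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")
        print(paging_files)   # e.g. ['v:\\pagefile.sys 4096 8192']

    If C: is absent from that list, the file on C: is a stale leftover; Windows keeps a handle open on any pagefile it mounted at boot, so it normally only becomes deletable after the next reboot.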

  • Best Way to Archive Digital Photos and Avoid Duplicate File Names

    - by user31575
    This problem pertains to archiving digital pictures taken with multiple cameras. Answers here covered the general topic of the mechanics of backups: How do you archive digital photos and videos? I however face another problem. Having multiple cameras (Canon) and multiple SD cards (mixed and matched at random), I have found that different SD cards have different photos with the same file name, i.e. two different photos each named IMG_3141.JPG. Additionally, for better or worse, I've backed up the files to multiple places and need to consolidate my backups. I want to eliminate duplicates, but not clobber files. The only way I can think of is to append a hash (md5 or sha1) to the file name, i.e. IMG_3141.JPG becomes IMG_3141_KT229QZ31415926ASDF.JPG, and then sort them out. Any better ways? (This open letter addresses the 'duplicate file name' concern: http://photofocus.com/2010/09/13/an-open-letter-to-digital-camera-manufacturers-regarding-camera-file-naming/ ) (A sketch of the hash-suffix approach follows below.)

    Read the article
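
    A minimal Python sketch of exactly that idea, under assumptions: the paths are placeholders, only *.JPG is scanned, byte-identical files are skipped as true duplicates, and name clashes with different content get a short hash suffix rather than clobbering anything.

        import hashlib
        import shutil
        from pathlib import Path

        def sha1_of(path, chunk=1 << 20):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        def archive(src_dirs, dest):
            dest = Path(dest)
            dest.mkdir(parents=True, exist_ok=True)
            seen = set()                            # content hashes already archived
            for src in src_dirs:
                for p in Path(src).rglob("*.JPG"):
                    digest = sha1_of(p)
                    if digest in seen:
                        continue                    # true duplicate: skip, never clobber
                    target = dest / p.name
                    if target.exists():             # same name, different content
                        target = dest / f"{p.stem}_{digest[:8]}{p.suffix}"
                    shutil.copy2(p, target)
                    seen.add(digest)

        archive(["/backups/cardA", "/backups/cardB"], "/archive/photos")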

  • Best way to duplicate databases nightly?

    - by Margaret
    Hey all. We just got two new servers running Windows Server 2008. The intent is to make the machines pretty much identical, copying the content of the master to the slave on a nightly basis, so that if anything fails the second copy can stand in immediately. It doesn't need to be up-to-the-minute mirroring, though I suppose that wouldn't hurt if performance is not affected. The two machines will, amongst other things, each be running an instance of SQL Server 2008. The aim is to duplicate the databases on the master down to the slave on a nightly basis. Unless I'm misunderstanding, the slave databases in a mirrored setup require the primary to be present to work correctly; I'm hoping for some solution where we have a second machine that can be up and running with minimal downtime if the first one falls over. Am I misunderstanding mirroring? Is that the best way to do things, or should I use some other mechanism? If so, what? (A backup/restore sketch follows below.)

    Read the article
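
    One non-mirroring option that matches the nightly requirement is plain backup-and-restore. A hedged Python sketch (server names, database name, and the share path are placeholders; schedule it with Task Scheduler or a SQL Agent job):

        import subprocess

        BACKUP = (r"BACKUP DATABASE [AppDb] "
                  r"TO DISK = N'\\slave\backups\AppDb.bak' WITH INIT")
        RESTORE = (r"RESTORE DATABASE [AppDb] "
                   r"FROM DISK = N'\\slave\backups\AppDb.bak' WITH REPLACE")

        # Back up on the master, then restore over the slave's copy.
        subprocess.run(["sqlcmd", "-S", "master-host", "-Q", BACKUP], check=True)
        subprocess.run(["sqlcmd", "-S", "slave-host", "-Q", RESTORE], check=True)

    Log shipping is SQL Server's built-in wrapper around the same idea, and unlike mirroring it does not need the principal to be reachable for the standby copy to be brought online.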

  • duplicate video stream using a router (?)

    - by Dani
    I have a video stream coming from one site, and I want to do the following with the minimum delay possible: duplicate the stream (inside the router, preferably) and send it to a few more locations; one of them is on the local network and the rest are on other networks. I want to be able to do this for several streams simultaneously. Is it possible to do this using network devices only? What device is capable of doing this? (I can always record the stream and rebroadcast it, but that adds a lot of delay; I'm looking for functionality similar to port duplicating, but at higher layers.) Thanks.

    Read the article

  • Duplicate name exists solution

    - by user978733
    I have about 70 PCs with exactly the same hardware. I decided to automate turning them on and off. I took 1 PC; here is what I've done:

    1. Changed the BIOS configuration so that the PCs wake when I turn on the AC switch
    2. Installed Windows XP and configured it so that I can turn it off remotely; changed the workgroup name to "WG1" and the PC name to "ExamPC"
    3. Then created an Acronis backup image of this PC

    I installed this image on several PCs and tried to test. All worked well until Windows loaded. The problem is, all the tested PCs started Windows at nearly the same time, and all of them popped up the error "Duplicate name exists". I can't figure out any solution. Any suggestions?

    Read the article

  • Fixing copy/paste for Remote Desktop Connection sessions [duplicate]

    - by netadictos
    Possible Duplicate: Can't copy and paste in Remote Desktop Connection session. Recently I have been working with Remote Desktop Connection. I use it to access a virtual machine implemented through Hyper-V. I have had many problems with the simple operation of cut-and-paste from my machine to the virtual one. The link between my clipboard and the remote clipboard is often broken. This usually happens when I copy/cut in the remote machine, then copy on my computer, and then paste in the remote machine. How do I fix this? (A restart sketch follows below.)

    Read the article
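
    The commonly cited fix, offered as an assumption rather than from the source: the remote session's clipboard is brokered by rdpclip.exe, and restarting that process inside the remote session usually re-establishes the link. A Python sketch to run on the remote machine:

        import subprocess

        # Kill the clipboard broker for the RDP session, then start a fresh one.
        subprocess.run(["taskkill", "/IM", "rdpclip.exe", "/F"], check=False)
        subprocess.Popen(["rdpclip.exe"])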

  • Dropping duplicate|redundant Unique Constraint from FILESTREAM table

    - by electricsk8
    I have a table with a FILESTREAM column, and it has two unique constraints specified for the same FILESTREAM column, i.e.:

        ALTER TABLE [dbo].[TableName]
          ADD CONSTRAINT [UQ_TableName_33C4988760FC61CA] UNIQUE NONCLUSTERED ([GUID_Column]);
        GO
        ALTER TABLE [dbo].[TableName]
          ADD CONSTRAINT [UQ_TableName_33C49887145C0A3F] UNIQUE NONCLUSTERED ([GUID_Column]);
        GO

    I'd like to drop one of the unique constraints, as they are duplicates. However, when I try to drop one of the two duplicate constraints, I receive the following error: "A table with FILESTREAM column(s) must have a non-NULL unique ROWGUID column." Anyone know how to remove one of the two constraints?

    Read the article

  • Duplicate Name Exists error from virtual PC with NAT network configuration

    - by Phred
    Whenever I change the network configuration to NAT for a virtual machine running under Virtual PC 2007, I get an "A duplicate name exists on the network." error. There is no other machine on my network with this name, and running the VM in any other network configuration doesn't cause this error. This seems to be a common problem with Virtual PC 2007 based on a Google search, but no-one seems to have a solution to it. So far, I've discovered that turning off NetBIOS over TCP/IP makes the problem go away, but I need to join this VM to a domain, and you can't do that if NetBIOS over TCP/IP is turned off.

    Read the article

  • Can I set Windows default second-monitor behaviour to "Extend these displays"?

    - by MT_Head
    I travel to multiple offices (and multiple desks in those offices), and whenever possible I plug an external monitor into my laptop. Whenever I plug in a monitor I haven't used before, Windows defaults to "Duplicate these displays" - which messes up the arrangement of icons on my desktop if the external monitor is a different shape from my laptop's monitor. I then select "Extend these displays", and my laptop screen returns to its original shape - but my icons don't go back to their original arrangement. Grrrrr. Fast-forward a few days or weeks; I've got my icons arranged so I can find stuff again - then I go to a new office and it starts all over again. I'm tired of this. Is it possible to make "Extend these displays" the default behaviour? I'm using Windows 8 x64 Home Premium, but I had the same complaint under Windows 7 x64 Ultimate. (Prior to that, I hadn't discovered the joy of dual displays. Ah, the time I wasted...) (A workaround sketch follows below.)

    Read the article
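
    There may be no supported way to change the default itself, but as a workaround (assuming Windows 7/8, which ship the DisplaySwitch.exe utility), forcing extend mode at logon approximates it. A Python sketch for a logon script or scheduled task:

        import subprocess

        # DisplaySwitch.exe accepts /clone, /extend, /internal and /external.
        subprocess.run(["DisplaySwitch.exe", "/extend"])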

  • De-duplicate Firefox bookmarks

    - by Zoredache
    What methods exist to de-duplicate Firefox bookmarks? As I search Google I find that there previously was a plugin called CheckPlaces, but it no longer seems to exist. Another popular suggestion seems to be AM-DeadLink, which I tried, but it completely trashed my bookmarks. (Fortunately I had a backup first, and yes, I had closed Firefox first as instructed.) I was trying to move all my youtube.com bookmarks into a folder. I tried doing a search and then dragging the bookmarks into the folder. Apparently this creates a copy instead of moving them as I expected, so now I have 3 of everything, since I had tried a couple of times. (A read-only inspection sketch follows below.)

    Read the article
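
    For finding (not yet deleting) the duplicates: bookmarks live in places.sqlite in the Firefox profile, in the long-standing moz_bookmarks/moz_places tables. A read-only Python sketch, run against a copy of the file with Firefox closed:

        import sqlite3

        conn = sqlite3.connect("places.sqlite")     # copy from the Firefox profile dir
        rows = conn.execute("""
            SELECT p.url, COUNT(*) AS n
            FROM moz_bookmarks b
            JOIN moz_places p ON p.id = b.fk
            WHERE b.type = 1                        -- 1 = bookmark, 2 = folder
            GROUP BY p.url
            HAVING n > 1
            ORDER BY n DESC
        """).fetchall()
        for url, n in rows:
            print(n, url)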

  • A Duplicate name exists on the network

    - by Adam
    Recently we changed our office IT structure from having a dedicated server as the DC, a dedicated server for Exchange, etc. (each running Windows Server 2003 R2). Now we have a single server running Windows SBS 2008 and a new domain (with a different domain name). We then changed every PC so it connected to the new domain and renamed every PC with a new naming structure. After I had done this, several PCs would show the following message just before the login screen (the Alt+Ctrl+Del screen): "A duplicate name exists on the network." I have checked ADUC and removed the troublesome PCs from the list, renamed each PC, and changed the SID before connecting back to the domain, but I am still getting this message. I have tried everything that I can think of but am still getting the problem. Any help would be greatly appreciated.

    Read the article

  • Find RARs with duplicate content

    - by Scott McClenning
    I need a utility to find RAR files that contain duplicate data (i.e. files within the RAR that hash the same but could have different names). I can open the RARs and see the CRCs are the same, but I was hoping for a more automated process that would work in bulk (hundreds of files). Hashing the overall RAR won't help, because the files contained within could have different names, or the archive could be compressed at a different level. If needed, a utility that would extract the contents of the RARs and then compare them would work, but is not preferred. I would prefer a free utility for Windows, but a pay utility or a utility for Linux would be acceptable. (A fingerprinting sketch follows below.)

    Read the article
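
    Not a packaged utility, but the bulk logic is small: fingerprint each archive by the multiset of (CRC32, uncompressed size) pairs of its members, which ignores member names and compression level, then group identical fingerprints. A Python sketch assuming the third-party rarfile module (pip install rarfile; it needs an unrar backend):

        from collections import defaultdict
        from pathlib import Path
        import rarfile

        def fingerprint(path):
            with rarfile.RarFile(path) as rf:
                # CRC32 is weak, so treat matches as candidates, not proof.
                return tuple(sorted(
                    (info.CRC, info.file_size)
                    for info in rf.infolist() if not info.isdir()
                ))

        groups = defaultdict(list)
        for rar in Path(".").rglob("*.rar"):
            groups[fingerprint(rar)].append(rar)

        for paths in groups.values():
            if len(paths) > 1:
                print("Same contents:", *paths)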

  • Steps to Investigate Cause of Web.Config Duplicate Section

    - by pauly
    Symptoms: In an IIS .NET 2.0 Integrated app pool, double-clicking to view any web.config section results in the following error dialog: "There was an error while performing this operation.... Filename... web.config... Error: There is a duplicate..." Browsing to the URL displays: "HTTP 500.19 internal server error... There is a duplicate 'system.web.extensions/scripting/scriptResourceHandler' section defined...." Running the app from VS 2008, an "Unable to start debugging on the web server..." dialog is displayed.

    Things tried:

    - Looked at other application directories on the same IIS server; no problem viewing web.config contents or serving up the app.
    - Removed and re-added the application in IIS.
    - Checked out a new version of the source code.
    - Reverted to prior versions of the web.config file.
    - Looked for web.config files that might have duplicate sections in: the Inetpub root; "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config"; the "Views" subfolder of the ASP.Net MVC app.
    - Checked out the source code to another dev machine and set up an IIS 7 app folder; no problem with web.config there.

    Question: If the reason for this error is another web.config file, where else should I look? Are there other reasons for these symptoms? (A scanning sketch follows below.)

    Read the article
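
    One mechanical way to widen the search: walk every .config file under the site root and the framework CONFIG directory and report which ones declare the scriptResourceHandler section. A Python sketch; the site path is a placeholder:

        from pathlib import Path

        ROOTS = [r"C:\inetpub\wwwroot\MyApp",          # placeholder site root
                 r"C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG"]

        for root in ROOTS:
            for cfg in Path(root).rglob("*.config"):
                text = cfg.read_text(errors="ignore")
                hits = text.count("scriptResourceHandler")
                if hits:
                    print(f"{hits:2d}  {cfg}")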

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
    I have created a solution which reads a large csv file, currently 20-30 MB in size. I have tried to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows, but it is so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a csv file? Here's the code; definitely I am doing something wrong. (A faster approach is sketched below.)

        DataTable dtCSV = ReadCsv(file, columns);   // columns is a List<string>
        DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

        private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataView dv = dtCSV.DefaultView;
            string RowFilter = string.Empty;

            if (dt == null)                          // dt is a class-level field
                dt = dv.ToTable().Clone();

            foreach (DataRow row in dtCSV.Rows)
            {
                try
                {
                    RowFilter = string.Empty;
                    foreach (string column in columns)
                    {
                        string col = column;
                        RowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                    }
                    RowFilter = RowFilter.Substring(0, RowFilter.Length - 4);
                    dv.RowFilter = RowFilter;
                    DataRow dr = dt.NewRow();
                    bool result = RowExists(dt, RowFilter);
                    if (!result)
                    {
                        dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                        dt.Rows.Add(dr);
                    }
                }
                catch (Exception ex) { }
            }
            return dt;
        }

    Read the article
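
    The RowFilter scan re-filters the whole DataView for every row, which is O(n^2); a single pass that remembers composite keys in a hash set is O(n). The idea in a Python sketch (file names and key columns are placeholders):

        import csv

        def dedupe_csv(src, dest, key_columns):
            seen = set()
            with open(src, newline="") as fin, open(dest, "w", newline="") as fout:
                reader = csv.DictReader(fin)
                writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
                writer.writeheader()
                for row in reader:
                    key = tuple(row[c] for c in key_columns)
                    if key not in seen:              # first occurrence wins
                        seen.add(key)
                        writer.writerow(row)

        dedupe_csv("input.csv", "deduped.csv", ["Name", "Email"])

    The same trick ports directly to C#: build the composite key per row and test it against a HashSet<string> instead of calling RowFilter.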

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and to "normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, it stores the id of that row in a variable; if the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance is to do with the way in which I'm doing the inserts, i.e.:

        INSERT INTO TableA (first_value) VALUES (argument_from_sp)
          ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything, but rather just return the id value of the row. If you miss this step, though, the LAST_INSERT_ID() function returns the wrong value when you try to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code: The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data-gathering part of the query twice?

        SELECT
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C,
          STATION S,
          YEAR_REF Y,
          MONTH_REF M,
          DAILY D,
          (SELECT
             ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
               (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
             ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
             ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT
               AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM
               CITY C,
               STATION S,
               YEAR_REF Y,
               MONTH_REF M,
               DAILY D
             WHERE
               $X{ IN, C.ID, CityCode } AND
               SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
               S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
               Y.YEAR BETWEEN 1900 AND 2009 AND
               M.YEAR_REF_ID = Y.ID AND
               M.CATEGORY_ID = $P{CategoryCode} AND
               M.ID = D.MONTH_REF_ID AND
               D.DAILY_FLAG_ID <> 'M'
             GROUP BY Y.YEAR
           ) t
          ) ymxb
        WHERE
          $X{ IN, C.ID, CityCode } AND
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          Y.YEAR BETWEEN 1900 AND 2009 AND
          M.YEAR_REF_ID = Y.ID AND
          M.CATEGORY_ID = $P{CategoryCode} AND
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question: How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode } AND
        SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
        S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
        Y.YEAR BETWEEN 1900 AND 2009 AND
        M.YEAR_REF_ID = Y.ID AND
        M.CATEGORY_ID = $P{CategoryCode} AND
        M.ID = D.MONTH_REF_ID AND
        D.DAILY_FLAG_ID <> 'M'

    Related: http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!

    Read the article

  • Duplicate Symbol Linker Error (C++ help)

    - by Vash265
    Hi. I'm learning some CSP (constraint satisfaction) theory stuff right now, and am using this library to parse XML files. I'm using Xcode as an IDE. My program compiles fine, but when it goes to link the files, I get a duplicate symbol error involving the XMLParser_libxml2.hh file. My files are separated as such:

    - a class header file that includes the XMLParser file above
    - a class implementation file that includes the class header file
    - a main file that includes the class header file

    The duplicate symbol is occurring in main.o and classfile.o, but as far as I can tell, I'm not actually adding that .hh file twice. Full error:

        ld: duplicate symbol
        bool CSPXMLParser::UTF8String::to<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const
        in /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/dStructFill.o
        and /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/main.o

    Copying the implementation of the class into the main file and taking the class implementation file out of the compilation target alleviates the error, but it's a disorganized mess this way, and I'll be adding more classes very soon (and it would be nice to have them in separate files). As I've come to understand it, this is caused by the file (XMLParser_libxml2.hh) having both the class and function definitions and implementations in one file (and it seems as though this might have been necessary due to the use of templates in that 'header' file). Any ideas on how to get around sticking all my class files in my main.cpp? (I've tried ifdefs; they don't work.)

    Read the article

  • Entityframework duplicate record on second insert

    - by Delysid
    I am building an application for recipe/meal planning, and I have come across a problem I can't seem to figure out. I have a table for units of measure, where I keep the units in use; I only want unique units in there (for grocery-list calculation and so forth). But if I use a unit from the table in a recipe, the first time it is okay and nothing is inserted in units of measure, but the second time I get a "duplicate". I suspect it has something to do with the EntityKey, because the primary key is an identity column on the SQL Server (2008 R2). For some reason it works to change the object state on some objects (courses, see code) and that does not generate a duplicate, but that does not work on the unit of measure. My insert method looks like this:

        public recipe Create(recipe recipe)
        {
            using (RecipeDataContext ctx = new RecipeDataContext())
            {
                foreach (recipe_ingredient rec_ing in recipe.recipe_ingredient)
                {
                    if (rec_ing.ingredient.ingredient_id == 0)
                    {
                        ingredient ing = (from _ing in ctx.ingredients
                                          where _ing.name == rec_ing.ingredient.name
                                          select _ing).FirstOrDefault();
                        if (ing != null)
                        {
                            rec_ing.ingredient_id = ing.ingredient_id;
                            rec_ing.ingredient = null;
                        }
                    }
                    if (rec_ing.unit_of_measure.unit_of_measure_id == 0)
                    {
                        unit_of_measure _uom = (from dbUom in ctx.unit_of_measure
                                                where dbUom.unit == rec_ing.unit_of_measure.unit
                                                select dbUom).FirstOrDefault();
                        if (_uom != null)
                        {
                            rec_ing.unit_of_measure_id = _uom.unit_of_measure_id;
                            rec_ing.unit_of_measure = null;
                        }
                    }
                    ctx.Recipes.AddObject(recipe);
                    // for some reason it works to change the object state of this, and that does not generate a duplicate
                    ctx.ObjectStateManager.ChangeObjectState(recipe.courses[0], EntityState.Unchanged);
                }
                ctx.SaveChanges();
            }
            return recipe;
        }

    My data model looks like this: http://i.imgur.com/NMwZv.png

    Read the article

  • BIND - why duplicate nameserver entries (@ and *)?

    - by user27465
    I had to manually tweak my DNS service provider's BIND zone file. The file as created by the professional hosting company, before:

        $ORIGIN mycoolsite.com.
        $TTL 300
        @    SOA ns1.cheapreg.com. registry.cheapreg.com. ( ... )
        @    IN 3600 NS ns1.cheapreg.com.
        @    IN 3600 NS ns2.cheapreg.com.
        @    IN 3600 A 199.9.99.85
        @    IN 3600 A 199.9.99.86
        *    IN 3600 A 199.9.99.85
        *    IN 3600 A 199.9.99.86
        www  IN 3600 A 199.9.99.85
        www  IN 3600 A 199.9.99.86

    The file as created by a layman, after:

        $ORIGIN mycoolsite.com.
        $TTL 300
        @    SOA ns1.cheapreg.com. registry.cheapreg.com. ( ... )
        @    IN 3600 NS ns1.cheapreg.com.
        @    IN 3600 NS ns2.cheapreg.com.
        *    IN 3600 A 219.94.116.50
        *    IN 3600 A 219.94.116.51
        *    IN 3600 A 219.94.116.52

    The difference is that the "pro" file has duplicated the address (A) records, once for @ and once for *, and I haven't. Is there any reason I should also duplicate the entries for both @ and *?

    Read the article
