Search Results

Search found 695 results on 28 pages for 'deletes'.

Page 10/28

  • Oracle 10.2.0.1 --> 10.2.0.4 patchset errors on Advanced Queuing tables. Serious or not?

    - by hurfdurf
    We're running Oracle on RHEL 5.4 64-bit. We recently upgraded from 10.2.0.1 to 10.2.0.4. Many errors were generated during the upgrade (sample from trace.log below), but during application testing afterward everything seemed fine (clean EXP, inserts, updates, deletes, etc.). The errors all look related to Advanced Queuing tables and views. We are not using replication at all; this is a simple single-instance db.

        ORA-24002: QUEUE_TABLE SYS.AQ_EVENT_TABLE does not exist
        ORA-24032: object AQ$_AQ_SRVNTFN_TABLE_T exists, index could not be created
        ORA-24032: object AQ$_ALERT_QT_S exists, index could not be created for queue
        ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 117
        ORA-06512: at "SYS.DBMS_AQADM_SYS", line 5116

    Is this worth worrying about, and if so, how do I go about cleaning up/recreating the corrupted and/or missing objects?
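
    A first diagnostic step (a sketch, not a fix; the AQ$ name-prefix match is an assumption about how the affected objects are named) is to ask the dictionary which AQ-related objects are invalid or missing:

        -- List AQ-related objects that are not VALID (run as a DBA user)
        SELECT owner, object_name, object_type, status
        FROM   dba_objects
        WHERE  object_name LIKE 'AQ$%'
        AND    status <> 'VALID'
        ORDER BY owner, object_name;

    Whatever this returns is the list worth confirming (and handing to support) before recreating anything with DBMS_AQADM.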

    Read the article

  • FreeNX Server w/ nxagent 3.5 not able to create shadow sessions

    - by Jenna Whitehouse
    I am running a FreeNX server on Ubuntu 11.10 and am unable to do session shadowing. I get the authorization prompt, but the shadow client crashes afterward. The NX server log in the user's .nx directory is as follows:

        Error: Aborting session with 'Server is already active for display 3000
        If this server is no longer running, remove /tmp/.X3000-lock and start again'.
        Session: Aborting session at 'Mon Oct 1 14:26:44 2012'.
        Session: Session aborted at 'Mon Oct 1 14:26:44 2012'.

    This then deletes the lock file (which is the lock file for the initial Unix session) and crashes out. Everything works for a normal session, and shadowing works up to the authorization prompt. I am using this software:

        Ubuntu 11.10
        freenx-server 0.7.3.zgit.120322.977c28d-0~ppa11
        nx-common 0.7.3.zgit.120322.977c28d-0~ppa11
        nxagent 1:3.5.0-1-2-0ubuntu1ppa8
        nxlibs 1:3.5.0-1-2-0ubuntu1ppa8

    Any help is appreciated, thanks!

    Read the article

  • DDD and Value Objects. Are mutable Value Objects a good candidate for Non Aggr. Root Entity?

    - by Tony
    Here is a little problem. I have an entity with a value object. Not a problem. I replace the value object with a new one; NHibernate inserts the new value, orphans the old one, then deletes it. Ok, that's a problem. Insured is my entity in my domain. He has a collection of Addresses (value objects). One of the addresses is the MailingAddress. When we want to update the mailing address (let's say the zipcode was wrong), following Mr. Evans' doctrine, we must replace the old object with a new one since it's immutable (a value object, right?). But we don't want to delete the row, though, because that address's PK is a FK in a MailingHistory table. So, following Mr. Evans' doctrine, we are pretty much screwed here. Unless I make my addresses Entities, so I don't have to "replace" them, and can simply update the zipcode member, like in the good old days. What would you suggest in this case? The way I see it, ValueObjects are only useful when you want to encapsulate a group of database table columns (a component in NHibernate). Everything that has a persistence id in the database is better off made an Entity (not necessarily an aggregate root), so you can update its members without recreating the whole object graph, especially if it's a deep-nested object. Do you concur? Is it allowed by Mr. Evans to have a mutable value object? Or is a mutable value object a candidate for an Entity? Thanks
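
    A minimal sketch of the promotion being asked about (class and property names are illustrative, not from the actual domain): once other tables hold its PK, Address gets an identity and becomes an entity NHibernate can update in place:

        // Address promoted from value object to entity: it now has an Id,
        // so MailingHistory can keep referencing the same row while the
        // zipcode is corrected in place instead of replace-and-orphan.
        public class Address
        {
            public virtual int Id { get; protected set; }  // PK, referenced as a FK elsewhere
            public virtual string Street { get; set; }
            public virtual string ZipCode { get; set; }    // mutable: fix the typo directly
        }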

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now the issue I'm having with being able to run the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run them in parallel against the same database without uniquely generating the test data fields for each test. For example: to test creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. To test that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar and then use the UI to try and do the exact same thing, which displays an error message in the UI as expected. These tests work perfectly when run separately and serially. But I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
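
    The usual way out is exactly the unique generation mentioned above: namespace every row a test touches with a per-test token, so no two tests can collide. A minimal sketch (field names are illustrative):

        import uuid

        def unique_test_row():
            # Each test gets its own token, so "foo"/"bar" collisions between
            # parallel runs become impossible; cleanup can match on the token.
            token = uuid.uuid4().hex[:8]
            return {"column_a": f"foo-{token}", "column_b": f"bar-{token}"}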

    Read the article

  • EBS+RAID10+XFS slower than EBS+RAID10+EXT3 using MySQL?

    - by Johann Tagle
    We're currently using EC2 with 16 EBS volumes in a RAID10 configuration for our MySQL data. I know some people don't recommend putting EBS volumes in RAID, but that's not what I'm concerned about at the moment. The current format is ext3, but we're experimenting with moving to xfs, given many reports that it is faster. However, we're actually experiencing a performance degradation after the partition was converted to xfs: a benchmark run with inserts, updates, selects and deletes was more than 10 seconds slower using xfs. Any idea what could be the problem? Below is the fstab entry (the only change was ext3 to xfs). The database tables are InnoDB and we are using innodb_file_per_table.

        /dev/mapper/vg_data-lv_data /data xfs noatime 0 0

    Thanks.
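
    One thing commonly tried in this situation (an assumption worth benchmarking, not a guaranteed fix) is giving XFS the mount options usually paired with an EBS-style write layer, since XFS write barriers are costly where ext3 often ran without them:

        # fstab sketch: nobarrier disables write barriers (only sensible when
        # the storage layer handles durability); larger log buffers help
        # metadata-heavy InnoDB workloads. Benchmark each change separately.
        /dev/mapper/vg_data-lv_data /data xfs noatime,nobarrier,logbufs=8,logbsize=256k 0 0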

    Read the article

  • Deleting a row from self-referencing table

    - by Jake Rutherford
    Came across this the other day and thought "this would be a great interview question!" I'd created a table with a self-referencing foreign key. The application was calling a stored procedure I'd created to delete a row, which caused... but of course... a foreign key exception. You may say "why not just use the cascade delete option?" Good question, easy answer. With a typical foreign key relationship between different tables, that would work. However, even SQL Server cannot do a cascade delete of a row on a table with a self-referencing foreign key. So, what do you do?...... In my case I re-wrote the stored procedure to take advantage of recursion:

        -- recursively deletes a Foo
        ALTER PROCEDURE [dbo].[usp_DeleteFoo]
            @ID int
            ,@Debug bit = 0
        AS
            SET NOCOUNT ON;

            BEGIN TRANSACTION
            BEGIN TRY
                DECLARE @ChildFoos TABLE
                (
                    ID int
                )

                DECLARE @ChildFooID int

                INSERT INTO @ChildFoos
                SELECT ID FROM Foo WHERE ParentFooID = @ID

                WHILE EXISTS (SELECT ID FROM @ChildFoos)
                BEGIN
                    SELECT TOP 1
                        @ChildFooID = ID
                    FROM
                        @ChildFoos

                    DELETE FROM @ChildFoos WHERE ID = @ChildFooID

                    EXEC usp_DeleteFoo @ChildFooID
                END

                DELETE FROM dbo.[Foo]
                WHERE [ID] = @ID

                IF @Debug = 1 PRINT 'DEBUG:usp_DeleteFoo, deleted - ID: ' + CONVERT(VARCHAR, @ID)

                COMMIT TRANSACTION
            END TRY
            BEGIN CATCH
                ROLLBACK TRANSACTION

                DECLARE @ErrorMessage VARCHAR(4000), @ErrorSeverity INT, @ErrorState INT
                SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE()
                IF @ErrorState <= 0 SET @ErrorState = 1

                INSERT INTO ErrorLog(ErrorNumber,ErrorSeverity,ErrorState,ErrorProcedure,ErrorLine,ErrorMessage)
                VALUES(ERROR_NUMBER(), @ErrorSeverity, @ErrorState, ERROR_PROCEDURE(), ERROR_LINE(), @ErrorMessage)

                RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
            END CATCH

    This procedure first determines any rows which have the row we wish to delete as their parent. It then iterates over the child rows, calling the procedure recursively for each, so that all descendants are deleted before it eventually deletes the row we asked for.
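
    On SQL Server 2005 and later, a set-based alternative is possible (a sketch under that assumption, reusing the same Foo/ParentFooID schema): collect the whole subtree with a recursive CTE, then delete deepest-first so the self-referencing FK is never violated:

        -- Gather the row and all of its descendants, tagging each with its depth.
        WITH Subtree AS (
            SELECT ID, 0 AS Depth FROM dbo.Foo WHERE ID = @ID
            UNION ALL
            SELECT f.ID, s.Depth + 1
            FROM dbo.Foo AS f
            JOIN Subtree AS s ON f.ParentFooID = s.ID
        )
        SELECT ID, Depth INTO #ToDelete FROM Subtree
        OPTION (MAXRECURSION 0);  -- allow trees deeper than the default 100 levels

        -- Delete children before parents: deepest level first.
        DECLARE @Depth int;
        SELECT @Depth = MAX(Depth) FROM #ToDelete;
        WHILE @Depth >= 0
        BEGIN
            DELETE f
            FROM dbo.Foo AS f
            JOIN #ToDelete AS t ON t.ID = f.ID AND t.Depth = @Depth;
            SET @Depth = @Depth - 1;
        END

        DROP TABLE #ToDelete;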

    Read the article

  • How to reset a Fritz!Box DSL router to factory settings?

    - by damluar
    I need to change the settings of a Fritz!Box router. The settings were set by another person. I can't connect to the router using a cable; maybe the standard port or address was changed. So the only option is to reset to factory settings. Usually there is a button which deletes all the settings. I read the documentation, and it says to dial #991*15901590* on the router. Do I have to connect a telephone to the router?

    Read the article

  • sending mail using mutt + emacs

    - by lakshmipathi
    How do I send mail from emacs? I have added the From address and subject and am trapped inside emacs. I found this:

        There are two ways to send the message. C-c C-s (mail-send) sends the message and marks the mail buffer unmodified, but leaves that buffer selected so that you can modify the message (perhaps with new recipients) and send it again. C-c C-c (mail-send-and-exit) sends and then deletes the window or switches to another buffer.

    But both (Ctrl+c Ctrl+s) and (Ctrl+c Ctrl+c) are not working. PS: though it's not programming related, it's programmer-environment related, so I'm hoping it won't be closed :)

    Read the article

  • Data Pump: Consistent Export?

    - by Mike Dietrich
    Ouch... I have to admit, as I did say in several workshops in the past weeks, that a Data Pump export with expdp is per se consistent. Well... I thought it was... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment: we were discussing parameters in expdp's par file, I asked my colleagues, and I did some research on MOS. Here are the results of my "research": MOS Note 377218.1 has a nice example showing a Data Pump export of a partitioned table, with DELETEs running on that table, coming out inconsistent. Background: back in the old 9i days when Data Pump was designed, flashback technology wasn't as popular and well known as today, and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why, similar to good ol' exp, the export won't operate in consistency mode per default. To get a consistent Data Pump export with expdp you'll have to set FLASHBACK_TIME=SYSTIMESTAMP in your parameter file. Then it will be consistent according to the timestamp when the process was started. You could use FLASHBACK_SCN instead, determining the SCN beforehand, if you'd like to be exact. So sorry if I had proclaimed a feature which unfortunately is not there by default - Mike
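
    A minimal parameter file showing the fix in context (directory, dump file and log file names are placeholders):

        # expdp system/... parfile=consistent_full.par
        # flashback_time makes the export consistent as of job start
        directory=DATA_PUMP_DIR
        dumpfile=full_consistent.dmp
        logfile=full_consistent.log
        full=y
        flashback_time=systimestamp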

    Read the article

  • Can't double-click files to open them in InDesign (CS5)

    - by Matt
    I cannot open a file unless I open InDesign (the program) and then do File > Open. If I double-click, it starts to open, then just hangs forever. After I close it and look in the directory where the files are saved, I see a (temporary?) "lock" file. Now I can double-click the original file and it opens just fine. However, when I close InDesign it deletes the lock file and the whole process starts again... I have tried updating the software, uninstalled COMPLETELY and reinstalled, and tried a brand new Win7 install. These files are all saved on a network drive; the computer is a new quad-core Dell with 12GB of RAM and a fresh x64 Win7 install on the SSD. It does not happen with other programs.

    Read the article

  • Windows 7, files reappear after deletion.

    - by HeavyWave
    I'm trying to delete some files from a folder. I've taken ownership of the files and the folder. When I delete these files Windows doesn't report any errors and deletes them. BUT, after I press F5 these files reappear again. There are no messages whatsoever; they are just undeletable. I know logging off will help, but how do I fix it without going through the pain of closing everything down? P.S. The files disappear from the folder after approx. 5 minutes. Update: it turns out my version of Windows did not properly upgrade from the test version, so it had some weird disk drive issues.

    Read the article

  • Thoughts on Apache log file sizes?

    - by Nathan Long
    Do you place any limits on the size of Apache log files (access.log and error.log)? Specifically, can you give:

    Reasons to limit log file sizes
      - Disk space
      - Any other?
    Reasons NOT to limit log file sizes
      - Research into performance issues or security breaches
      - Any other?
    Methods of doing so
      - A cron job that periodically deletes the file, or its first N lines?
      - Any other? (See the sketch below.)
    Anything you might salvage before deleting
      - For example, grep out how many times a file was downloaded before deleting the access logs

    I'd like to get the thoughts of experienced sysadmins before I do anything. (Marking as community wiki since this may be a matter of opinion.)
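
    For the "methods" bullet: the standard alternative to a hand-rolled cron delete is logrotate, which ships with most distributions. A sketch (paths and the reload command vary by distro):

        # /etc/logrotate.d/apache - rotate daily, keep two weeks compressed
        /var/log/apache2/*.log {
            daily
            rotate 14
            compress
            delaycompress
            notifempty
            sharedscripts
            postrotate
                /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
            endscript
        }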

    Read the article

  • Can I configure mod_proxy to use different parameters based on HTTP Method?

    - by Graham Lea
    I'm using mod_proxy as a failover proxy with two balance members. While mod_proxy marks dead nodes as dead, it still routes one request per minute to each dead node and, if it's still dead, will either return 503 to the client (if maxattempts=0) or retry on another node (if it's greater than 0). The backends are serving a REST web service. Currently I have set maxattempts=0 because I don't want to retry POSTs and DELETEs. This means that when one node is dead, each minute a random client will receive a 503. Unfortunately, most of our clients are interpreting codes like 503 as "everything is dead" rather than "that didn't work but please try that again". In order to program some kind of automatic retry for safe requests at the proxy layer, I'd like to configure mod_proxy to use maxattempts=1 for GET and HEAD requests and maxattempts=0 for all other HTTP methods. Is this possible? (And how? :)
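
    One approach that should work (a sketch, untested here; backend addresses are placeholders, and mod_rewrite plus mod_proxy_balancer must be loaded) is to define two balancers over the same members, differing only in maxattempts, and route safe methods to the retrying one with mod_rewrite:

        <Proxy balancer://retrying>
            BalancerMember http://10.0.0.1:8080 retry=60
            BalancerMember http://10.0.0.2:8080 retry=60
            ProxySet maxattempts=1
        </Proxy>
        <Proxy balancer://noretry>
            BalancerMember http://10.0.0.1:8080 retry=60
            BalancerMember http://10.0.0.2:8080 retry=60
            ProxySet maxattempts=0
        </Proxy>

        RewriteEngine On
        # GET/HEAD may be retried once; everything else fails fast.
        RewriteCond %{REQUEST_METHOD} ^(GET|HEAD)$
        RewriteRule ^/(.*)$ balancer://retrying/$1 [P,L]
        RewriteRule ^/(.*)$ balancer://noretry/$1 [P,L]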

    Read the article

  • Script for checking no-login accounts and then disabling the account

    - by suma
    "Could you please share the scripts which does the below ?" I have written a script that scans all the relevent logs daily, makes a list of people that have had any activity that day, and maintains database (just a text file) of users and the last time they logged in. Then I have a second script that examines the database for dates more than x days ago, an notifies the user and administrator 2 weeks prior to locking the account. And if there are any dates more than x+y days ago, deletes the account altogether. This seems to be working for me - but I would like to use a non-proprietary solution if one is available. "Could you please share the scripts?"

    Read the article

  • Automated acceptance tests under specific constraints

    - by HH_
    This is a follow-up to my previous question, which was a bit general, so I'll ask about a more precise situation. I want to automate acceptance testing on a web application. Briefly, this application allows the user to create contracts for subscribers under two constraints: you cannot create more than one contract for a subscriber, and once a contract is created, it cannot be deleted (from the UI). Let's say TestCreate is a test case with tests for the normal creation of a contract. The constraints have introduced complexities to the testing process, mainly dependencies between test cases and test executions. Before we run TestCreate we need to make sure that the application is in a suitable state (the subscriber has no contract). If we run TestCreate twice, the second run will fail, since the state of the application will have changed. So we need to revert back to the initial state (i.e. delete the contract), which is impossible to do from the UI. More generally, after each test case we should guarantee that the state is reverted; and since, in this case, it is impossible to do from the UI, how do you handle this? Possible solution: I thought about taking a backup of the database in the state that I desire, and after each test case running a script which drops the db and restores the backup. However, I find that too heavy to do for each single test case. In addition, what if some information is stored in files? Or in multiple or inaccessible databases? My question: in this situation, what would an experienced tester do to write automated and maintainable tests? Thank you. More info: I'm trying to integrate the tests into a BDD framework, which I find to be a neat solution for test documentation and communication, but it does not solve this particular problem (it even makes it harder).
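
    A common answer is a backdoor teardown: revert only what each test touched, through the database or an internal API, instead of restoring a whole dump. A sketch in Python (table and column names are invented for illustration; sqlite3 stands in for the real driver):

        import contextlib
        import sqlite3  # stand-in for the real database driver

        @contextlib.contextmanager
        def clean_subscriber(conn, subscriber_id):
            """Guarantee the subscriber has no contract before and after a test."""
            conn.execute("DELETE FROM contracts WHERE subscriber_id = ?", (subscriber_id,))
            conn.commit()
            try:
                yield subscriber_id
            finally:
                # Undo what the UI cannot: remove any contract the test created.
                conn.execute("DELETE FROM contracts WHERE subscriber_id = ?", (subscriber_id,))
                conn.commit()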

    Read the article

  • Cannot play Windows WMA lossless files in Rhythmbox

    - by sr71
    I have installed Ubuntu 10.10 with all its updates (without Windows) on its own drive and everything is working fine. I want to play WMA audio files, and also mp3 files. The mp3 files play fine; the WMA files do not. I used Rhythmbox Music Player with and without ubuntu-restricted-extras, and it still does not play the lossless Windows audio files. I am frustrated with searching for how to play a WMA file ("download this converter"), but one cannot use this until one "deletes this". I have done everything, but it still does not play the lossless files that I made from all my CDs. I am looking for a music player that I can use to play mp3 and WMA lossless music files and that automatically puts the album cover on and updates the info if one exists. Installation should be as simple as possible. Right now I am back to the original virgin Ubuntu 10.10 with all the recent updates. This computer will do nothing but play music (mp3 and WMA) through a stereo system. I also use the Internet to update album info for the music. I do not care what bells and whistles the music player program has, as long as it is an easy install and just plays my mp3 and WMA lossless music files. Any help would be appreciated.

    Read the article

  • Uninstall MySQL on Linux with Plesk

    - by Arsenal
    I'm having trouble uninstalling MySQL on my CentOS 4 server that has Plesk. I'm actually trying to upgrade my MySQL 4.1 version to MySQL 5.0 using the following command:

        yum update mysql

    I get an error list of conflicting files, however. So I try to remove MySQL 4.1 and perform a clean install, but when I use

        yum remove mysql*

    it deletes all of its dependencies, and apparently some of these are packages needed by Plesk, which causes my Plesk to stop working. I did a full restore and everything is okay now, but how can I remove MySQL without ruining Plesk? I have also tried

        rpm -qa | grep mysql

    to get a list of all packages and remove them one by one, but there's a duplicate in that list, so I can't delete those (because it says it doesn't know which one to take). Any help would be greatly appreciated!
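
    One commonly used escape hatch (a sketch: the package names below are examples, so use the exact names rpm -qa prints; duplicate entries usually mean both i386 and x86_64 builds are installed, which you can disambiguate as name.arch) is to remove only the MySQL packages themselves, ignoring dependencies, then install the new version on top:

        # Remove just the MySQL packages, leaving dependents (Plesk) in place.
        rpm -qa | grep -i '^mysql'
        rpm -e --nodeps mysql-4.1.22-2.el4.i386 mysql-server-4.1.22-2.el4.i386
        yum install mysql mysql-server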

    Read the article

  • IMAP server woes with Android Gingerbread email and Thunderbird

    - by Mojo
    I run my own mail server and use UW's imapd/popd daemons to provide service. This week I upgraded my OG Droid to a new Droid 3, running Android 2.3.4 (Gingerbread). The email client is much improved over the previous one. But now I get a bad interaction when I try to access email over IMAP from Thunderbird on a laptop or desktop. Frequently Thunderbird will stop receiving any email at all, and mail will appear only on the Droid. Sometimes a Thunderbird restart will make the mail appear, but none of my "deletes" will be recorded, so when I start Thunderbird again, all my old email reappears. If I kill all of the open imap daemons and restart xinetd, I can force it to behave for maybe a session. I've tried turning off IDLE service (push email) on both sides, to no apparent avail. I've also tried installing DroidMail, with the same result.

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
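
    They can: without hashed directory indexes, ext3 scans directory entries linearly, and the directory file itself never shrinks after deletes. A sketch of the usual remedy (the device name is a placeholder; run against an unmounted filesystem):

        # Enable hashed b-tree directory lookups, then rebuild/optimize
        # the existing directories.
        tune2fs -O dir_index /dev/sdX1
        e2fsck -fD /dev/sdX1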

    Read the article

  • Is this technique for stat tracking without a database workable?

    - by baptzmoffire
    If I wanted to create a chess game for iOS that tracked both players' moves (for retracing the progression of a game and for player stats), what would be the simplest route to take? To clarify, I want to track not only the moves a player has made in a particular game, but how often that player has made that move in past games. For example, I want to be able to track: how many times a given player has opened by moving the king pawn up two squares (e4) as white, on move number one; what percentage of the time the player responds to white's e4 opening move by moving his own king pawn to e5; what percentage of the time he responds by moving his queenside bishop pawn to c5; and so on. If it's not clear, the stat-tracking system should also be able to report how many times this player, as black, moved his queen to h1 on move number 30. I'm using Parse.com as my back-end (BaaS) service. Suppose I were to create a class that writes strings identifying move number, player color, moved piece, and the algebraic notation of the square (e.g. "d8") to a file locally in the file system, saves the file to Parse, and deletes the temporary file; then, upon opening the same game in my table view (a la a "With Friends" game), downloads this file from Parse, parses through it, retrieves all stats/history, and assigns all relevant values to variables. Is this plan viable, or is there an easier way?
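
    For the stats side, it may help to see why structured move records beat opaque files: with each half-move stored as its own record, every question above becomes a counting query. A sketch in Python (the record fields are invented for illustration, and this is local aggregation, not Parse's query API):

        from collections import Counter

        # One record per half-move; with records like these stored per game,
        # "how often does this player open 1.e4?" is just a filtered count.
        moves = [
            {"game": 1, "move": 1, "color": "w", "square": "e4", "piece": "P"},
            {"game": 2, "move": 1, "color": "w", "square": "d4", "piece": "P"},
            {"game": 3, "move": 1, "color": "w", "square": "e4", "piece": "P"},
        ]
        openings = Counter(m["square"] for m in moves
                           if m["move"] == 1 and m["color"] == "w")
        print(openings["e4"], "of", len(moves), "games opened with e4")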

    Read the article

  • How to avoid game objects accidentally deleting themselves in C++

    - by Tom Dalling
    Let's say my game has a monster that can kamikaze-explode on the player. Let's pick a name for this monster at random: a Creeper. So, the Creeper class has a method that looks something like this:

        void Creeper::kamikaze() {
            EventSystem::postEvent(ENTITY_DEATH, this);
            Explosion* e = new Explosion;
            e->setLocation(this->location());
            this->world->addEntity(e);
        }

    The events are not queued; they get dispatched immediately. This causes the Creeper object to get deleted somewhere inside the call to postEvent. Something like this:

        void World::handleEvent(int type, void* context) {
            if (type == ENTITY_DEATH) {
                Entity* ent = static_cast<Entity*>(context);
                removeEntity(ent);
                delete ent;
            }
        }

    Because the Creeper object gets deleted while the kamikaze method is still running, it will crash when it tries to access this->location(). One solution is to queue the events into a buffer and dispatch them later. Is that the common solution in C++ games? It feels like a bit of a hack, but that might just be because of my experience with other languages with different memory management practices. In C++, is there a better general solution to this problem where an object accidentally deletes itself from inside one of its methods?
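
    Deferred destruction is the other common pattern besides queuing the events themselves: handlers only mark entities dead, and the world reaps them at a safe point in the frame. A sketch (markForRemoval/isMarkedForRemoval and the entities container are hypothetical helpers, not from the code above):

        void World::handleEvent(int type, void* context) {
            if (type == ENTITY_DEATH) {
                Entity* ent = static_cast<Entity*>(context);
                ent->markForRemoval();  // no delete while callers may still be on the stack
            }
        }

        void World::endOfFrame() {
            // Reap dead entities once all event handlers have returned.
            for (auto it = entities.begin(); it != entities.end(); ) {
                if ((*it)->isMarkedForRemoval()) {
                    delete *it;
                    it = entities.erase(it);
                } else {
                    ++it;
                }
            }
        }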

    Read the article

  • Schedule a batch file with parameters containing spaces

    - by Danilo Brambilla
    Hi, I need to schedule a task on Windows Server 2003 that executes this script, which deletes files older than n days in the specified folder. The script takes 3 parameters:

        %1  path to the folder where files need to be deleted
        %2  file names (e.g. *.log)
        %3  number of days

        @echo off
        forfiles -p %1 -s -m %2 -d -%3 -c "cmd /c del /q @path"

    The script works fine if the first parameter has no spaces inside. This is an example of parameters that work:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" N:\FOLDER\FOLDER *.zip 60

    This is an example that does not work:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" N:\Program Files\LOG *.zip 60

    This does not work either:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" "N:\Program Files\LOG" *.zip 60

    I think it is a quoting problem, but I can't figure out the solution. I'd prefer not to hard-code the values in the script, if possible. Thank you all for your help.
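
    A sketch of the usual fix: %~1 strips any quotes the caller supplied, and re-quoting it as "%~1" keeps a path with spaces together as a single forfiles argument (the quoted third example above should then work):

        @echo off
        rem %~1 = first argument without surrounding quotes; re-quote it for forfiles
        forfiles -p "%~1" -s -m %2 -d -%3 -c "cmd /c del /q @path"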

    Read the article

  • Podcast aggregation - please recommend for Windows 7

    - by bazza-formez
    Hi, I need a good podcast aggregator. I was using Juice 2.2, but that will not work on my new PC, which is running Windows 7 (64-bit). Could anyone recommend one, please? I need the following functionality:

    1. Subscribe to podcasts (mp3s from radio shows) using RSS
    2. OPML support, so I can load my old subscriptions easily
    3. Runs quietly in the background and looks after itself
    4. Deletes old episodes automatically after a set time
    5. Isn't designed just for an iPod (I use a simple generic mp3 player to listen)

    Any ideas? Thanks! Bazza

    Read the article

  • What is the best drive cleaner?

    - by allindal
    What is the best drive "cleaner" application, meaning an application that deletes roaming data, temp files, and various useless caches? Something similar to CCleaner, but more powerful. I need it to delete more than the basic stuff, like duplicates of complex files or redundancies (for example, every game ships its own copy of the DirectX suite), without deleting program-essential files, obviously. I know most of this has to do with my selection of these programs, but I haven't seen anything that lets me select types of files to delete, not just specific files.

    Read the article

  • Advice on designing a web application with a 40+ year lifetime

    - by user2708395
    Scenario: Currently, I am part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms created by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are: workflow management of forms, schedule management of forms, security/role-based management, a reporting engine, and mobile/tablet support.

    Situation: Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing, since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated, and there are jobs just to synchronize data because the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task of re-architecting this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirements freeze in which we can focus on rebuilding this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization, the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting.

    Questions: What design considerations will make the system more "future proof"? What experiences have you had in designing such systems, both failures and successes? What questions should be asked of the client/PM to make the system more "future proof"?

    Read the article
