Search Results

Search found 10463 results on 419 pages for 'task tracking'.


  • How to refresh a parent page #Rails

    - by sameera
    Hi guys, I have the following requirement. I have a model called Task to display user tasks: 1. A link to add a new task (on the tasks index page). 2. When a user clicks the link, the 'tasks/new' action opens in a popup. 3. When the user saves the new task, I want to close the 'new task' popup and refresh the parent page 'tasks/index' so the new task is displayed. I guess I will have to execute a page-reload JavaScript at the end of the 'tasks/create' action, but I'm not sure how to. Can anyone help me out to make this happen? Thanks in advance. Cheers, sameera
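    One way this is commonly handled (a minimal sketch, assuming a plain window.open popup and a Rails version that supports rendering inline JavaScript) is to have the create action respond with a script that reloads the opener and closes the popup:

        # tasks_controller.rb -- sketch, not the poster's actual code
        def create
          @task = Task.new(params[:task])
          if @task.save
            # reload the parent (opener) window, then close the popup
            render :js => "window.opener.location.reload(); window.close();"
          else
            render :action => 'new'
          end
        end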


  • What's the best way to return a subset of a list

    - by Pikrass
    I have a list of tasks. A task is defined by a name, a due date and a duration. My TaskManager class handles a std::list<Task> sorted by due date. It has to provide a way to get the tasks due on a specific date. How would you implement that? I think a good way (from an API point of view) would be to provide a std::list<Task>::iterator pair, so I would have a TaskManager::begin(date) method. Do you think this method should get the iterator by iterating from the start of the list until it finds the first task due on that date, or by getting it from a std::map<date, std::list<Task>::iterator> (but then we have to keep it up to date when adding or removing tasks)? And then, how could I implement the TaskManager::end(date) method?
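    Since the list is kept sorted by due date, both ends of the range can come from a single binary-search-style call. A sketch, assuming Task exposes its due date and Date is comparable (member and method names are illustrative):

        // std::equal_range works with std::list's bidirectional iterators:
        // traversal is linear, but only O(log n) comparisons are made
        #include <algorithm>
        #include <list>
        #include <utility>

        struct DueDateCmp {
            bool operator()(const Task& t, const Date& d) const { return t.dueDate() < d; }
            bool operator()(const Date& d, const Task& t) const { return d < t.dueDate(); }
        };

        std::pair<std::list<Task>::const_iterator, std::list<Task>::const_iterator>
        TaskManager::tasksDueOn(const Date& d) const {
            return std::equal_range(tasks_.begin(), tasks_.end(), d, DueDateCmp());
        }

    The returned pair plays the role of begin(date)/end(date) in one shot, so there is no separate end(date) to implement.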


  • Ant: how do I disable all non-error messages?

    - by java.is.for.desktop
    Hello, everyone! When running Ant from the command line on my NetBeans projects, I get the following messages hundreds of times, which is very annoying: Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/3:javac Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/3:depend Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/1:nbjpdastart Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/3:debug Trying to override old definition of task http://www.netbeans.org/ns/j2se-project/1:java Depending on the kind of project, there can be many more such lines. And this is with the -q or -quiet option. Any idea how to disable these messages? Thank you!
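    These lines are emitted as warnings, which is why -quiet does not silence them. If no Ant option helps, one pragmatic workaround is simply to filter them out of the output at the shell level:

        # not an Ant feature, just post-processing the output
        ant -quiet 2>&1 | grep -v "Trying to override old definition"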


  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are "upgraded" in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So existing builds will run just fine after the upgrade.

    But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don't have time to rewrite? One option is to keep these MSBuild scripts and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is available in the toolbox in the Team Foundation Build Activities section. This activity wraps the call to MSBuild.exe and exposes a number of parameters. Most of them are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties:

        CommandLineArguments: use this to send in/override MSBuild properties in your project, e.g. "/p:MyProperty=SomeValue", or MSBuildArguments (which lets you define the arguments in the build definition or when queuing the build)
        LogFile: name of the log file where MSBuild will log the output, e.g. "MyBuild.log"
        LogFileDropLocation: location of the log file, e.g. BuildDetail.DropLocation + "\log"
        Project: the project to execute, e.g. SourcesDirectory + "\BuildExtensions.targets"
        ResponseFile: the name of the MSBuild response file, e.g. SourcesDirectory + "\BuildExtensions.rsp"
        Targets: the target(s) to execute, e.g. New String() {"Target1", "Target2"}
        Verbosity: logging verbosity, e.g. Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal

    Integrating with Team Build

    If your MSBuild scripts try to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task:

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
          <Target Name="MyTarget">
            <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                       BuildUri="$(BuildUri)"
                       Name="MyBuildStep"
                       Message="My build step executed"
                       Status="Succeeded"></BuildStep>
          </Target>
        </Project>

    When executing this file using the MSBuild activity, calling the MyTarget target, it will fail with the following message:

        The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.

    You can see that the path to ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFoundationServerUrl and BuildUri properties.
    One solution here is to pass these properties in using the CommandLineArguments parameter. Here we pass in the parameters with the corresponding values from the current build. The build log shows that the build step has in fact been inserted. The problem, as you probably spotted, is that the build step is inserted at the top of the build log instead of next to the MSBuild activity call. This is because we are using a legacy Team Build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that use the UpgradeTemplate: custom build steps show up at the top of the build log.
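    For reference, an expression along these lines (VB workflow syntax; the exact property paths may differ between TFS versions, so treat this as a sketch) is the kind of value that goes into the CommandLineArguments parameter:

        String.Format("/p:TeamFoundationServerUrl={0} /p:BuildUri={1}", BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString(), BuildDetail.Uri.ToString())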


  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see just what is going on under the hood of your browser.

    How Much Memory are the Extensions Using? Here is our test browser with a new tab and the Extensions page open, five enabled extensions, and one disabled at the moment. You can access Chrome's Task Manager using the Page Menu, going to Developer, and selecting Task manager…, or by right-clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the "keyboard ninjas". Sitting idle as shown above, here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions page. Not so good… If the default layout is not to your liking, you can easily modify the information that is available by right-clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory.

    Using the about:memory Page to View Memory Usage. Want even more detail? Type about:memory into the Address Bar and press Enter. Note: you can also access this page by clicking on the "Stats for nerds" link in the lower left corner of the Task Manager window. Focusing on the four distinct areas, you can see the exact version of Chrome that is currently installed on your system… view the Memory & Virtual Memory statistics for Chrome… (note: if you have other browsers running at the same time, you can view statistics for them here too), see a list of the processes currently running… and the Memory & Virtual Memory statistics for those processes.

    The Difference with the Extensions Disabled. Just for fun we decided to disable all of the extensions in our test browser… The Task Manager window is looking rather empty now, but the memory consumption has definitely seen an improvement.

    Comparing Memory Usage for Two Extensions with Similar Functions. For our next step we decided to compare the memory usage for two extensions with similar functionality. This can be helpful if you want to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial (see our review here). The stats for Speed Dial… quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly, both were nearly identical in the amount of memory being used.

    Purging Memory. Perhaps you like the idea of being able to "purge" some of that excess memory consumption. With a simple command-switch modification to Chrome's shortcut(s) you can add a Purge Memory button to the Task Manager window as shown below. Notice the amount of memory being consumed at the moment… Note: the tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption.

    Conclusion. We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources, every little bit definitely helps out.
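    For reference, the command-line switch used for the Purge Memory button at the time was --purge-memory-button (it has since been removed from Chrome), appended to the shortcut target like so:

        chrome.exe --purge-memory-button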


  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits. This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. Two hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified that a set of changes works, save it away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task. By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles, and I think you will find what is comfortable for you.

    Partial commits. Sometimes one task explodes or unknowingly encompasses other tasks. At this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way and focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide. If you don't commit often, it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side, and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the currently selected file, and a commit message, all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits. There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often, you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory refactoring. It's much easier if I can just revert all outstanding changes (see the examples after this list).

    Sync with the central repository daily. The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chance of merge conflicts, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes. If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't committing frequently, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.

    Committing single files. Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work; it should be spaced out over the course of several tasks, not all at the end in a 5-minute window.
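    In concrete terms, the throw-away/stash operations mentioned above look like this (Git and Mercurial equivalents; hg shelve requires the shelve extension):

        git stash            # set aside all outstanding changes (Git)
        git checkout -- .    # or discard them outright
        hg shelve            # set changes aside (Mercurial, shelve extension)
        hg revert --all      # discard all outstanding changes (Mercurial)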


  • Oracle Database 12c new feature: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c introduces a set of storage enhancements under the Information Lifecycle Management (ILM) umbrella, the centerpiece being Automatic Data Optimization (ADO, also described as Automatic Data Placement). Separately, 12c can now move a datafile online (Online Move Datafile), relocating it while the database remains available.

    In 12.1.0.1, Automatic Data Optimization and Heat Map have the following restrictions:

        A multitenant container database (CDB) does not support Automatic Data Optimization and Heat Map.
        Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
        Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
        Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
        ADO does not perform checks for storage space in a target tablespace when using storage tiering.
        ADO is not supported on tables with object types or materialized views.
        ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler.
        If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
        Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
        ADO has restrictions related to moving tables and table partitions.

    ADO policies are defined at the row or segment level and are attached to a table in CREATE TABLE or ALTER TABLE statements. A policy describes a compression or storage-tiering action and the condition (for example, a period of no access or no modification) that triggers it:

        CREATE TABLE sales_ado
          (PROD_ID NUMBER NOT NULL,
           CUST_ID NUMBER NOT NULL,
           TIME_ID DATE NOT NULL,
           CHANNEL_ID NUMBER NOT NULL,
           PROMO_ID NUMBER NOT NULL,
           QUANTITY_SOLD NUMBER(10,2) NOT NULL,
           AMOUNT_SOLD NUMBER(10,2) NOT NULL)
        ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
        AFTER 6 MONTHS OF NO ACCESS;

        SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
          2  FROM USER_ILMPOLICIES;

        POLICY_NAME          POLICY_TYPE                ENABLED
        -------------------- -------------------------- --------------
        P41                  DATA MOVEMENT              YES

        ALTER TABLE sales MODIFY PARTITION sales_1995
        ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
        AFTER 6 MONTHS OF NO ACCESS;

        SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
        FROM USER_ILMPOLICIES;

        POLICY_NAME              POLICY_TYPE   ENABLE
        ------------------------ ------------- ------
        P1                       DATA MOVEMENT YES
        P2                       DATA MOVEMENT YES

        /* You can disable an ADO policy with the following */
        ALTER TABLE sales_ado ILM DISABLE POLICY P1;

        /* You can delete an ADO policy with the following */
        ALTER TABLE sales_ado ILM DELETE POLICY P1;

        /* You can disable all ADO policies with the following */
        ALTER TABLE sales_ado ILM DISABLE_ALL;

        /* You can delete all ADO policies with the following */
        ALTER TABLE sales_ado ILM DELETE_ALL;

        /* You can disable an ADO policy in a partition with the following */
        ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;

        /* You can delete an ADO policy in a partition with the following */
        ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

    ADO decisions are driven by activity tracking, which comes in two flavors: SEGMENT-LEVEL tracking, which records access statistics for a whole segment, and ROW-LEVEL tracking, which records modification times per row. Examples:

    1. Enable segment-level activity tracking:

        ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS;

    This enables segment-level activity tracking on the INTERVAL_SALES table.

    2. Track creation time and write time:

        ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME, WRITE TIME);

    3. Track read time:

        ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

    In 12.1.0.1 the HEAT_MAP initialization parameter governs this tracking; it can be set at the system or session level. To enable Heat Map:

        ALTER SYSTEM SET HEAT_MAP = ON;

    With Heat Map enabled, the database records access information for segments and rows; objects in the SYSTEM and SYSAUX tablespaces are not tracked. To disable it again:

        ALTER SYSTEM SET HEAT_MAP = OFF;

    The HEAT_MAP parameter also controls Automatic Data Optimization (ADO): for ADO to work, Heat Map must be enabled. The V$HEAT_MAP_SEGMENT view shows the information being collected:

        SQL> select * from v$heat_map_segment;

        no rows selected

        SQL> alter session set heat_map=on;

        Session altered.

        SQL> select * from scott.emp;

        EMPNO ENAME  JOB       MGR  HIREDATE   SAL  COMM DEPTNO
        ----- ------ --------- ---- --------- ---- ----- ------
         7369 SMITH  CLERK     7902 17-DEC-80  800          20
         7499 ALLEN  SALESMAN  7698 20-FEB-81 1600   300    30
         7521 WARD   SALESMAN  7698 22-FEB-81 1250   500    30
         7566 JONES  MANAGER   7839 02-APR-81 2975          20
         7654 MARTIN SALESMAN  7698 28-SEP-81 1250  1400    30
         7698 BLAKE  MANAGER   7839 01-MAY-81 2850          30
         7782 CLARK  MANAGER   7839 09-JUN-81 2450          10
         7788 SCOTT  ANALYST   7566 19-APR-87 3000          20
         7839 KING   PRESIDENT      17-NOV-81 5000          10
         7844 TURNER SALESMAN  7698 08-SEP-81 1500     0    30
         7876 ADAMS  CLERK     7788 23-MAY-87 1100          20
         7900 JAMES  CLERK     7698 03-DEC-81  950          30
         7902 FORD   ANALYST   7566 03-DEC-81 3000          20
         7934 MILLER CLERK     7782 23-JAN-82 1300          10

        14 rows selected.

        SQL> select * from v$heat_map_segment;

        OBJECT_NAME  SUBOBJECT_NAME  OBJ#   DATAOBJ#  TRACK_TIM  SEG SEG FUL LOO CON_ID
        ------------ --------------- ------ --------- ---------- --- --- --- --- ------
        EMP                          92997  92997     23-JUL-13  NO  NO  YES NO  0

    V$HEAT_MAP_SEGMENT is based on the X$HEATMAPSEGMENT fixed table and displays real-time segment access information. Its columns:

        OBJECT_NAME    VARCHAR2(128)  Name of the object
        SUBOBJECT_NAME VARCHAR2(128)  Name of the subobject
        OBJ#           NUMBER         Object number
        DATAOBJ#       NUMBER         Data object number
        TRACK_TIME     DATE           Timestamp of current activity tracking
        SEGMENT_WRITE  VARCHAR2(3)    Whether the segment has write access (YES or NO)
        SEGMENT_READ   VARCHAR2(3)    Whether the segment has read access (YES or NO)
        FULL_SCAN      VARCHAR2(3)    Whether the segment has full table scan (YES or NO)
        LOOKUP_SCAN    VARCHAR2(3)    Whether the segment has lookup scan (YES or NO)
        CON_ID         NUMBER         ID of the container to which the data pertains: 0 for rows containing data that pertain to the entire CDB (also used for non-CDBs), 1 for rows that pertain to only the root, n for the applicable container ID. The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.

    Accumulated Heat Map data can also be queried through the DBA_HEAT_MAP_SEGMENT view, which is persisted in the HEAT_MAP_STAT$ base table.

    A worked example of Automatic Data Optimization, use case 1. First enable Heat Map and install the demo SCOTT schema (http://www.askmaclean.com/archives/scott-schema-script.html):

        SQL> alter system set heat_map=on;

        System altered.

        SQL> grant all on dbms_lock to scott;

        Grant succeeded.

        SQL> grant dba to scott;

        Grant succeeded.

        @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
        @tktgilm_demo_env_setup

        SQL> connect scott/tiger
        Connected.

        SQL> select count(*) from scott.employee;

          COUNT(*)
        ----------
              3072

        1 row selected.

        SQL> set serveroutput on
        SQL> exec print_compression_stats('SCOTT','EMPLOYEE');

        Compression Stats
        ------------------
        Uncmpressed          : 3072
        Adv/basic compressed : 0
        Others               : 0

        PL/SQL procedure successfully completed.

    The table starts out with 3072 uncompressed rows. Now add a row-level ADO policy that applies advanced row compression after 3 days without modification:

        alter table employee ilm add policy
          row store compress advanced row
          after 3 days of no modification
        /

        SQL> set serveroutput on
        SQL> execute list_ilm_policies;

        --------------------------------------------------
        Policies defined for SCOTT
        --------------------------------------------------
        Object Name------ : EMPLOYEE
        Subobject Name--- :
        Object Type------ : TABLE
        Inherited from--- : POLICY NOT INHERITED
        Policy Name------ : P1
        Action Type------ : COMPRESSION
        Scope------------ : ROW
        Compression level : ADVANCED
        Tier Tablespace-- :
        Condition type--- : LAST MODIFICATION TIME
        Condition days--- : 3
        Enabled---------- : YES
        --------------------------------------------------

        PL/SQL procedure successfully completed.

        SQL> select sysdate from dual;

        SYSDATE
        --------------
        29-JUL-13

        SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);

        Object check time reset ...
        --------------------------------------
        Object Name    : EMPLOYEE
        Object Number  : 93123
        D.Object Numbr : 93123
        Policy Number  : 1
        Object chktime : 23-JUL-13 08.13.42.000000 AM
        Distnt chktime : 0
        --------------------------------------

        PL/SQL procedure successfully completed.

    The set_back_chktime helper winds the policy's check time back 6 days, so we can "time travel" instead of actually waiting three days for the condition to become true. Then flush the caches and open a maintenance window so the ADO background jobs run:

        alter system flush buffer_cache;
        alter system flush buffer_cache;
        alter system flush shared_pool;
        alter system flush shared_pool;

        SQL> execute set_window('MONDAY_WINDOW','OPEN');

        Set Maint. Window OPEN
        -----------------------------
        Window Name : MONDAY_WINDOW
        Enabled?    : TRUE
        Active?     : TRUE
        -----------------------------

        PL/SQL procedure successfully completed.

        SQL> exec dbms_lock.sleep(60);

        PL/SQL procedure successfully completed.

        SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');

        Compression Stats
        ------------------
        Uncmpressed          : 338
        Adv/basic compressed : 2734
        Others               : 0

        PL/SQL procedure successfully completed.

    Most of the rows have now been compressed (Adv/basic compressed : 2734). The policy execution itself can be inspected:

        SQL> col object_name for a20
        SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';

         OBJECT_ID OBJECT_NAME
        ---------- --------------------
             93123 EMPLOYEE

        SQL> execute list_ilm_policy_executions;

        --------------------------------------------------
        Policies execution details for SCOTT
        --------------------------------------------------
        Policy Name------ : P22
        Job Name--------- : ILMJOB48
        Start time------- : 29-JUL-13 08.37.45.061000 AM
        End time--------- : 29-JUL-13 08.37.48.629000 AM
        -----------------
        Object Name------ : EMPLOYEE
        Sub_obj Name----- :
        Obj Type--------- : TABLE
        -----------------
        Exec-state------- : SELECTED FOR EXECUTION
        Job state-------- : COMPLETED SUCCESSFULLY
        Exec comments---- :
        Results comments- : ---
        --------------------------------------------------

        PL/SQL procedure successfully completed.

    ILMJOB48 is the scheduler job that executed the policy; in 12.1.0.1 these run as J00x processes, while the MMON slave (M00x) processes evaluate ADO policies roughly every 15 minutes, visible in ASH as "KDILM background EXEcution":

        select sample_time,program,module,action
          from v$active_session_history
         where action ='KDILM background EXEcution'
         order by sample_time;

        29-JUL-13 08.16.38.369000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.17.38.388000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.17.39.390000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.23.38.681000000 AM ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.32.38.968000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.33.39.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.33.40.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.36.40.066000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.37.42.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.37.43.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.37.44.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
        29-JUL-13 08.38.42.386000000 AM ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution

        select distinct action from v$active_session_history where action like 'KDILM%';

        KDILM background CLeaNup
        KDILM background EXEcution

    Finally, close the maintenance window and clean up:

        SQL> execute set_window('MONDAY_WINDOW','CLOSE');

        Set Maint. Window CLOSE
        -----------------------------
        Window Name : MONDAY_WINDOW
        Enabled?    : TRUE
        Active?     : FALSE
        -----------------------------

        PL/SQL procedure successfully completed.

        SQL> drop table employee purge;

        Table dropped.

        spool ilm_usecase_1_cleanup.lst
        @ilm_demo_cleanup
        spool off


  • Core Data @sum aggregate

    - by nasim
    I am getting an exception when I try to get @sum on a column in an iPhone Core Data application. My two models are the following - Task model:

        @interface Task : NSManagedObject {
        }
        @property (nonatomic, retain) NSString * taskName;
        @property (nonatomic, retain) NSSet* completion;
        @end

        @interface Task (CoreDataGeneratedAccessors)
        - (void)addCompletionObject:(NSManagedObject *)value;
        - (void)removeCompletionObject:(NSManagedObject *)value;
        - (void)addCompletion:(NSSet *)value;
        - (void)removeCompletion:(NSSet *)value;
        @end

    Completion model:

        @interface Completion : NSManagedObject {
        }
        @property (nonatomic, retain) NSNumber * percentage;
        @property (nonatomic, retain) NSDate * time;
        @property (nonatomic, retain) Task * task;
        @end

    And here is the fetch:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:context];
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"taskName" ascending:YES];
        request.sortDescriptors = [NSArray arrayWithObject:sortDescriptor];
        NSError *error;
        NSArray *results = [context executeFetchRequest:request error:&error];
        NSArray *parents = [results valueForKeyPath:@"taskName"];
        NSArray *children = [results valueForKeyPath:@"completion.@sum.percentage"];
        NSLog(@"%@ %@", parents, children);
        [request release];
        [sortDescriptor release];

    The exception is thrown at the fourth line from the bottom. The thrown exception is:

        *** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30
        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30'

    I would very much appreciate any kind of help. Thanks.
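    The crash happens because the key path is evaluated against the array of Task objects: "completion" yields a collection of NSSets, and @sum then tries to coerce each set (rather than each Completion) into a number, hence -[NSCFSet decimalValue]. A sketch of summing per task instead:

        // apply @sum to each task's NSSet of Completion objects,
        // not to the array of sets as a whole
        NSMutableArray *children = [NSMutableArray array];
        for (Task *t in results) {
            [children addObject:[t.completion valueForKeyPath:@"@sum.percentage"]];
        }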


  • Java problem with multiple threads when executing a runnable jar file

    - by Spi1988
    I have developed a Java Swing application which uses the SwingWorker class to perform some long-running tasks. When the application is run from the IDE (NetBeans), I can start multiple long-running tasks simultaneously without any problem. I created a runnable jar file for the application, in order to be able to run it from outside the IDE. The application, when run from this jar file, works well with the only exception that it doesn't allow me to start 2 long-running tasks simultaneously. When I start the first task (assume it takes 2 minutes to complete), everything works fine and the UI does not freeze (it never freezes). However, when I try to run another task (assume it takes just 10 seconds, so it should finish before the first task) while the first task has not yet completed, nothing seems to happen. In reality, the second task will have started, and also finished its processing, but its results are only displayed once the first task completes. I don't know why this is happening. Is there some restriction on the number of threads that can run simultaneously on the JVM? Are there any JVM arguments I could try to solve this problem? I hope I explained my problem well. Thanks in advance, Peter Bartolo
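    This symptom matches the known Java 6 issue where SwingWorker's internal executor ends up running workers one at a time (Sun bug 6880336). One workaround, sketched here, is to bypass that executor: SwingWorker implements RunnableFuture, so it can be submitted to a pool you control (the worker names are placeholders):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.submit(longRunningWorker);   // the 2-minute task
        pool.submit(shortRunningWorker);  // the 10-second task, now truly concurrent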


  • .Net Remoting: Serialize Object and implementation

    - by flogo
    Hi, in my scenario there is a client-side assembly that contains a class (Task). This class implements an interface (ITask) that is known on the server. I'm trying to send a Task object from client to server without copying the client's assembly to the server manually. If I just serialize the Task object, the server obviously complains about the missing assembly. I then tried to serialize typeof(Task).Assembly, but could not deserialize it on the server. Next I tried File.ReadAllBytes(typeof(Task).Assembly.Location) and saved it to a temporary file on the server, which threw an exception on Assembly.LoadFrom(@".\temporary.dll"). Why am I doing this? Java RMI has a neat feature to request the implementation of an object that is received through remoting but is still "unknown" (this JVM doesn't have the *.class file). This can be used for a compute server that just knows the interface of a "task" containing a run() method and downloads the implementation of this method on demand. This way the server doesn't have to be changed for new tasks. I'm trying to achieve something like this in .NET.
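    One avenue worth trying (a sketch; the type name string is an assumption) is to skip the temporary file entirely and load the transferred bytes straight into the server's AppDomain with Assembly.Load(byte[]):

        // client side: read the assembly that defines Task
        byte[] asmBytes = File.ReadAllBytes(typeof(Task).Assembly.Location);
        // ... send asmBytes to the server over the remoting channel ...

        // server side: load from memory, no file on disk needed
        Assembly asm = Assembly.Load(asmBytes);
        ITask task = (ITask)asm.CreateInstance("ClientAssembly.Task"); // name is illustrative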


  • In threads, WaitForMultipleObjects never returns if set to INFINITE

    - by AKN
    Let's say I have three thread handles: HandleList[0] = hThread1; HandleList[1] = hThread2; HandleList[2] = hThread3; /*All the above are of type HANDLE*/ Before closing the application, I want the threads to get their tasks done, so I want to make the app wait until the threads complete. So I do: WaitForMultipleObjects(3, HandleList, TRUE, INFINITE); With this the threads complete their work, but control never moves to the line after the call to WaitForMultipleObjects, even though all the threads have completed. If I use some number of seconds instead of INFINITE, control reaches the next line after that many seconds, irrespective of whether the threads have completed or not: WaitForMultipleObjects(3, HandleList, TRUE, 10000); My problem here is that I can't use a fixed timeout, as I can't be sure the threads will complete their tasks within the given time. To state my problem in simple words: I want all my threads to finish their tasks before I close my app. How can I achieve that using the WaitForMultipleObjects API?
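    If this wait runs on the GUI thread, the classic cause is that WaitForMultipleObjects blocks the message pump, so a worker that needs the UI thread (for example via SendMessage) waits on it forever. A sketch of waiting while still pumping messages:

        // wait for all three threads without starving the message loop
        HANDLE handles[3] = { hThread1, hThread2, hThread3 };
        DWORD remaining = 3;
        while (remaining > 0) {
            DWORD r = MsgWaitForMultipleObjects(remaining, handles, FALSE,
                                                INFINITE, QS_ALLINPUT);
            if (r >= WAIT_OBJECT_0 && r < WAIT_OBJECT_0 + remaining) {
                // one thread finished: drop its handle from the array
                handles[r - WAIT_OBJECT_0] = handles[remaining - 1];
                --remaining;
            } else if (r == WAIT_OBJECT_0 + remaining) {
                MSG msg;  // pump pending messages so workers aren't blocked
                while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
            }
        }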


  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
        INTO [#Gadget]
        FROM task t

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
        FROM [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery because I don't think it's relevant, other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID)
        FROM (
            SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
            FROM task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what is going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs. under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries should always be faster than using temporary tables. Can anyone explain what could be going on?


  • where to store temporary data in MVC 2.0 project

    - by StuffHappens
    Hello! I'm starting to learn MVC 2.0 and I'm trying to create a site with a quiz: the user is asked a question and given several answer options. If he chooses the right answer he gets some points; if he doesn't, he loses them. I tried to do this the following way:

        public class HomeController : Controller
        {
            private ITaskGenerator taskGenerator = new TaskGenerator();
            private string correctAnswer;

            public ActionResult Index()
            {
                var task = taskGenerator.GenerateTask();
                ViewData["Task"] = task.Task;
                ViewData["Options"] = task.Options;
                correctAnswer = task.CorrectAnswer;
                return View();
            }

            public ActionResult Answer(string id)
            {
                if (id == correctAnswer)
                    return View("Correct");
                return View("Incorrect");
            }
        }

    But I have a problem: when the user answers, the controller class is recreated and I lose the correct answer. So what is the best place to store the correct answer? Should I create a static class for this purpose? Thanks for your help!
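    A common place for this kind of short-lived, per-user state in ASP.NET MVC 2 is TempData (backed by session state and cleared once read) or the Session itself. A sketch of the TempData variant:

        public ActionResult Index()
        {
            var task = taskGenerator.GenerateTask();
            ViewData["Task"] = task.Task;
            ViewData["Options"] = task.Options;
            TempData["CorrectAnswer"] = task.CorrectAnswer; // survives to the next request
            return View();
        }

        public ActionResult Answer(string id)
        {
            var correct = TempData["CorrectAnswer"] as string;
            return View(id == correct ? "Correct" : "Incorrect");
        }

    A static class, by contrast, would be shared across all users of the application, so it would break as soon as two people take the quiz at once.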


  • Celery / Django Single Tasks being run multiple times

    - by felix001
    I'm facing an issue where I'm placing a task into the queue and it is being run multiple times. From the celery logs I can see that the same worker is running the task...

        [2014-06-06 15:12:20,731: INFO/MainProcess] Received task: input.tasks.add_queue
        [2014-06-06 15:12:20,750: INFO/Worker-2] starting runner..
        [2014-06-06 15:12:20,759: INFO/Worker-2] collection started
        [2014-06-06 15:13:32,828: INFO/Worker-2] collection complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] generation of steps complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] update created
        [2014-06-06 15:13:33,655: INFO/Worker-2] email sent
        [2014-06-06 15:13:33,656: INFO/Worker-2] update created
        [2014-06-06 15:13:34,420: INFO/Worker-2] email sent
        [2014-06-06 15:13:34,421: INFO/Worker-2] FINISH - Success

    However, when I view the actual logs of the application, it shows 5-6 log lines for each step (??). I'm using Django 1.6 with RabbitMQ. The task is placed into the queue by calling delay on a function. This function (with the task decorator added) then calls a class which is run. Has anyone any idea on the best way to troubleshoot this? Edit: as requested, here's the code. views.py - in my view I'm sending my data to the queue via...

        from input.tasks import add_queue_project

        add_queue_project.delay(data)

    tasks.py

        from celery.decorators import task

        @task()
        def add_queue_project(data):
            """ run project """
            logger = logging_setup(app="project")
            logger.info("starting project runner..")
            f = project_runner(data)
            f.main()

        class project_runner():
            """ main project runner """

            def __init__(self, data):
                self.data = data
                self.logger = logging_setup(app="project")

            def main(self):
                ....

    settings.py

        THIRD_PARTY_APPS = (
            'south',          # Database migration helpers:
            'crispy_forms',   # Form layouts
            'rest_framework',
            'djcelery',
        )

        import djcelery
        djcelery.setup_loader()

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672  # default RabbitMQ listening port
        BROKER_USER = "test"
        BROKER_PASSWORD = "test"
        BROKER_VHOST = "test"
        CELERY_BACKEND = "amqp"  # telling Celery to report the results back to RabbitMQ
        CELERY_RESULT_DBURI = ""
        CELERY_IMPORTS = ("input.tasks", )

    celeryd - the line I'm running to start celery is:

        python2.7 manage.py celeryd -l info

    Thanks,
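    Duplicate runs like this usually come from the task being consumed more than once - for example, two celeryd processes running against the same vhost, or the task module being registered under two import paths. While tracking that down, a common defensive sketch is a cache-based lock so a duplicate delivery becomes a no-op (the lock key shown is an assumption about the data):

        from django.core.cache import cache

        @task()
        def add_queue_project(data):
            lock_id = "add_queue_project-%s" % data.get("id", "global")
            # cache.add is atomic: it fails if the key already exists
            if not cache.add(lock_id, "locked", 60 * 10):
                return  # another worker already took this job
            try:
                f = project_runner(data)
                f.main()
            finally:
                cache.delete(lock_id)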


  • Which way is preferred when doing asynchronous WCF calls?

    - by Mikael Svenson
    When invoking a WCF service asynchronously there seem to be two ways it can be done.

    1.

        public void One()
        {
            WcfClient client = new WcfClient();
            client.BegindoSearch("input", ResultOne, null);
        }

        private void ResultOne(IAsyncResult ar)
        {
            WcfClient client = new WcfClient();
            string data = client.EnddoSearch(ar);
        }

    2.

        public void Two()
        {
            WcfClient client = new WcfClient();
            client.doSearchCompleted += TwoCompleted;
            client.doSearchAsync("input");
        }

        void TwoCompleted(object sender, doSearchCompletedEventArgs e)
        {
            string data = e.Result;
        }

    And with the new Task<T> class we have an easy third way, by wrapping the synchronous operation in a task.

    3.

        public void Three()
        {
            WcfClient client = new WcfClient();
            var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
            string data = task.Result;
        }

    They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control over what you execute before or after the result is retrieved. Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is preferable?
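    One caveat with option 3 is that it occupies a thread-pool thread for the duration of the blocking doSearch call. A sketch of a variant that keeps the Task<T> programming model but stays truly asynchronous, by wrapping the existing Begin/End pair with Task.Factory.FromAsync:

        public void Four()
        {
            WcfClient client = new WcfClient();
            Task<string> task = Task<string>.Factory.FromAsync(
                client.BegindoSearch, client.EnddoSearch, "input", null);
            task.ContinueWith(t => Console.WriteLine(t.Result)); // no thread blocked waiting
        }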


  • Identifying a class which is extending an abstract class

    - by Simon A. Eugster
    Good evening, I'm doing a major refactoring of http://wiki2xhtml.sourceforge.net/ to finally get better overview and maintainability. (I started the project when I decided to start programming, so – you get it, right? ;)) At the moment I wonder how to solve the following problem: every file will be put through several parsers (like one for links, one for tables, one for images, etc.):

        public class WikiLinks extends WikiTask { ... }
        public class WikiTables extends WikiTask { ... }

    The files will then be parsed roughly this way:

        public void parse() {
            if (!parse) return;
            WikiTask task = new WikiLinks();
            do {
                task.parse(this);
            } while ((task = task.nextTask()) != null);
        }

    Sometimes I may want to use no parser at all (for files that only need to be copied), or only a chosen one (e.g. for testing purposes). So before running task.parse() I need to check whether this particular parser is actually necessary/desired (perhaps via a blacklist or whitelist). What would you suggest for comparing? An ID for each WikiTask (how would I do that)? Comparing the task object itself against a new instance of a WikiTask (overhead)?
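    One option that needs neither hand-rolled IDs nor throwaway instances is to compare classes directly: each parser's Class object already is a unique identifier. A sketch of a whitelist built that way:

        import java.util.HashSet;
        import java.util.Set;

        // Class objects serve as IDs; getClass() comparison is exact
        Set<Class<? extends WikiTask>> enabled = new HashSet<Class<? extends WikiTask>>();
        enabled.add(WikiLinks.class);
        enabled.add(WikiTables.class);

        WikiTask task = new WikiLinks();
        do {
            if (enabled.contains(task.getClass())) {
                task.parse(this);
            }
        } while ((task = task.nextTask()) != null);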


  • Adding to the DOM via Prototype works in Chrome but not Firefox?

    - by zaczap
    I've been working on some JavaScript code to add rows to a table dynamically (a small task management system), and it works perfectly in Chrome but not in Firefox. Code in question:

        var task = new Element('tr', {id:arg});
        task.innerHTML = "<td class='notes'>asd</td><td class='check'>*</td>";
        //task.innerHTML = "<td class='notes'>&nbsp;</td><td class='check'><input type='checkbox' onclick=\"javascript:complete('"+task.id+"')\" /></td><td class='description'>asd</td><td class='start'>&nbsp;</td><td class='due'></td>";
        $('tasks').insert(task);
        // the commented line above is what the code was originally that does work in chrome

    When I look at the HTML model in the Firefox debugger, all that is added is:

        <tr id="arg"><td>asd*</td></tr>

    Figuring that Chrome might be better at interpreting innerHTML into DOM elements than Firefox, I changed the code to make td elements and add them to my tr element, but that didn't improve the situation at all.
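    For reference, a version of the DOM-building approach using Prototype's own helpers looks like this (a sketch; it assumes the table has a <tbody> to descend into):

        var task = new Element('tr', { id: arg });
        task.insert(new Element('td', { 'class': 'notes' }).update('asd'));
        task.insert(new Element('td', { 'class': 'check' }).update('*'));
        // rows must land inside a <tbody>, which some engines are strict about
        $('tasks').down('tbody').insert(task);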


  • Exclude nodes based on attribute wildcard in XSL node selection

    - by C A
    Using CruiseControl for continuous integration, I have some annoyances with the WebLogic Ant tasks: they treat server debug information as warnings rather than debug output, so it ends up in my build report emails. The XML output from cruise is similar to:

        <cruisecontrol>
          <build>
            <target name="compile-xxx">
              <task name="xxx" />
            </target>
            <target name="xxx.weblogic">
              <task name="wldeploy">
                <message priority="warn">Message which isn't really a warning"</message>
              </task>
            </target>
          </build>
        </cruisecontrol>

    In the CruiseControl XSL template the current selection for the task list is:

        <xsl:variable name="tasklist" select="/cruisecontrol/build//target/task"/>

    What I would like is something which selects the task list in the same way, but doesn't include any target nodes which have the attribute name="*weblogic", where * is a wildcard. I have tried

        <xsl:variable name="tasklist" select="/cruisecontrol/build//target[@name!='*weblogic']/task"/>

    but this doesn't seem to have worked. I'm not an expert with XSLT, and just want to get this fixed so I can carry on with the real development of the project. Any help is much appreciated.
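    XPath 1.0 has no wildcard matching inside !=, which is why that attempt compares against the literal string '*weblogic'. Two sketches that express "name does not contain / does not end with weblogic":

        <!-- simplest, if a "contains" test is acceptable -->
        <xsl:variable name="tasklist"
            select="/cruisecontrol/build//target[not(contains(@name,'weblogic'))]/task"/>

        <!-- a strict ends-with test built from substring() ('weblogic' is 8 characters) -->
        <xsl:variable name="tasklist"
            select="/cruisecontrol/build//target[substring(@name, string-length(@name) - 7) != 'weblogic']/task"/>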


  • Logic in the db for maintaining a points system relationship?

    - by MarcusBooster
    I'm making a little web-based game and need to determine where to put logic that checks the integrity of some underlying data in the SQL database. Each user keeps track of points assigned to him, and points are awarded by various tasks. I keep a record of each task transaction to make sure they're not repeated, and to keep track of the value of the task at the time of completion, since an individual award level may fluctuate over time. My schema looks like this so far:

        create table player (
            player_ID serial primary key,
            player_Points int not null default 0
        );

        create table task (
            task_ID serial primary key,
            task_PointsAwarded int not null
        );

        create table task_list (
            player_ID int references player(player_ID),
            task_ID int references task(task_ID),
            when_completed timestamp default current_timestamp,
            point_value int not null, --not fk because task value may change later
            constraint pk_player_task_id primary key (player_ID, task_ID)
        );

    So player.player_Points should be the total of all his cumulative task points in task_list. Now where do I put the logic to enforce this? Should I do away with player.player_Points altogether and do queries every time I want to know the total score? That seems wasteful, since I'll be doing that query a lot over the course of a game. Or should I put a trigger on task_list that automatically updates player.player_Points? Is that too much logic to have in the database, and should I just maintain this relationship in the application? Thanks.
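    For what the trigger option would look like, here is a sketch in PostgreSQL (matching the serial columns above):

        -- keep player.player_Points in sync with inserts into task_list
        CREATE OR REPLACE FUNCTION apply_task_points() RETURNS trigger AS $$
        BEGIN
            UPDATE player
               SET player_Points = player_Points + NEW.point_value
             WHERE player_ID = NEW.player_ID;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER task_list_points
        AFTER INSERT ON task_list
        FOR EACH ROW EXECUTE PROCEDURE apply_task_points();

    A matching AFTER DELETE trigger (subtracting OLD.point_value) would be needed if completed tasks can ever be revoked.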


  • Write out to text file using T-SQL

    - by sasfrog
    I am creating a basic data-transfer task using T-SQL, where I retrieve certain records from one database that are more recent than a given datetime value and load them into another database. This will happen periodically throughout the day. It's such a small task that SSIS seems like overkill - I want to just use a scheduled task which runs a .sql file. Where I need guidance is that I need to persist the datetime from the last run of this task, then use this to filter the records the next time the task runs. My initial thought is to just store the datetime in a text file, and update (overwrite) it as part of the task each time it runs. I can read the file in without problems using T-SQL, but writing back out has got me stuck. I've seen plenty of examples which make use of a dynamically built bcp command, which is then executed using xp_cmdshell. Trouble is, security on the server I'm deploying to precludes the use of xp_cmdshell. So, my question is: are there other ways to simply write a datetime value to a file using T-SQL, or should I be thinking about a different approach? EDIT: happy to be corrected about SSIS being "overkill"...
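    One different approach that avoids the filesystem (and xp_cmdshell) entirely is to keep the watermark in a small control table. A sketch (the ModifiedDate column is a stand-in for whatever marks a row as recent):

        -- one-time setup
        CREATE TABLE dbo.TransferControl (LastRun datetime NOT NULL);
        INSERT INTO dbo.TransferControl VALUES ('1900-01-01');

        -- inside the scheduled .sql file
        DECLARE @since datetime, @now datetime;
        SELECT @since = LastRun FROM dbo.TransferControl;
        SET @now = GETDATE();

        -- ... transfer rows WHERE ModifiedDate > @since AND ModifiedDate <= @now ...

        UPDATE dbo.TransferControl SET LastRun = @now;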


  • Loop through Array with conditional output based on key/value pair

    - by Daniel C
    I have an array with the following columns: Task, Status. I would like to print out a table that shows a list of the Tasks, but not the Status column. Instead, for Tasks where the Status = 0, I want to add a <del> tag to make the completed task be crossed out. Here's my current code:

        foreach ($row as $key => $val){
            if ($key != 'Status')
                print "<td>$val</td>";
            else if ($val == '0')
                print "<td><del>$val</del></td>";
        }

    This seems to be correct, but when I print it out, it prints all the tasks with the <del> tag. So basically the "else" clause is being run every time. Here is the var_dump($row):

        array
          'Task' => string 'Task A' (length=6)
          'Status' => string '3' (length=1)
        array
          'Task' => string 'Task B' (length=6)
          'Status' => string '0' (length=1)
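    The root issue is that the loop branches per column: the Task column always prints plainly (its key is not 'Status'), and the else branch then wraps the Status value, not the task name. A sketch of the intended logic, deciding how to render the Task based on the same row's Status:

        // one cell per task; nothing is emitted for the Status column itself
        foreach ($rows as $row) {
            if ($row['Status'] == '0') {
                print "<td><del>{$row['Task']}</del></td>";
            } else {
                print "<td>{$row['Task']}</td>";
            }
        }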


  • Merging rows with uniqueness constraints

    - by Flambino
    I've got a little time-tracking web app (implemented in Rails 3.2.8 & MySQL). The app has several users who add their time to specific tasks, on a given date. The system is set up so a user can only have 1 time entry (i.e. row) per task per date. I.e. if you add time twice on the same task and date, it'll add time to the existing row rather than create a new one. Now I'm looking to merge 2 tasks. In the simplest terms, merging task ID 2 into task ID 1 would take this

        time | user_id | task_id | date
        -----+---------+---------+-----------
          10 |       1 |       1 | 2012-10-29
          15 |       2 |       1 | 2012-10-29
          10 |       1 |       2 | 2012-10-29
           5 |       3 |       2 | 2012-10-29

    and change it into this

        time | user_id | task_id | date
        -----+---------+---------+-----------
          20 |       1 |       1 | 2012-10-29  <-- time values merged (summed)
          15 |       2 |       1 | 2012-10-29  <-- no change
           5 |       3 |       1 | 2012-10-29  <-- task_id changed (no merging necessary)

    I.e. merge by summing the time values where the given user_id/date/task combo would conflict. I figure I can use a unique constraint to do an ON DUPLICATE KEY UPDATE ... if I do an insert for every task_id=2 entry. But that seems pretty inelegant. I've also tried to figure out a way to first update all the rows in task 1 with the summed-up times, but I can't quite figure that one out. Any ideas?
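    The insert-based idea can at least be done in two set-based statements rather than row by row. A sketch (the table name entries is an assumption), relying on the user_id/task_id/date unique constraint:

        -- re-insert task 2's rows as task 1; conflicts collapse by summing
        INSERT INTO entries (time, user_id, task_id, date)
            SELECT time, user_id, 1, date FROM entries WHERE task_id = 2
        ON DUPLICATE KEY UPDATE time = time + VALUES(time);

        -- then remove the now-merged task 2 rows
        DELETE FROM entries WHERE task_id = 2;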


  • Keep Track of Your Tasks with toDoo

    - by Asian Angel
    A tasks list can be convenient, but most of the time you cannot include details for those tasks, or you have to have an online account to do so. If you want to keep your tasks list with you on your computer or laptop, and be able to add plenty of details, then you might want to look at toDoo. Note: requires Adobe AIR (download links at the bottom of the article).

    toDoo in Action. Once you have installed toDoo, everything is rather straightforward for getting started. The first time that you start toDoo there will be a temporary "fill-in" for the "Subject & Details Areas". Simply highlight the temporary text and add your information. Notice that, if desired, you can easily set a custom date and time for your tasks right below the "Details Area". Note: toDoo does not minimize to the "System Tray".

    Once you have everything set, all that you need to do is click on "add task". Here was our first new task being viewed in the "toDoo Description Tab". Time to add a second task… here you can see the drop-down calendar. You can scroll through and select a different month very easily… just click on the desired day and it will be automatically set. Adding our second task… If you need to edit any of the details for a particular task, you can do so in the "Edit toDoo Tab". This nice little app is convenient and easy to use.

    Conclusion. toDoo is a simple, straightforward app that lets you keep track of your tasks list and relevant details without an online account (especially helpful if you are without a wireless connection at a given moment). If you are looking for more of a list approach that runs on your desktop, then check out our article on Doomi.

    Links: Download toDoo at Softpedia | Download toDoo at Adobe Marketplace | Download Adobe AIR


  • SQL SERVER – Auditing and Profiling Database Made Easy with SQL Audit and Comply

    - by Pinal Dave
    Do you like auditing your database, or can you think of about a million other things you'd rather do? Unfortunately, auditing is incredibly important. As with tax audits, it is important to audit databases to ensure they are following all the rules, but audits are also important for troubleshooting and security.

    There are several ways to audit SQL Server. There is manual auditing, which is going through your database "by hand"; this obviously takes a long time and is quite inefficient. SQL Server also provides programs to help you audit your systems. Different administrators will have different opinions about best practices and which tools to use, and each tool will be better suited to certain systems and certain users.

    Today, though, I would like to talk about ApexSQL Audit. It is an auditing tool that acts like "track changes" in a word-processing document. It will log what has changed in the database, who made the changes, and what effects these changes have had (i.e. what objects were affected down the line). All this information is logged and can be easily viewed or printed for easy access.

    One of the best features of Apex is that it is so customizable (and easy to use!). First, start Apex. Then connect to the database you would like to monitor. Once you select your database, you can select which table you want to audit. You can customize right down to the field you'd like to audit, and then select which types of actions you'd like tracked – insert, delete, or update. Repeat these steps for every database you want monitored.

    To create the logs, choose "Create triggers" in the menu. The script written here is what logs each insert, delete, and update operation. Press F5 to execute. All this tracking information will be stored in the AUDIT_LOG_DATA and AUDIT_LOG_TRANSACTIONS tables. View these tables using ApexSQL Audit reports.

    These transaction logs can be extremely detailed – especially on very busy servers, where every move is traced. Reading them can be overwhelming, to say the least. Apex has tried to make things easier for the average DBA, though. You can read these tracking logs in Apex, and it will display data and objects that affect your server – even things that were happening on your server before you installed Apex!

    To read these logs, open Apex and connect to the database you want to audit. Go to the Transaction Logs tab, and add the logs you want to read. To narrow down the results, you can use the Filter tab to choose time, operation type, name, users, and more. Click Open, and you can see the results in a grid (as shown below). You can export these results to CSV, HTML, XML or SQL files and save them on the hard disk.

    One of the advantages is that since there are no triggers here, there are no other processes that will affect SQL Server performance. Using this method is also how to view history from your database that occurred before Apex was installed. This type of tracking does require storage space for the data sources, as the database must be fully running, and the transaction logs must exist (things not stored in the transaction logs will not be recoverable).

    Apex can also replace SQL Server Profiler and SQL Server Traces – which are much more complex and error-prone – with its ApexSQL Comply. It can do fault-tolerant auditing, centralized reporting, and "who saw what" information in an easy-to-use interface.

    The tracking settings can be altered by the user, or the default options will provide solutions to the most common auditing problems. To get started: open ApexSQL Comply, and select Database Filter Settings to choose which database you'd like to audit. You can select which tracking you'd like in Operation Types – DML, DDL, queries executed, execute statements, and more. Then click Start Auditing.

    After this, every action will be stored in the central repository database (ApexSQLCrd). You can view the audit and create a report (or view the standard default report) using a wizard. You can see how easy it is to use ApexSQL Comply: you can easily set audits, including the type and time, and create customized reports. Remote users can easily access the reports through the user interface (available online as well), and security concerns are all taken care of by the program.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
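    As a rough illustration of what trigger-based logging of this kind does under the hood (the table layout here is a simplified sketch, not ApexSQL's actual generated schema):

        -- log every modification to dbo.Task into an audit table
        CREATE TRIGGER trg_Audit_Task ON dbo.Task
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            INSERT INTO dbo.AUDIT_LOG_TRANSACTIONS (TableName, UserName, AuditDate)
            VALUES ('Task', SUSER_SNAME(), GETDATE());
        END;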

