Search Results

Search found 31269 results on 1251 pages for 'process management'.


  • Best way: restructure an existing Team Foundation Server (TFS) solution

    - by dhh
    In my department we are developing several smaller AddOns for some unified communication server. For versioning and distributed development we use a Team Foundation Server 2012. But: there is only one large TFS solution for all of our applications and libraries: Main Solution Applications App 1 App 2 App 3 Externals Libraries Lib 1 Lib 2 Tools The "Application" path contains all main applications. They do not depend on each other, but they depend on the Libraries and Externals projects. The "Externals" path contains some external DLLs referenced in our Applications and Libraries. The Libraries path contains commonly used libs (UI templates, Helper classes, etc.). They do not depend on each other and they are referenced in the Libraries and the Tools projects. The Tools path contains some helper programs like setup helpers, update web services, etc. Now, there are some major reasons why I'd like to change this structure: We can't use server builds. It's uncomfortable to manage TFS scrum management with sprints, impediments, etc. with a solution structure like that. Every developer always has access to all projects in the solution. A complete build lasts too long if one accidentally hits [F6] in Visual Studio... What would you change in this solution? How would you break those projects into smaller solutions, and how should those solutions be structured? My first approach would be to create one TFS project for each Application, Library and Tool. But how can I ensure that e.g. App 2 always contains the newest version of Lib 1? Do I have to monitor changes on Lib 1 and update App 2 manually as soon as the Lib changes? Or can I somehow force Visual Studio to always use the newest version of an external project?

    Read the article

  • What is the breakdown of jobs in game development?

    - by Destry Ullrich
    There's a project I'm trying to start for Indie Game Development; specifically, it's going to be a social networking website that lets developers meet through (It's a secret). One of the key components is showing what skills members have. Question: I need to know what MAJOR game development roles are not represented in the following list, keeping in mind that many specialist roles are being condensed into more broad, generalist roles: Art Animator (Characters, creatures, props, etc.) Concept Artist (2D scenes, environments, props, silhouettes, etc.) Technical Artist (UI artists, typefaces, graphic designers, etc.) 3D Artist (Modeling, rigging, texture, lighting, etc.) Audio Composer (Scores, music, etc.) Sound Engineer (SFX, mood setting, audio implementation, etc.) Voice (Dialog, acting, etc.) Design Creative Director (Initial direction, team management, communications, etc.) Gameplay Designer (Systems, mechanics, control mapping, etc.) World Designer (Level design, aesthetics, game progression, events, etc.) Writer (Story, mythos, dialog, flavor text, etc.) Programming Engine Programming (Engine creation, scripting, physics, etc.) Graphics Engineer (Sprites, lighting, GUI, etc.) Network Engineer (LAN, multiplayer, server support, etc.) Technical Director (I don't know what a technical director would even do.) Post Script: I have an art background, so i'm not familiar with what the others behind game creation actually do. What's missing from this list, and if you feel some things should be changed around how so?

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management tool. I thought using SVN for development at the same time was OK since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person, who develops high level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to run-able SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and back-up/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the stackoverflow userbase to see if his approach has merit. My question: Should SVN be purely used for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.

    Read the article

  • What to do as a new team lead on a project with maintainability problems?

    - by Mr_E
    I have just been put in charge of a code project with maintainability problems. What things can I do to get the project on a stable footing? I find myself in a place where we are working with a very large multi-tiered .NET system that is missing a lot of the important things such as unit tests, IOC, MEF, too many static classes, pure datasets etc. I'm only 24 but I've been here for almost three years (this app has been in development for 5) and mostly due to time constraints we've been just adding in more crap to fit the other crap. After doing a number of projects in my free time I have begun to understand just how important all those concepts are. Also due to employee shifting I find myself to now be the team lead on this project and I really want to come up with some smart ways to improve this app. Ways where the value can be explained to management. I have ideas of what I would like to do but they all seem so overwhelming without much upfront gain. Any stories of how people have or would have dealt with this would be a very interesting read. Thanks.
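    One low-risk way to start, and one whose value is fairly easy to explain to management, is to carve testable seams into the existing code instead of attempting a rewrite. Below is a minimal sketch (every class name in it is hypothetical, not taken from the project described above) of wrapping a static dependency behind an interface so that new code can be unit tested and an IoC container can be adopted incrementally:

    // Hedged sketch: all names are made up. Wrapping an existing static helper
    // behind an interface creates a seam, so new code can be unit tested and
    // legacy callers can keep working unchanged.
    public static class LegacyTaxTable               // stand-in for an existing static class
    {
        public static decimal Lookup(string region)
        {
            return region == "EU" ? 0.20m : 0.05m;
        }
    }

    public interface ITaxTable
    {
        decimal RateFor(string region);
    }

    public class StaticTaxTableAdapter : ITaxTable
    {
        // Delegates to the legacy static call so existing behaviour is preserved.
        public decimal RateFor(string region)
        {
            return LegacyTaxTable.Lookup(region);
        }
    }

    public class OrderCalculator
    {
        private readonly ITaxTable _taxes;

        // The dependency is injected, so a unit test can pass in a fake ITaxTable.
        public OrderCalculator(ITaxTable taxes)
        {
            _taxes = taxes;
        }

        public decimal Total(decimal net, string region)
        {
            return net * (1 + _taxes.RateFor(region));
        }
    }

    Each seam of this size fits inside normal feature work, which makes the upfront cost much easier to defend than a big-bang refactoring.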

    Read the article

  • How to deal with bad developers who hold back the project

    - by ILovePaperTowels
    We're at the end of a project, but we continue to run into issues because of a single piece of the project. This piece is handled by a specific developer. Finally, we grabbed latest and started reviewing it. It's just horrendous code! Trying to step through it was difficult and it's a relatively simple workflow. The point of this question is, how to deal with this situation. The developer has a hard time accepting criticism (constructive or otherwise) and feels he is more knowledgeable than others on the team who are, well, highly decorated, experienced and accomplished developers. It's difficult to even get into a topic about development because it turns into "I know what I'm talking about and you're just wrong!" type of conversation. A request has already been put in to replace this developer but it is a hard sell since devs are in short supply where we are and this is a corporation with a LOT of political bs. Management has been notified a few times but nothing is happening.

    Read the article

  • SANS Mobility Policy Survey Webcast follow up

    - by Darin Pendergraft
    Hello Everyone!  If you missed the SANS mobility survey webcast on October 23 - here is a link to the replay and to the slides: [Warning -  you have to register to see the replay and to get the slides] https://www.sans.org/webcasts/byod-security-lists-policies-mobility-policy-management-survey-95429 The webcast had a lot of great information about how organizations are setting up and managing their mobile access policies.  Here are a couple of key takeaways: 1.  Who is most concerned about mobile access policy? Security Analysts >> CISOs >> CIOs - the focus is coming from the risk and security office - so what does that mean for the IT teams? 2. How important is mobile policy? 77% said "Critical" or "Extremely Important" - so this means mobile access policies will get a lot of attention.  3. When asked about the state of their mobile policies: Over 35% said they didn't have a mobile access policy and another 35% said they simply ask their employees to sign a usage agreement.  So basically ~70% of the respondents were not actively managing or monitoring mobile access. Be sure to watch the webcast replay for all of the details. Box, Oracle and RSA were all co-sponsors of the survey and webcast and all were invited to give a brief presentation at the end.

    Read the article

  • Creating a MiniDump of a running process

    - by Lodle
    Im trying to make a tool for my end users that can create a MiniDump of my application if it hangs (i.e. external to the app). Im using the same code as the internal MiniDumper but with the handle and processid of the app but i keep getting error code 0xD0000024 when calling MiniDumpWriteDump. Any ideas? void produceDump( const char* exe ) { DWORD processId = 0; HANDLE process = findProcess(exe, processId); if (!process || processId == 0) { printf("Unable to find exe %s to produce dump.\n", exe); return; } LONG retval = EXCEPTION_CONTINUE_SEARCH; HWND hParent = NULL; // find a better value for your app // firstly see if dbghelp.dll is around and has the function we need // look next to the EXE first, as the one in System32 might be old // (e.g. Windows 2000) HMODULE hDll = NULL; char szDbgHelpPath[_MAX_PATH]; if (GetModuleFileName( NULL, szDbgHelpPath, _MAX_PATH )) { char *pSlash = _tcsrchr( szDbgHelpPath, '\\' ); if (pSlash) { _tcscpy( pSlash+1, "DBGHELP.DLL" ); hDll = ::LoadLibrary( szDbgHelpPath ); } } if (hDll==NULL) { // load any version we can hDll = ::LoadLibrary( "DBGHELP.DLL" ); } LPCTSTR szResult = NULL; int err = 0; if (hDll) { MINIDUMPWRITEDUMP pDump = (MINIDUMPWRITEDUMP)::GetProcAddress( hDll, "MiniDumpWriteDump" ); if (pDump) { char szDumpPath[_MAX_PATH]; char szScratch [_MAX_PATH]; time_t rawtime; struct tm * timeinfo; time ( &rawtime ); timeinfo = localtime ( &rawtime ); char comAppPath[MAX_PATH]; SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA , NULL, SHGFP_TYPE_CURRENT, comAppPath ); //COMMONAPP_PATH _snprintf(szDumpPath, _MAX_PATH, "%s\\DN", comAppPath); CreateDirectory(szDumpPath, NULL); _snprintf(szDumpPath, _MAX_PATH, "%s\\DN\\D", comAppPath); CreateDirectory(szDumpPath, NULL); _snprintf(szDumpPath, _MAX_PATH, "%s\\DN\\D\\dumps", comAppPath); CreateDirectory(szDumpPath, NULL); char fileName[_MAX_PATH]; _snprintf(fileName, _MAX_PATH, "%s_Dump_%04d%02d%02d_%02d%02d%02d.dmp", exe, timeinfo->tm_year+1900, timeinfo->tm_mon, timeinfo->tm_mday, timeinfo->tm_hour, timeinfo->tm_min, timeinfo->tm_sec ); _snprintf(szDumpPath, _MAX_PATH, "%s\\DN\\D\\dumps\\%s", comAppPath, fileName); // create the file HANDLE hFile = ::CreateFile( szDumpPath, GENERIC_WRITE, FILE_SHARE_WRITE, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL ); if (hFile!=INVALID_HANDLE_VALUE) { MINIDUMP_CALLBACK_INFORMATION mci; mci.CallbackRoutine = (MINIDUMP_CALLBACK_ROUTINE)MyMiniDumpCallback; mci.CallbackParam = 0; MINIDUMP_TYPE mdt = (MINIDUMP_TYPE)(MiniDumpWithPrivateReadWriteMemory | MiniDumpWithDataSegs | MiniDumpWithHandleData | //MiniDumpWithFullMemoryInfo | //MiniDumpWithThreadInfo | MiniDumpWithProcessThreadData | MiniDumpWithUnloadedModules ); // write the dump BOOL bOK = pDump( process, processId, hFile, mdt, NULL, NULL, &mci ); DWORD lastErr = GetLastError(); if (bOK) { printf("Crash dump saved to: %s\n", szDumpPath); return; } else { _snprintf( szScratch, _MAX_PATH, "Failed to save dump file to '%s' (error %u)", szDumpPath, lastErr); szResult = szScratch; err = ERR_CANTSAVEFILE; } ::CloseHandle(hFile); } else { _snprintf( szScratch, _MAX_PATH, "Failed to create dump file '%s' (error %u)", szDumpPath, GetLastError()); szResult = szScratch; err = ERR_CANTMAKEFILE; } } else { szResult = "DBGHELP.DLL too old"; err = ERR_DBGHELP_TOOLD; } } else { szResult = "DBGHELP.DLL not found"; err = ERR_DBGHELP_NOTFOUND; } printf("Could not produce a crash dump of %s.\n\n[error: %u %s].\n", exe, err, szResult); return; } this code works 100% when its internal to the process (i.e. with SetUnhandledExceptionFilter)

    Read the article

  • Scala isn't allowing me to execute a batch file whose path contains spaces. Same Java code does. What am I doing wrong?

    - by Geo
    Here's the code I have: var commandsBuffer = List[String]() commandsBuffer ::= "cmd.exe" commandsBuffer ::= "/c" commandsBuffer ::= '"'+vcVarsAll.getAbsolutePath+'"' commandsBuffer ::= "&&" otherCommands.foreach(c => commandsBuffer ::= c) val asArray = commandsBuffer.reverse.toArray val processOutput = processutils.Proc.executeCommand(asArray,true) return processOutput otherCommands is an Array[String], containing the following elements: vcbuild /rebuild path to a .sln file vcVarsAll contains the path to Visual Studio's vcvarsall.bat. It's path is C:\tools\microsoft visual studio 2005\vc\vcvarsall.bat. The error I receive is: 'c:\Tools\Microsoft' is not recognized as an internal or external command, operable program or batch file.. The processutils.Proc.executeCommand has the following implementation: def executeCommand(params:Array[String],display:Boolean):(String,String) = { val process = java.lang.Runtime.getRuntime.exec(params) val outStream = process.getInputStream val errStream = process.getErrorStream ... } The same code, executed from Java/Groovy works. What am I doing wrong?

    Read the article

  • .NET StandardInput Sending Modifiers

    - by Paul Oakham
    We have some legacy software which depends on sending keystrokes to a DOS window and then scraping the screen. I am trying to re-create the software by redirecting the input and output streams of the process directly to my application. This part I have managed fine using: _Process = new Process(); { _Process.StartInfo.FileName = APPLICATION; _Process.StartInfo.RedirectStandardOutput = true; _Process.StartInfo.RedirectStandardInput = true; _Process.StartInfo.RedirectStandardError = true; _Process.StartInfo.UseShellExecute = false; _Process.StartInfo.CreateNoWindow = true; _Process.OutputDataReceived += new DataReceivedEventHandler(_Process_OutputDataReceived); _Process.ErrorDataReceived += new DataReceivedEventHandler(_Process_ErrorDataReceived); } My problem is I need to send some command modifiers such as Ctrl, ALT and Space as well as F1-12 to this process but am unsure how. I can send basic text and I receive responses fine. I just need to emulate these modifiers.
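    For what it is worth, a redirected StandardInput stream carries plain characters rather than keyboard events, so modifier keys such as Ctrl or Alt and the function keys have no direct representation on it; the closest equivalents are whatever control characters or escape sequences the console program happens to accept. A minimal sketch, assuming a hypothetical legacy.exe that reads such characters from its standard input:

    using System;
    using System.Diagnostics;

    class LegacyDriver
    {
        static void Main()
        {
            var proc = new Process();
            proc.StartInfo.FileName = "legacy.exe";      // placeholder for APPLICATION
            proc.StartInfo.UseShellExecute = false;
            proc.StartInfo.RedirectStandardInput = true;
            proc.StartInfo.RedirectStandardOutput = true;
            proc.StartInfo.CreateNoWindow = true;
            proc.OutputDataReceived += (s, e) => { if (e.Data != null) Console.WriteLine(e.Data); };

            proc.Start();
            proc.BeginOutputReadLine();

            // Plain text arrives exactly as if it had been typed...
            proc.StandardInput.WriteLine("DIR");         // placeholder command

            // ...but a pipe has no notion of key state, so the closest thing to a
            // modifier is a control character, if the application reads one at all.
            proc.StandardInput.Write('\x1b');            // ESC
            proc.StandardInput.Write('\x03');            // Ctrl+C as a character
            proc.StandardInput.Flush();

            proc.WaitForExit();
        }
    }

    If the program only reacts to real key events, something along the lines of the console input buffer (WriteConsoleInput) or window messages would be needed instead of stream redirection.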

    Read the article

  • Problems with CloseMainWindow() to close a Windows Explorer window

    - by MorgoZ
    Hello! I'm facing a problem when trying to close a Windows Explorer (not Internet Explorer) window through another application, using the "Process.CloseMainWindow()" method; instead of closing the Explorer window, it tries to close Windows itself (the operating system); this is on Windows XP, by the way. The code is as follows: [DllImport("user32.dll")] static extern int GetForegroundWindow(); [DllImport("user32.dll")] private static extern UInt32 GetWindowThreadProcessId(Int32 hWnd, out Int32 lpdwProcessId); public String[] exeCommand() { try { //Get App Int32 hwnd = 0; hwnd = GetForegroundWindow(); Process actualProcess = Process.GetProcessById(GetWindowProcessID(hwnd)); //Close App if (!actualProcess.CloseMainWindow()) actualProcess.Kill(); } catch { throw; } return null; } Suppose that the "actualProcess" is "explorer.exe" Any help will be appreciated!! Salutes!
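    As the question observes, Process.CloseMainWindow targets whatever Windows reports as explorer.exe's main window, which is the shell itself rather than the folder window in the foreground, so the request looks like an attempt to close Windows. A minimal sketch of one alternative: post WM_CLOSE straight to the foreground window handle instead of going through the owning process (the P/Invoke declarations are the standard user32 ones; error handling is omitted):

    using System;
    using System.Runtime.InteropServices;

    static class ForegroundWindowCloser
    {
        private const uint WM_CLOSE = 0x0010;

        [DllImport("user32.dll")]
        private static extern IntPtr GetForegroundWindow();

        [DllImport("user32.dll")]
        private static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        public static void CloseForegroundWindow()
        {
            // Close only the window that currently has focus (e.g. a single
            // Explorer folder window) rather than asking explorer.exe to close
            // its main window.
            IntPtr hwnd = GetForegroundWindow();
            if (hwnd != IntPtr.Zero)
                PostMessage(hwnd, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);
        }
    }

    Note that declaring GetForegroundWindow as returning int, as in the snippet above, also loses information on 64-bit Windows; IntPtr is the safer handle type either way.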

    Read the article

  • Recommended way to manage persistent PHP script processes?

    - by BigglesZX
    First off - hello, this is my first Stack Overflow question so I'll try my best to communicate properly. The title of my question may be a bit ambiguous so let me expand upon it immediately: I'm planning a project which involves taking data inputs from several "streaming" APIs, Twitter being one example. I've got a basic script coded up in PHP which runs indefinitely from the command line, taking input from the Twitter streaming API and doing very basic things with it. My eventual objective is to have several such processes running (perhaps daemonized using the System Daemon PEAR class), and I would like to be able to manage them from some governing process (also a PHP script). By manage I mean basic operations such as stop/start and (most crucially) automatically restarting a process that crashes. I would appreciate any pointers on how best to approach this process management angle. Again, apologies if this question is too expansive - tips on more tightly focused lines of enquiry would be appreciated if necessary. Thanks for reading and I look forward to your answers.

    Read the article

  • How to auto-restart a python script on fail?

    - by norm
    This post describes how to keep a child process alive in a BASH script: http://stackoverflow.com/questions/696839/how-do-i-write-a-bash-script-to-restart-a-process-if-it-dies This worked great for calling another BASH script. However, I tried executing something similar where the child process is a Python script: #!/bin/bash PYTHON=/usr/bin/python2.6 function myprocess { $PYTHON daemon.py start } NOW=$(date +"%b-%d-%y") until myprocess; do echo "$NOW Prog crashed. Restarting..." >> error.txt sleep 1 done Now the behaviour is completely different. It seems the python script is no longer a child of the bash script but seems to have 'taken over' the BASH script's PID - so there is no longer a BASH wrapper around the called script...why?

    Read the article

  • C# Monitor Programs Periodically Based on PID

    - by ThaKidd
    Thank you in advance for your ideas and input. I would like to periodically check to see if a third party program is currently running on a user's system from my program. I am currently launching the program as follows in C#: String plinkConString = ""; // my connection string Process plink = Process.Start(utilityPath + @"\putty.exe", plinkConString); int plinkProcessId = plink.Id; I launch the program and grab its PID in a Windows environment. As Putty/PLink may disconnect from its SSH server at some point and close, what is the best way to monitor how this process is doing in code? Is there a better way to launch this program to monitor its success or failure?
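    A minimal sketch of two common options, keeping the putty.exe launch from the question as a placeholder: let the runtime raise Exited when the process ends, or keep only the PID and poll Process.GetProcessById, which throws once that PID is gone:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PlinkMonitor
    {
        static void Main()
        {
            // Path and arguments are placeholders taken from the question.
            Process plink = Process.Start(@"C:\utils\putty.exe", "-ssh user@host");

            // Option 1: ask the runtime to raise Exited when the process ends.
            plink.EnableRaisingEvents = true;
            plink.Exited += (s, e) =>
                Console.WriteLine("plink exited with code {0}", plink.ExitCode);

            // Option 2: keep only the PID and poll it periodically.
            int pid = plink.Id;
            while (true)
            {
                Thread.Sleep(5000);
                try
                {
                    Process p = Process.GetProcessById(pid);   // throws if the PID is gone
                    Console.WriteLine("{0} is still running (PID {1})", p.ProcessName, pid);
                }
                catch (ArgumentException)
                {
                    Console.WriteLine("Process {0} is no longer running.", pid);
                    break;
                }
            }
        }
    }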

    Read the article

  • c#: RedirectStandardOutput prevents exit exception?

    - by user1743977
    All I want is to redirect standard output of a process to a file. Sounds easy, but everything I tried doesn't work: 1) putting a DOS-style redirection in the list of arguments (e.g. "param1 param2 > output.txt") doesn't work, 2) using RedirectStandardOutput = true works, BUT apparently the process does not throw an exception when it exits. So the handler defined via process.Exited += ... doesn't catch anything. To be clear, once I remove the 'RedirectStandardOutput = true' statement, it DOES catch an exception. What am I doing wrong? Thanks, Ofer
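    For context, two things commonly bite here: Exited is only raised when EnableRaisingEvents is true, and a child whose redirected stdout is never read can block on a full pipe and therefore never exit. A minimal sketch, with child.exe and output.txt as placeholder names, that writes the redirected output to a file and still receives the Exited notification:

    using System;
    using System.Diagnostics;
    using System.IO;

    class RedirectToFile
    {
        static void Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = "child.exe",            // placeholder
                Arguments = "param1 param2",
                UseShellExecute = false,           // required for redirection
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };

            using (var writer = new StreamWriter("output.txt"))   // placeholder file name
            using (var process = new Process { StartInfo = psi })
            {
                process.EnableRaisingEvents = true;                // needed for Exited to fire
                process.Exited += (s, e) => Console.WriteLine("child exited");

                // Drain stdout as it arrives; an undrained pipe can block the child.
                process.OutputDataReceived += (s, e) =>
                {
                    if (e.Data != null) writer.WriteLine(e.Data);
                };

                process.Start();
                process.BeginOutputReadLine();
                process.WaitForExit();             // also waits for the output to be flushed
            }
        }
    }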

    Read the article

  • Why does setting a form's enabled property crash the application?

    - by Ruirize
    private void launchbutton_Click(object sender, EventArgs e) { launchbutton.Enabled = false; Process proc = new Process(); proc.EnableRaisingEvents = true; proc.StartInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden; //The arguments/filename is set here, just removed for privacy. proc.Exited += new EventHandler(procExit); proc.Start(); } private void procExit(object sender, EventArgs e) { MessageBox.Show("YAY","WOOT"); Thread.Sleep(2000); launchbutton.Enabled = true; } 2 Seconds after I quit the created process, my program crashes. Why?
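    One likely explanation: with EnableRaisingEvents set, Exited is raised on a thread-pool thread, so procExit touches launchbutton (and shows a message box, and sleeps) off the UI thread. A minimal sketch of the same handlers with the event marshalled back to the form through SynchronizingObject (the file name is a placeholder; calling this.BeginInvoke inside the handler would work just as well):

    using System;
    using System.Diagnostics;
    using System.Windows.Forms;

    public partial class LauncherForm : Form
    {
        private Button launchbutton = new Button();      // stands in for the designer-created button

        private void launchbutton_Click(object sender, EventArgs e)
        {
            launchbutton.Enabled = false;

            var proc = new Process();
            proc.StartInfo.FileName = "someapp.exe";     // placeholder
            proc.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            proc.EnableRaisingEvents = true;
            proc.SynchronizingObject = this;             // raise Exited on the form's UI thread
            proc.Exited += procExit;
            proc.Start();
        }

        private void procExit(object sender, EventArgs e)
        {
            // Runs on the UI thread because of SynchronizingObject, so touching
            // the button here is safe; a two-second Sleep would still freeze the
            // UI, so it is left out.
            MessageBox.Show("Process finished", "Done");
            launchbutton.Enabled = true;
        }
    }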

    Read the article

  • How do I make a background thread in Java that allows the main application to exit completely? This

    - by Bob
    I have a Java application that creates a new thread to do some work. I can launch the new thread with no problems. When the "main" program terminates, I want the thread I created to keep running - which it does... But the problem is, when I run the main application from Eclipse or from Ant under Windows, control doesn't return unless the background process is killed. If I fork the main java process in ant, I want control to return to ant once the main thread is done with its work... But as it is, ant continues to wait until both the main process and the created thread are both terminated. How do I launch the thread in the background such that control will return to ant when the "main" application is finished? (By the way, when I run the same application under Linux, I am able to do this with no problems).

    Read the article

  • ORACLE: PMON Release Lock

    - by Liu Maclean
    ?????Oracle????????????PMON???????,??????ORACLE PROCESS,??cleanup dead process????release enqueue lock ,???cleanup latch? ????????????????, ????????????Pmon cleanup dead process?release lock??????????? ??Oracle=> MicroOracle, Maclean???????????Oracle behavior: SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE    11.2.0.3.0      Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> select pid,program  from v$process;        PID PROGRAM ---------- ------------------------------------------------          1 PSEUDO          2 [email protected] (PMON)          3 [email protected] (PSP0)          4 [email protected] (VKTM)          5 [email protected] (GEN0)          6 [email protected] (DIAG)          7 [email protected] (DBRM)          8 [email protected] (PING)          9 [email protected] (ACMS)         10 [email protected] (DIA0)         11 [email protected] (LMON)         12 [email protected] (LMD0)         13 [email protected] (LMS0)         14 [email protected] (RMS0)         15 [email protected] (LMHB)         16 [email protected] (MMAN)         17 [email protected] (DBW0)         18 [email protected] (LGWR)         19 [email protected] (CKPT)         20 [email protected] (SMON)         21 [email protected] (RECO)         22 [email protected] (RBAL)         23 [email protected] (ASMB)         24 [email protected] (MMON)         25 [email protected] (MMNL)         26 [email protected] (MARK)         27 [email protected] (D000)         28 [email protected] (SMCO)         29 [email protected] (S000)         30 [email protected] (LCK0)         31 [email protected] (RSMN)         32 [email protected] (TNS V1-V3)         33 [email protected] (W000)         34 [email protected] (TNS V1-V3)         35 [email protected] (TNS V1-V3)         37 [email protected] (ARC0)         38 [email protected] (ARC1)         40 [email protected] (ARC2)         41 [email protected] (ARC3)         43 [email protected] (GTX0)         44 [email protected] (RCBG)         46 [email protected] (QMNC)         47 [email protected] (TNS V1-V3)         48 [email protected] (TNS V1-V3)         49 [email protected] (Q000)         50 [email protected] (Q001)         51 [email protected] (GCR0) SQL> drop table maclean; Table dropped. SQL> create table maclean(t1 int); Table created. SQL> insert into maclean values(1); 1 row created. SQL> commit; Commit complete. ?????????, ?????????:PID=2  PMONPID=11 LMONPID=18 LGWRPID=20 SMONPID=12 LMD ??????2???”enq: TX – row lock contention”?????,???KILL??????,??????PMON?recover dead process?release TX lock: PROCESS A: QL> select addr,spid,pid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat)); ADDR             SPID                            PID ---------------- ------------------------ ---------- 00000000BD516B80 17880                            46 SQL> select distinct sid from v$mystat;        SID ----------         22 SQL> update maclean set t1=t1+1; 1 row updated. 
PROCESS B SQL> select addr,spid,pid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat)); ADDR             SPID                            PID ---------------- ------------------------ ---------- 00000000BD515AD0 17908                            45 SQL> update maclean set t1=t1+1; HANG.............. PROCESS B ??"enq: TX – row lock contention"?HANG? ????PROCESS C?? ?SMON?10500 event trace ??PMON?KST TRACE: SQL> set linesize 200 pagesize 1400 SQL> select * from v$lock where sid=22; ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- 00000000BDCD7618 00000000BDCD7670         22 AE        100          0          4          0         48          2 00007F63268A9E28 00007F63268A9E88         22 TM      77902          0          3          0         32          2 00000000B9BB4950 00000000B9BB49C8         22 TX     458765        892          6          0         32          1 PROCESS A holde?ENQUEUE LOCK??? AE?TM?TX SQL> alter system switch logfile; System altered. SQL> alter system checkpoint; System altered. SQL> alter system flush buffer_cache; System altered. SQL> alter system set "_trace_events"='10000-10999:255:2,20,33'; System altered. SQL> ! kill -9 17880 KILL PROCESS A ???PROCESS B??update ?PMON ? PROCESS B ?errorstack ?KST TRACE????? SQL> oradebug setorapid 2; Oracle pid: 2, Unix process pid: 17533, image: [email protected] (PMON) SQL> oradebug dump errorstack 4; Statement processed. SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_pmon_17533.trc SQL> oradebug setorapid 45; Oracle pid: 45, Unix process pid: 17908, image: [email protected] (TNS V1-V3) SQL> oradebug dump errorstack 4; Statement processed. SQL>oradebug tracefile_name /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_ora_17908.trc ??PMON? 
KST TRACE: 2012-05-18 10:37:34.557225 :8001ECE8:db_trace:ktur.c@5692:ktugru(): [10444:2:1] next rollback uba: 0x00000000.0000.00 2012-05-18 10:37:34.557382 :8001ECE9:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=18 num=4 loc='ksa2.h LINE:285 ID:ksasnd' id1=0 id2=0 name=   type=0 2012-05-18 10:37:34.557514 :8001ECEA:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release TX-0007000d-0000037c mode=X 2012-05-18 10:37:34.558819 :8001ECF0:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=45 num=5 loc='kji.h LINE:3418 ID:kjata: wake up enqueue owner' id1=0 id2=0 name=   type=0 2012-05-18 10:37:34.559047 :8001ECF8:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=12 num=6 loc='kjm.h LINE:1224 ID:kjmpost: post lmd' id1=0 id2=0 name=   type=0 2012-05-18 10:37:34.559271 :8001ECFC:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS 2012-05-18 10:37:34.559291 :8001ECFD:db_trace:ktu.c@8652:ktudnx(): [10813:2:1] ktudnx: dec cnt xid:7.13.892 nax:0 nbx:0 2012-05-18 10:37:34.559301 :8001ECFE:db_trace:ktur.c@3198:ktuabt(): [10444:2:1] ABORT TRANSACTION - xid: 0x0007.00d.0000037c 2012-05-18 10:37:34.559327 :8001ECFF:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release TM-0001304e-00000000 mode=SX 2012-05-18 10:37:34.559365 :8001ED00:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS 2012-05-18 10:37:34.559908 :8001ED01:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release AE-00000064-00000000 mode=S 2012-05-18 10:37:34.559982 :8001ED02:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS 2012-05-18 10:37:34.560217 :8001ED03:db_trace:ksfd.c@15379:ksfdfods(): [10298:2:1] ksfdfods:fob=0xbab87b48 aiopend=0 2012-05-18 10:37:34.560336 :GSIPC:kjcs.c@4876:kjcsombdi(): GSIPC:SOD: 0xbc79e0c8 action 3 state 0 chunk (nil) regq 0xbc79e108 batq 0xbc79e118 2012-05-18 10:37:34.560357 :GSIPC:kjcs.c@5293:kjcsombdi(): GSIPC:SOD: exit cleanup for 0xbc79e0c8 rc: 1, loc: 0x303 2012-05-18 10:37:34.560375 :8001ED04:db_trace:kss.c@1414:kssdch(): [10809:2:1] kssdch(0xbd516b80 = process, 3) 1 0 exit 2012-05-18 10:37:34.560939 :8001ED06:db_trace:kmm.c@10578:kmmlrl(): [10257:2:1] KMMLRL: Entering: flg(0x0) rflg(0x4) 2012-05-18 10:37:34.561091 :8001ED07:db_trace:kmm.c@10472:kmmlrl_process_events(): [10257:2:1] KMMLRL: Events: succ(3) wait(0) fail(0) 2012-05-18 10:37:34.561100 :8001ED08:db_trace:kmm.c@11279:kmmlrl(): [10257:2:1] KMMLRL: Reg/update: flg(0x0) rflg(0x4) 2012-05-18 10:37:34.563325 :8001ED0B:db_trace:kmm.c@12511:kmmlrl(): [10257:2:1] KMMLRL: Update: ret(0) 2012-05-18 10:37:34.563335 :8001ED0C:db_trace:kmm.c@12768:kmmlrl(): [10257:2:1] KMMLRL: Exiting: flg(0x0) rflg(0x4) 2012-05-18 10:37:34.563354 :8001ED0D:db_trace:ksl2.c@2598:kslwtbctx(): [10005:2:1] KSL WAIT BEG [pmon timer] 300/0x12c 0/0x0 0/0x0 wait_id=78 seq_num=79 snap_id=1 PMON??dead process A??????????TX Lock:ksqrcl: release TX-0007000d-0000037c mode=X ?????Post Process B,??Process B ?acquire?TX lock???????:KSL POST SENT postee=45 num=5 loc=’kji.h LINE:3418 ID:kjata: wake up enqueue owner’ id1=0 id2=0 name=   type=0 Process B???PMON??????????ksl2.c@14563:ksliwat(): [10005:45:151] KSL POST RCVD poster=2 num=5 loc=’kji.h LINE:3418 ID:kjata: wake up enqueue owner’ id1=0 id2=0 name=   type=0 fac#=3 posted=0×3 may_be_posted=1kslwtbctx(): [10005:45:151] KSL WAIT BEG [latch: ges resource hash list] 3162668560/0xbc827e10 91/0x5b 0/0×0 wait_id=14 seq_num=15 snap_id=1kslwtectx(): [10005:45:151] KSL WAIT END [latch: ges resource hash list] 
3162668560/0xbc827e10 91/0x5b 0/0×0 wait_id=14 seq_num=15 snap_id=1 ?RAC????POST LMD(lock Manager)??,????????GES??:2012-05-18 10:37:34.559047 :8001ECF8:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=12 num=6 loc=’kjm.h LINE:1224 ID:kjmpost: post lmd’ id1=0 id2=0 name=   type=0 ??ksqrcl: release TX????????:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS ??PMON abort Process A???Transaction2012-05-18 10:37:34.559291 :8001ECFD:db_trace:ktu.c@8652:ktudnx(): [10813:2:1] ktudnx: dec cnt xid:7.13.892 nax:0 nbx:02012-05-18 10:37:34.559301 :8001ECFE:db_trace:ktur.c@3198:ktuabt(): [10444:2:1] ABORT TRANSACTION – xid: 0×0007.00d.0000037c ??Process A?????maclean??TM lock:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release TM-0001304e-00000000 mode=SXksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS ??Process A?????AE ( Prevent Dropping an edition in use) lock:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release AE-00000064-00000000 mode=Sksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS ??cleanup process Akjcs.c@4876:kjcsombdi(): GSIPC:SOD: 0xbc79e0c8 action 3 state 0 chunk (nil) regq 0xbc79e108 batq 0xbc79e118GSIPC:kjcs.c@5293:kjcsombdi(): GSIPC:SOD: exit cleanup for 0xbc79e0c8 rc: 1, loc: 0×303kss.c@1414:kssdch(): [10809:2:1] kssdch(0xbd516b80 = process, 3) 1 0 exit 0xbd516b80??PROCESS A ?paddr ???? kssdch???????? ??process???state object SO KSS: delete children of state obj. PMON ??kmmlrl()????instance goodness??update for session drop deltakmmlrl(): [10257:2:1] KMMLRL: Entering: flg(0×0) rflg(0×4)kmmlrl_process_events(): [10257:2:1] KMMLRL: Events: succ(3) wait(0) fail(0)kmmlrl(): [10257:2:1] KMMLRL: Reg/update: flg(0×0) rflg(0×4)kmmlrl(): [10257:2:1] KMMLRL: Update: ret(0)kmmlrl(): [10257:2:1] KMMLRL: Exiting: flg(0×0) rflg(0×4) ????????PMON???? 3s???”pmon timer”??kslwtbctx(): [10005:2:1] KSL WAIT BEG [pmon timer] 300/0x12c 0/0×0 0/0×0 wait_id=78 seq_num=79 snap_id=1

    Read the article

  • What speed are Wi-Fi management and control frames sent at?

    - by Bryce Thomas
    There are a bunch of different 802.11 Wi-Fi standards, e.g. 802.11a, 802.11b, 802.11g, 802.11n etc. that all support different speeds. Wi-Fi frames are generally categorised as one of the following: Data frames - carry the actual application data Control frames - coordinate when its safe to send/reduce collisions Management frames - handle connection discovery/setup/tear down (e.g. AP discovery, association, disassociation) My question is about whether all these frames, and specifically management frames, are transmitted at the fastest supported speed available, or whether certain classes of frames are transmitted at some lowest common denominator speed. I have noticed that when I put an 802.11b/g only device into monitor mode and capture traffic over the air, I still see management frames (e.g. association/disassociation) being transmitted between my phone and AP which are both 802.11n, even though 802.11n has a higher transfer rate. So I am imagining one of two possibilities: My 802.11n phone/AP had to negotiate a slower speed for some reason and that's why I can see their frames on my 802.11b/g monitoring device. Management frames (and perhaps control frames also?) are sent at a lower speed, and it's only data frames that are transmitted faster with newer 802.11 standards. The reason I would like to know which one of these two possibilities (or perhaps a third possibility) is the case is that I want to capture management frames, and need to know whether using an 802.11b/g card is going to lead to me missing some frames sent at higher speeds than the monitoring card can observe. If management frames are indeed sent at a slower rate, then it's all good. If I just happen to be seeing the management frames because my phone/AP have negotiated a slower rate though, then I need to reconsider what card I use for packet capture.

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help The Microsoft Development Network Community Center ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it" Anne said. 
There is less emphasis on formal titles. Anne mentions that her own title "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot explore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. 
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • Can't remove burg theme packages

    - by Lassi
    Today, after trying to install and remove BURG and a few themes, I ran into an issue. Now I can't install or remove anything. Here is the output (originally partly in Finnish, as I couldn't change the language since it also seems to depend on the package listings): lassi@lassi-ubuntu:~$ sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: burg-theme-fortune burg-theme-gnome burg-theme-picchio 0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded. 3 not fully installed or removed. After this operation, 7,180 kB of disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 166462 files and directories currently installed.) Removing burg-theme-fortune ... sudo: update-burg: command not found dpkg: error processing burg-theme-fortune (--remove): subprocess installed post-removal script returned error exit status 1 Removing burg-theme-gnome ... sudo: update-burg: command not found dpkg: error processing burg-theme-gnome (--remove): subprocess installed post-removal script returned error exit status 1 Removing burg-theme-picchio ... sudo: update-burg: command not found dpkg: error processing burg-theme-picchio (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: burg-theme-fortune burg-theme-gnome burg-theme-picchio E: Sub-process /usr/bin/dpkg returned an error code (1) Basically what seems to happen is this: it creates the package lists, then tries to remove the package burg-theme-fortune. This fails because the update-burg command was not found. dpkg then reports an error while processing the package. The same goes for all 3 packages. In the end it claims that there were too many errors, and the packages stay installed. I also tried installing burg, since the scripts try to run the update-burg command, but it appears that it always tries to remove these packages whenever I try to install or remove or do anything with apt. Any ideas how I could solve this issue? Edit: Here is the output of apt-get install burg (tried installing again to get English output) lassi@lassi-ubuntu:~$ LC_ALL=C sudo apt-get install burg [sudo] password for lassi: Reading package lists... Done Building dependency tree Reading state information... Done burg is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0 B/6169 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 167497 files and directories currently installed.) Preparing to replace burg-theme-fortune 0.5.0-1 (using .../burg-theme-fortune_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-fortune ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-gnome 0.5.0-1 (using .../burg-theme-gnome_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-gnome ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-picchio 0.5.0-1 (using .../burg-theme-picchio_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-picchio ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) lassi@lassi-ubuntu:~$

    Read the article

  • SQL SERVER – Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video

    - by pinaldave
    Copying data from one table to another is one of the most frequently asked questions on forums, Facebook and Twitter. The question has come in many formats, and in many places I have seen developers using a cursor instead of this direct method. I wrote a similar article a few years ago - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE. The article has been very popular and I have received many interesting and constructive comments. However, two specific comments kept ending up in my mailbox: 1) the SQL Server AdventureWorks sample database does not have the table I used in the example, and 2) is there a video tutorial of the same example? After careful thought I decided to build a new set of scripts for the example, very similar to the old ones, along with a video tutorial. There was no better place than our SQL in Sixty Seconds series to cover this small but interesting concept. Let me know what you think of this video. Here is the updated script. -- Method 1 : INSERT INTO SELECT USE AdventureWorks2012 GO ----Create TestTable CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100)) ----INSERT INTO TestTable using SELECT INSERT INTO TestTable (FirstName, LastName) SELECT FirstName, LastName FROM Person.Person WHERE EmailPromotion = 2 ----Verify that Data in TestTable SELECT FirstName, LastName FROM TestTable ----Clean Up Database DROP TABLE TestTable GO --------------------------------------------------------- --------------------------------------------------------- -- Method 2 : SELECT INTO USE AdventureWorks2012 GO ----Create new table and insert into table using SELECT INTO SELECT FirstName, LastName INTO TestTable FROM Person.Person WHERE EmailPromotion = 2 ----Verify that Data in TestTable SELECT FirstName, LastName FROM TestTable ----Clean Up Database DROP TABLE TestTable GO Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE Powershell – Importing CSV File Into Database – Video SQL SERVER – 2005 – Export Data From SQL Server 2005 to Microsoft Excel Datasheet SQL SERVER – Import CSV File into Database Table Using SSIS SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server SQL SERVER – 2005 – Generate Script with Data from Database – Database Publishing Wizard What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • Running a scheduled task as SYSTEM with console window open

    - by raoulsson
    I am auto-creating scheduled tasks with this line within a Windows batch script: schtasks /Create /RU SYSTEM /RP SYSTEM /TN startup-task-%%i /TR %SPEEDWAY_DIR%\%TARGET_DIR%%%i\%STARTUPFILE% /SC HOURLY /MO 1 /ST 17:%%i1:00 I wanted to avoid using specific user credentials and thus decided to use SYSTEM. Now, when checking in the task manager's process list or, even better, directly with the C:\> schtasks command itself, all is working well and the tasks are running as intended. However, in this particular case I would like to have an open console window where I can see the log flying by. I know I could use C:\> tail -f thelogfile.log if I installed e.g. cygwin (on all machines) or some proprietary tools like Baretail on Windows. But since I only switch to these machines in case of trouble, I would prefer to start the scheduled task in such a way that every user immediately sees the log. Any chance? Thanks!

    Read the article

  • Backup Meta-Data

    - by BuckWoody
    I'm working on a PowerShell script to show me the trending durations of my backup activities. The first thing I need is the data, so I looked at the Standard Reports in SQL Server Management Studio, and found a report that suited my needs, so I pulled out the script that it runs and modified it to this T-SQL Script. A few words here - you need to be in the MSDB database for this to run, and you can add a WHERE clause to limit to a database, timeframe, type of backup, whatever. For that matter, I won't use all of the data in this query in my PowerShell script, but it gives me lots of avenues to graph: SELECT distinct t1.name AS 'DatabaseName' ,(datediff( ss,  t3.backup_start_date, t3.backup_finish_date)) AS 'DurationInSeconds' ,t3.user_name AS 'UserResponsible' ,t3.name AS backup_name ,t3.description ,t3.backup_start_date ,t3.backup_finish_date ,CASE WHEN t3.type = 'D' THEN 'Database' WHEN t3.type = 'L' THEN 'Log' WHEN t3.type = 'F' THEN 'FileOrFilegroup' WHEN t3.type = 'G' THEN 'DifferentialFile' WHEN t3.type = 'P' THEN 'Partial' WHEN t3.type = 'Q' THEN 'DifferentialPartial' END AS 'BackupType' ,t3.backup_size AS 'BackupSizeKB' ,t6.physical_device_name ,CASE WHEN t6.device_type = 2 THEN 'Disk' WHEN t6.device_type = 102 THEN 'Disk' WHEN t6.device_type = 5 THEN 'Tape' WHEN t6.device_type = 105 THEN 'Tape' END AS 'DeviceType' ,t3.recovery_model  FROM sys.databases t1 INNER JOIN backupset t3 ON (t3.database_name = t1.name )  LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id ) LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id ) ORDER BY backup_start_date DESC I'll munge this into my Excel PowerShell chart script tomorrow. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • dpkg error : old pre-removal script returned error exit status 102

    - by Siva Prasad Varma
    I am unable to install or remove a package on my Ubuntu 10.04 due to the following error. $ sudo apt-get autoremove Password: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: busybox 0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded. 1 not fully installed or removed. Need to get 0B/212kB of archives. After this operation, 627kB disk space will be freed. Do you want to continue [Y/n]? y Selecting previously deselected package nscd. (Reading database ... 235651 files and directories currently installed.) Preparing to replace nscd 2.11.1-0ubuntu7.8 (using .../nscd_2.11.1-0ubuntu7.8_amd64.deb) ... invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd dpkg: warning: old pre-removal script returned error exit status 102 dpkg - trying script from the new package instead ... invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd dpkg: error processing /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb (--unpack): subprocess new pre-removal script returned error exit status 102 update-rc.d: warning: /etc/rc2.d/S76nscd is not a symbolic link invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 102 Errors were encountered while processing: /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) What should I do to resolve this error. I have tried sudo dpkg --remove --force-remove-reinstreq nscd but it did not work.

    Read the article
