Search Results

Search found 5237 results on 210 pages for 'lightweight processes'.

Page 33 of 210

  • The Social Business Thought Leaders - Ray Wang

    - by kellsey.ruppel
    It seems both consumers and businesses are at the peak of the social hype. Overwhelmed by social media channels, platforms, and processes in both their private and professional lives, many early adopters are starting to feel social fatigue. Mirroring what happened with email and web sites during the late 1990s and early 2000s, more and more managers are looking to move from ubiquitous social media tactics to the most appropriate business use cases and processes. This step becomes even more important considering the year-over-year contraction in IT budgets and the consequent need to maximize the return on every dollar spent on new technologies.

    Ray Wang, CEO and Principal Analyst at Constellation Research, suggests engagement through collaborative technologies both as a conceptual model and as a transformational tool for enterprises to reap business value. Without participation, the reasoning goes, there is no value, and good technology alone is not enough to guarantee employee and customer adoption. Enterprise gamification is a new lever for succeeding with Social Business by directing a critical mass of participation towards desired outcomes. What kind of outcomes? A recent study from Constellation Research (see 2012 Q1 Gamification Early Adopters Best Practices) highlights how Marketing, Customer Service and HR are leading the pack with gamification in processes such as:

      - Sustaining long-term customer loyalty (76.4%)
      - Improving response in campaign to lead (74.5%)
      - Right-channeling incidents for resolution in social media (67.3%)
      - Growing the number of service and support incidents resolved by the community (63.6%)
      - Improving employee referral rates and effective recruiting (43.6%)
      - Driving on-boarding success with new hires (20%)

    More than simply adding badges, points and leaderboards to existing processes, enterprise gamification should be holistically embedded into the employee and customer experience to stimulate specific behaviors. According to Ray Wang this can be done at three core levels:

      - Measurable actions. The behaviors we want to facilitate consist of granular actions (e.g. likes, comments, posts, recommendations) and more complex actions (e.g. projects, initiatives, programmes) attributed to individuals, groups and/or external actors.
      - Reputation. The reputation an individual has earned through these actions is a key factor in building motivation among others; it is determined by identity, social standing, status and competitiveness.
      - Incentives, or the intrinsic and extrinsic rewards that motivate behaviors and drive actions.

    Listen to Ray Wang's video interview to learn more about the dynamics that are shaping the future of collaboration and how gamification can help organizations attain new levels of engagement.

    Read the article

  • Lubuntu: neither shut-down nor restart works

    - by Rantanplan
    I have a freshly installed Lubuntu 14.04.1 (installed with the forcepae option on a laptop with a Pentium M processor). The only problem that I have found so far is that I cannot shut down or restart the laptop. It always keeps showing "Lubuntu" and some dots. Pressing Esc it says:

      wait-for-state stop/waiting
      * Stopping rsync daemon rsync [OK]
      * Asking all remaining processes to terminate… [OK]
      * Killing all remaining processes… [fail]
      ModemManager [597] : <info> Caught signal, shutting down…
      ModemManager [597] : <info> ModemManager is shut down
      nm-dispatcher.action: Could not get the system bus. Make sure the message bus daemon is running! Message: Did not receive a reply. Possible causes: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
      * Deactivating swap… [OK]
      * Will now halt

    The cursor remains blinking, but the only way to switch the laptop off is to hold the power button pressed for some seconds. I tried sudo shutdown -h now, sudo halt and sudo poweroff, all resulting in the same problem. I also tried adding acpi=force to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" in /etc/default/grub and running sudo update-grub; then, using the taskbar's shut-down button led to an immediate stop of the laptop, equal to holding the power button pressed for some seconds. Next I followed the answer http://askubuntu.com/a/202481/288322. Now I directly receive some messages during shut-down, starting with:

      wait-for-state stop/waiting
      * Stopping rsync daemon rsync [OK]
      * Asking all remaining processes to terminate… [OK]
      [ 240.944277] INFO: task kworker/0:2:24: block for more than 120 seconds.
      [ 240.944461] Tainted: G S 3.13.0-34-generic #60-Ubuntu
      [ 240.944623] "echo 0 > /proc/sys/kernel/hung_tasks_timeout_secs" disables this message.

    followed by some more similar lines and then:

      * Killing all remaining processes… [fail]
      ModemManager [576] : <info> Caught signal, shutting down…
      nm-dispatcher.action: Caught signal 15, shutting down...
      ModemManager [576] : <info> ModemManager is shut down
      * Deactivating swap… [OK]
      * Will now halt
      [ 600.944276] INFO: task kworker/0:2:24: block for more than 120 seconds.
      [ 600.944458] Tainted: G S 3.13.0-34-generic #60-Ubuntu
      [ 600.944619] "echo 0 > /proc/sys/kernel/hung_tasks_timeout_secs" disables this message.

    Then nothing more came during the next 5 minutes. If you know where I can find relevant error information, I will be happy to search for it.

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I'm going to look at a couple of ways of getting this information and compare the effort required and the results achieved for each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these, called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don't already know how it works. When it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I'm relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

      $Server = 'SERVERNAME'
      $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server

      # connection and query stuff
      $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
      $Query = "EXEC sp_who2"

      $Connection = new-object system.Data.SQLClient.SQLConnection
      $Table = new-object "System.Data.DataTable"
      $Connection.connectionstring = $ConnectionStr
      try{
          $Connection.open()
          $Command = $Connection.CreateCommand()
          $Command.commandtext = $Query
          $result = $Command.ExecuteReader()
          $Table.Load($result)
      }
      catch{
          # Show error
          $error[0] | format-list -Force
      }
      $Title = "Data access processes (" + $Table.Rows.Count + ")"
      $Table | Out-GridView -Title $Title
      $Connection.close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display the results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me. I have added a few parameters to control the output and give me more specific details, but then I saw a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications.

      $Server = 'SERVERNAME'
      $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
      $Processes = $SMOServer.EnumProcesses()
      $Title = "SMO processes (" + $Processes.Rows.Count + ")"
      $Processes | Out-GridView -Title $Title

    Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different, though. Some columns are the same and we can see the same basic information, so my first thought was to check which runs faster – so that I can get my results more quickly and also place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and, while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ.

      sp_who2 column   EnumProcesses column   Description
      -                Urn                    What looks like an XML or JSON representation of the server name and the process ID
      SPID             Spid                   The process ID
      Status           Status                 The status of the process
      Login            Login                  The login name of the user executing the command
      HostName         Host                   The name of the computer where the process originated
      BlkBy            BlockingSpid           The SPID of a process that is blocking this one
      DBName           Database               The database that this process is connected to
      Command          Command                The type of command that is executing
      CPUTime          Cpu                    The CPU activity related to this process
      DiskIO           -                      The disk IO activity related to this process
      LastBatch        -                      The time the last batch was executed from this process
      ProgramName      Program                The application that is facilitating the process connection to the SQL Server
      SPID1            -                      In my experience this is always the same value as SPID
      REQUESTID        -                      In my experience this is always 0
      -                Name                   In my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2
      -                MemUsage               An indication of the memory used by this process, but I don't know what it is measured in (bytes, KB, MB…)
      -                IsSystem               True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
      -                ExecutionContextID     In my experience this is always 0, so could be analogous to REQUESTID from sp_who2

    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I would use that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis too. So which is better?
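    For illustration, a rough sketch of the Measure-Command wrappers described above might look like this (this is not the original timing script; SERVERNAME is a placeholder and it assumes the SMO assemblies are already available, as in the scripts above):

      # Sketch: timing both approaches with Measure-Command (SERVERNAME is a placeholder)
      $Server = 'SERVERNAME'

      $queryTime = Measure-Command {
          # ... the sp_who2 / SqlClient version shown above goes here ...
      }

      $smoTime = Measure-Command {
          $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
          $SMOServer.EnumProcesses() | Out-GridView -Title "SMO processes"
      }

      "sp_who2 query : {0:N0} ms" -f $queryTime.TotalMilliseconds
      "EnumProcesses : {0:N0} ms" -f $smoTime.TotalMilliseconds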
On reflection I think I prefer to use the sp_who2 method primarily but knowing that the SMO Enumprocesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.

    Read the article

  • Parallel MSBuild FTW - Build faster in parallel

    - by deadlydog
    Hey everyone, I just discovered this great post yesterday that shows how to have MSBuild build projects in parallel. Basically all you need to do is pass the switches "/m:[NumOfCPUsToUse] /p:BuildInParallel=true" into MSBuild. Example to use 4 cores/processes (if you just pass in "/m" it will use all CPU cores):

      MSBuild /m:4 /p:BuildInParallel=true "C:\dev\Client.sln"

    Obviously this trick will only be useful on PCs with multi-core CPUs (which we should all have by now) and solutions with multiple projects, so there's no point using it for solutions that only contain one project. Also, testing shows that using multiple processes does not speed up Team Foundation Database deployments either, in case you're curious. Also, I found that if I didn't explicitly use "/p:BuildInParallel=true" I would get many build errors (even though the MSDN documentation says that it is true by default). The poster boasts compile time improvements up to 59%, but the performance boost you see will vary depending on the solution and its project dependencies. I tested by building a solution at my office, and here are my results (runs are in seconds):

      # of Processes   1st Run   2nd Run   3rd Run   Avg      Performance
      1                192       195       200       195.67   100%
      2                155       156       156       155.67   79.56%
      4                146       149       146       147.00   75.13%
      8                136       136       138       136.67   69.85%

    So I updated all of our build scripts to build using 2 cores (~20% speed boost), since that gives us the biggest bang for our buck on our solution without bogging down a machine, and developers may sometimes compile more than one solution at a time. I've put the any-PC-safe batch script code at the bottom of this post. The poster also has a follow-up post showing how to add a button and keyboard shortcut to the Visual Studio IDE to have VS build in parallel as well (so you don't have to use a build script); if you do this, make sure you use the .NET 4.0 MSBuild, not the 3.5 one that he shows in the screenshot. While this did work for me, I found it left an MSBuild.exe process hanging around afterwards for some reason, so watch out (the batch file doesn't have this problem, though). Also, you do get build output, but it may not be the same that you're used to, and it doesn't say "Build succeeded" in the status bar when completed, so I chose not to make this my default Visual Studio build option, but you may still want to. Happy building!

      :: Calculate how many Processes to use to do the build.
      SET NumberOfProcessesToUseForBuild=1
      SET BuildInParallel=false
      if %NUMBER_OF_PROCESSORS% GTR 2 (
          SET NumberOfProcessesToUseForBuild=2
          SET BuildInParallel=true
      )
      MSBuild /maxcpucount:%NumberOfProcessesToUseForBuild% /p:BuildInParallel=%BuildInParallel% "C:\dev\Client.sln"
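    If your build scripts are PowerShell rather than batch, a minimal sketch of the same CPU-count logic might look like the following (an illustration only; it assumes msbuild.exe is on the PATH and the solution path is a placeholder):

      # Sketch: the same check as the batch script above, in PowerShell
      $procs = 1
      $parallel = 'false'
      if ([int]$env:NUMBER_OF_PROCESSORS -gt 2) {
          $procs = 2
          $parallel = 'true'
      }
      & msbuild "/maxcpucount:$procs" "/p:BuildInParallel=$parallel" "C:\dev\Client.sln"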

    Read the article

  • Lifecycle Technology Delivers AutoVue Visualization Integration for SAP

    Lifecycle Technology is an Oracle development partner and has built a connector between Oracle's AutoVue visualization solution and SAP. Their area of expertise lies in integrating AutoVue visualization and printing solutions with SAP business processes within Asset Lifecycle Management, Product Lifecycle Management, and Document Management systems. Lifecycle visually enables a variety of SAP workflows and processes in manufacturing, plant maintenance, and production. Their solutions allow SAP enterprise customers to view technical or engineering documents in the appropriate business context and/or print them as required in their workflows, improving productivity and decision making.

    Read the article

  • Monitoring C++ applications

    - by Scott A
    We're implementing a new centralized monitoring solution (Zenoss). Incorporating servers, networking, and Java programs is straightforward with SNMP and JMX. The question, however, is what are the best practices for monitoring and managing custom C++ applications in large, heterogeneous (Solaris x86, RHEL Linux, Windows) environments? Possibilities I see are:

      Net-SNMP
        Advantages: single, central daemon on each server; well-known standard; easy integration into monitoring solutions; we run Net-SNMP daemons on our servers already.
        Disadvantages: complex implementation (MIBs, Net-SNMP library); new technology to introduce for the C++ developers.

      rsyslog
        Advantages: single, central daemon on each server; well-known standard; simple implementation. Integration into monitoring solutions is an unknown (I know they can do alerts based on text, but how well would it work for sending telemetry like memory usage, queue depths, thread capacity, etc.?).
        Disadvantages: possible integration issues; somewhat new technology for C++ developers; possible porting issues if we switch monitoring vendors; probably involves coming up with an ad-hoc communication protocol (or using RFC 5424 structured data; I don't know if Zenoss supports that without custom ZenPack coding).

      Embedded JMX (embed a JVM and use JNI)
        Advantages: consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions; somewhat simple implementation (we already do this today for other purposes).
        Disadvantages: complexity (JNI, a thunking layer between native C++ and Java, basically writing the management code twice); possible stability problems; requires a JVM in each process, using considerably more memory; JMX is new technology for C++ developers; each process has its own JMX port (we run a lot of processes on each machine).

      Local JMX daemon that processes connect to
        Advantages: single, central daemon on each server; consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions.
        Disadvantages: complexity (basically writing the management code twice); need to find or write such a daemon; need a protocol between the JMX daemon and the C++ process; JMX is new technology for C++ developers.

      CodeMesh JunC++ion
        Advantages: consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions; single, central daemon on each server when run in shared-JVM mode; somewhat simple implementation (requires code generation).
        Disadvantages: complexity (code generation, requires a GUI and several rounds of tweaking to produce the proxied code); possible JNI stability problems; requires a JVM in each process, using considerably more memory (in embedded mode); does not support Solaris x86 (deal breaker); even if it did support Solaris x86, there are possible compiler compatibility issues (we use an odd combination of STLport and Forte on Solaris); each process has its own JMX port when run in embedded mode (we run a lot of processes on each machine); possibly precludes a shared JMX server for non-C++ processes (?).

    Is there some reasonably standardized, simple solution I'm missing? Given no other reasonable solutions, which of these is typically used for custom C++ programs? My gut feel is that Net-SNMP is how people do this, but I'd like others' input and experience before I make a decision.

    Read the article

  • Simple rendering produces minor stutter

    - by Ben
    For some reason, this game loop renders the movement of a simple rectangle with no stuttering:

      double currTime;
      double prevTime = System.nanoTime() / NANO_TO_SEC;
      double FPSTIMER = System.nanoTime();
      double maxTimeDiff = 100.0 / 1000.0;
      double delta = 1.0 / 60.0;
      int processes = 0, frames = 0;
      while(true){
          currTime = System.nanoTime() / NANO_TO_SEC;
          if(currTime - prevTime > maxTimeDiff)
              prevTime = currTime;
          if(currTime >= prevTime){
              process();
              processes++;
              prevTime += delta;
              if(currTime < prevTime){
                  render();
                  frames++;
              }
          }
          else{
              try{
                  Thread.sleep((long) (1000 * (prevTime - currTime)));
              } catch(Exception e){}
          }
          if(System.nanoTime() - FPSTIMER > 1000000000.0){
              System.out.println("Process: " + (1000 / processes) + "ms FPS: " + (1000 / frames) + "ms");
              processes = frames = 0;
              FPSTIMER += 1000000000.0;
          }
      }

    But for this game loop, I get really minor stuttering where the movement does not look smooth:

      long prevTime = System.currentTimeMillis();
      long prevRenderTime = 0;
      long currRenderTime = 0;
      long delta = 0;
      long msPerTick = 1000 / 60;
      int frames = 0;
      int ticks = 0;
      double FPSTIMER = System.currentTimeMillis();
      while (true){
          long currTime = System.currentTimeMillis();
          delta += (currTime - prevTime) / msPerTick;
          prevTime = currTime;
          while (delta >= 1){
              ticks++;
              process();
              delta -= 1;
          }
          prevRenderTime = System.currentTimeMillis();
          render();
          frames++;
          currRenderTime = System.currentTimeMillis();
          try{
              Thread.sleep((long) ((1000 / FPS) - (currRenderTime - prevRenderTime)));
          } catch(Exception e){}
          if(System.currentTimeMillis() - FPSTIMER > 1000.0){
              System.out.println("Process: " + (1000.0 / ticks) + "ms FPS: " + (1000.0 / frames) + "ms");
              ticks = frames = 0;
              FPSTIMER += 1000.0;
          }
      }

    Is there any critical difference that I'm missing here? The one thing I noticed is that if I uncap the FPS for the second game loop, the stuttering goes away, which doesn't make sense to me. Also, the second game loop came from Notch's Minicraft code, with just my thread-sleeping code added in.

    Read the article

  • What is the value of an Enterprise Resource Planning (ERP) System?

    According to PWC.com, ERP systems can add tremendous value to a company's core business functionality. Below, PWC.com summarizes the primary value that an ERP can add to a company. ERPs are a collection of business applications that coordinate the resources, information, and activities required for core business processes. ERPs are strategic tools used to reduce costs, improve business processes, and enable healthier risk management.

    Read the article

  • How to Research Keywords - 3 Truths the Gurus Withheld

    It turns out the first valuable lesson I picked up on how to research keywords was that information overload is the fastest way to kill your dreams when it comes to doing anything in your life, especially anything involving your business. Please, please don't make the mistakes that I have! There is no science and there are no complex formulas; there are simply processes that work if you work the processes.

    Read the article

  • Ubuntu 12.04 partial freeze

    - by user1550594
    While working on the system, it suddenly starts hanging or responding slowly, for example when I open a random file or start browsing, and at other random moments. In this situation, I am able to open a virtual console using Ctrl + Alt + F1-F6, run top and list the processes. Due to the urgency, I kill the long-running processes or the processes using too much memory in order to recover the screen. Can anyone give me a permanent fix?

    Read the article

  • Introduction to Agile Development

    - by Grant Fritchey
    Even though my current job is a little weird, I still consider myself to be a DBA. I didn't start that way in IT. I came through support and into development. I loved development. There was a constant struggle to attempt to improve your code, your understanding, and, most importantly, the process of development itself. Development can be slow and tedious. Left alone, developers can simply disappear to build a project and not come back for two years, at which time they deliver it. But maybe that software isn't what you wanted, or it's no longer needed, or who knows what. So developers are constantly attempting to improve their processes in order to deliver more relevant software more quickly (something DBAs could learn about). I really admire it.

    One of the many processes that has come out of that constant striving is known as Agile. As the name implies, Agile development attempts to come up with a quick, fast-turning, business-aware, well, for want of a word, agile, process that is more responsive to the needs of the business. There are tons and tons of books and blogs and videos on the subject that can get you going. But Agile isn't easy (note, Easy is not part of the name). Agile processes can be hard. I've worked on multiple Agile teams, some successful, some not. The two principal differences between the teams were their discipline and their knowledge of the process. Discipline, that comes from within. But knowledge, ah, well there I can help.

    Red Gate is bringing a series of free instructional events to the United States in a few weeks' time, focused primarily on SQL Server (click here right now to register while there's still space). We're also offering some .NET instruction too. That's a full day, free, with top experts in the business. But the next day there's a full-day session introducing Agile. You can go to this and learn how to do Agile, and develop the knowledge that will enable you to use the Agile process successfully. Go to this web site to check it out. No, this event is not free, but not everything can be. And it's not just for developers. DBAs, you need to learn this stuff too. Management could also benefit from understanding these processes (because you guys can help to enforce discipline). It's really for everyone involved in the development process.

    Read the article

  • Is "no installation" software a good thing?

    - by Yaron Naveh
    I am building an application that will, hopefully, be used by developers. To be appealing to developers I want it to be lightweight, small in size, and installation-free (e.g. xcopy deployment). I trust an application without an installer more: it won't put garbage in my registry, it stays lightweight, and so on. My friend thinks the opposite: an installer puts shortcuts on the desktop / menu for me, it ensures cleanup via the uninstaller, and it seems more official. I'm curious - what is everyone's take on this?

    Read the article

  • Ubuntu Studio/Xubuntu 12.04 instead of Ubuntu 12.04/12.10 for CAD/architectural workflow. Worth it?

    - by gabriel
    I am currently using Ubuntu 12.10. As described in the title, I am planning to install Ubuntu Studio. The programs I use are Blender, Maya 2013, NukeX, Bricscad and Sketchup (with Wine), and I am also planning to install Revit Architecture through VirtualBox. I am using a quad-core CPU and I want to have all the power of my system available for rendering/modelling, so I decided to try a more lightweight desktop than Unity. Another thing that made me decide this is that when I tried to install Bricscad v12, the program did not work. So I thought that if I want something more professional for my work, I should stick to LTS versions of a lightweight Ubuntu. My 2 questions are: 1) Is it worth it? 2) Can I have a global menu (close, minimize, maximize buttons, menu) like Ubuntu/Unity? Thanks

    Read the article

  • Apache2/mod_fcgid/PHP Process Limits Not Respected

    - by Daniel
    I've recently moved to Apache2 / mod_fcgid / PHP from nginx / php_fpm. This is the second server on which I've made this migration, but it's used much less frequently than the first, which is working like a charm. The problem is in the PHP processes that it's spawning. Looking at the mod_fcgid documentation, it appears that the default for killing idle processes is 300 seconds; I've changed that to 20. At this point, I'd be fine if 300 would work - but it's not happening. It's been running for nearly a day now, and server-status shows 12 active processes:

      Process name: php5
      Pid    Active   Idle    Accesses   State
      19243  84879    14420   11         Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      20954  82143    149     22         Ready
      20947  82149    149     22         Ready
      20953  82143    149     13         Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      20589  82765    23644   72         Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      17663  86103    2034    117        Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      19862  83961    1976    91         Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      18495  85825    5164    18         Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      25463  75109    23948   24         Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      2466   60019    60016   2          Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      20729  82541    12592   23         Ready

      Process name: php5
      Pid    Active   Idle    Accesses   State
      22135  80616    46361   6          Ready

    PHP applications are not being served at this point - Apache is returning a 503. However, it is still serving the server-status module, and mod_mono/Mono 2.10 applications are still being served. The problem is with the PHP. /etc/apache2/mods-available/fcgid.conf:

      <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          FcgidConnectTimeout 10
          FcgidMaxRequestsPerProcess 500
          FcgidIdleTimeout 20
          FcgidFixPathinfo 1
          FcgidMaxProcesses 10
      </IfModule>

    (heh - MaxProcesses isn't being respected either...) Of course, fcgid.conf is symlinked in mods-enabled.

    Read the article

  • "Password Server: Stopped" on Mac OS Lion Server. Stops with error -1 during startup

    - by V1ru8
    Since I restored the Open Directory from an archive (my server had crashed and the DB was corrupt), the password server does not start anymore. The log looks like this:

      Feb 14 2012 21:41:20 156746us Mac OS X Password Service version 376.1 (pid = 2438) was started at: Tue Feb 14 21:41:20 2012.
      Feb 14 2012 21:41:20 156801us RunAppThread Created
      Feb 14 2012 21:41:20 156852us RunAppThread Started
      Feb 14 2012 21:41:20 156879us Initializing Server Globals ...
      Feb 14 2012 21:41:20 163094us Initializing Networking ...
      Feb 14 2012 21:41:20 163196us Initializing TCP ...
      Feb 14 2012 21:41:20 191790us SASL is using realm "SERVER.HOME.POST-NET.CH"
      Feb 14 2012 21:41:20 191847us Starting Central Thread ...
      Feb 14 2012 21:41:20 191860us Starting other server processes ...
      Feb 14 2012 21:41:20 191873us StartCentralThreads: 1 threads to stop
      Feb 14 2012 21:41:20 191905us Initializing TCP ...
      Feb 14 2012 21:41:20 191954us Starting TCP/IP Listener on ethernet interface, port 106
      Feb 14 2012 21:41:20 192012us Starting TCP/IP Listener on ethernet interface, port 3659
      Feb 14 2012 21:41:20 192048us Starting TCP/IP Listener on interface lo0, port 106
      Feb 14 2012 21:41:20 192082us Starting TCP/IP Listener on interface lo0, port 3659
      Feb 14 2012 21:41:20 192117us StartCentralThreads: Created 4 TCP/IP Connection Listeners
      Feb 14 2012 21:41:20 192132us Starting UNIX domain socket listener /var/run/passwordserver
      Feb 14 2012 21:41:20 193034us CRunAppThread::StartUp: caught error -1.
      Feb 14 2012 21:41:20 193056us ** ERROR: The Server received an error during startup. See error log for details.
      Feb 14 2012 21:41:20 193075us RunAppThread::StartUp() returned: 4294967295
      Feb 14 2012 21:41:20 193107us Stopping server processes ...
      Feb 14 2012 21:41:20 193119us Stopping Network Processes ...
      Feb 14 2012 21:41:20 193131us Deinitializing networking ...
      Feb 14 2012 21:41:20 193149us Server Processes Stopped ...
      Feb 14 2012 21:41:20 193165us RunAppThread Stopped
      Feb 14 2012 21:41:20 193202us Aborting Password Service. See error log.

    The error log repeats the following:

      Feb 14 2012 21:41:50 409022us Server received error -1 during startup.
      Feb 14 2012 21:41:50 409141us Aborting Password Service.

    Does anyone have an idea what's wrong here and how I can fix this?

    Read the article

  • How to tell which process is hogging my CPU when they don't add up to 100%?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the resources tab shows it at 100% continuously, too. If I go to processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. The individual processes do not add up to 100%. I try killing lots of processes, but the overall usage continues to be 100%. How can I find out what's hogging the CPU? This is an unusual situation on a computer I use daily, which is never anywhere near 100% CPU unless I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. There is no reason the processor should be maxed out. I'm not sure when it started or if I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading. Update: I tried killing every process, one at a time, until the problem went away, and killing vino-server finally fixed it, even though that process never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). But the question remains: How did a single process manage to use 100% CPU while top only showed that process at 5%? How do I identify culprits like this in the future? Looks like I'm not the only one who's had this problem: Still a problem in both jaunty & karmic. Interestingly, both System Monitor & htop do not show the sum of individual processes being anywhere near 100% cpu.

    Read the article

  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12 GB of RAM. I often have multiple windows and a ton of tabs open, and I use the extension Session Buddy to restore all my windows and tabs once memory usage gets too high. My 12 GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome down, restore the same amount of windows and tabs, and it will only use about 25% of memory, which then increases back up to the 90% zone over several hours. It seems that when I close tabs, the memory is not freed up, so usage keeps climbing as new tabs are opened and closed - it just adds up, which sounds like a huge bug in Chrome. Just as an example, I re-booted my system with only 1 window and 4 tabs open, and the task manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and it made 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix? UPDATE: I just read that each plugin and extension creates a chrome.exe process, and I counted 24 extensions, so that helps explain a portion of the large number of processes. Still not sure about memory not being freed up though!
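    If you want to see how much memory all of those chrome.exe processes are actually holding, a quick PowerShell sketch along these lines can help (the figures it prints are whatever your machine reports, not numbers from the poster's setup):

      # Count chrome.exe processes and sum their working sets (values will vary)
      Get-Process chrome |
          Measure-Object WorkingSet64 -Sum |
          ForEach-Object { "{0} chrome.exe processes, {1:N0} MB total working set" -f $_.Count, ($_.Sum / 1MB) }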

    Read the article

  • How to determine if my AWS/EC2 server has been compromised / resolution?

    - by ElHaix
    I have recently seen an increase in network in/out activity on my server and am trying to determine if my AWS/EC2 instance has been compromised, and if so, how to resolve it. In my security group I have:

      Inbound: 80 (HTTP) 0.0.0.0/0
      Outbound: 80 (HTTP) 0.0.0.0/0, 443 (HTTPS) 0.0.0.0/0

    Using TCP-UDP Endpoint Viewer, I see a lot of w3wp.exe TCP processes with varying local ports (http and numbered), as well as varying remote ports. Some processes go red/yellow/green on updates. The remote address for most w3wp processes is my EC2 instance; however, I am seeing several to *.deploy.akamaitechnologies.com and *.deploy.static.akamaitechnologies.com with received bytes varying between 4-11 megs. I also see Ec2Config.exe with remote address 169.254.169.254, and a System Process with remote address fetcher4-4.p.mail.ru (how can I get rid of this one?!), local port http, remote port 33432. I am also seeing some system processes from 114.216-244-93-rdns.wowrack.com (protocol TCP, local port http, varying remote ports), as well as some baiduspider "System Process" entries. I'm afraid that my system may have been compromised, and I'm wondering if these results are any indication of that. If so, how can I eliminate these possible threats? I have MS Security Essentials installed.
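    One way to cross-check what the Endpoint Viewer shows is to list connections and their owning processes from an elevated PowerShell prompt; a rough sketch (the PID is a placeholder you would take from the netstat output):

      # List established TCP connections with owning PIDs (run as Administrator)
      netstat -ano | Select-String "ESTABLISHED"

      # Map a suspicious PID from the netstat output to its process name and image path
      Get-Process -Id 1234 | Select-Object Id, ProcessName, Path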

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
      - Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications.

      - Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system.

    Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes.

    The open-source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory and CPU cycles, as well as the disk and network throughput (in MB/s or IOP/s), available to an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or exclusive access to a network device. It's also possible to restrict a process group's access to, and visibility of, the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux Containers are based, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will follow in the second part of this article.

    Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy | Linux Containers on Wikipedia

    - Lenz Grimmer

    Read the article
