Search Results

Search found 26093 results on 1044 pages for 'process monitor'.


  • Dual monitor not working after an update

    - by Nimonika
    I did a package manager update yesterday, and it turns out that my dual monitor setup has stopped working. I have poor vision, so I really need to connect to a much bigger screen, but since yesterday, when I connect the external screen to my laptop, the displays are not set up automatically the way they used to be. Even after lots of trial and error with the display settings, I am getting different displays on the laptop and the external screen, and right now only the big screen is active while the laptop has blanked out. Please can someone help me set up my dual screens properly on 11.10? Output of lspci -v | grep -i vga: 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) (prog-if 00 [VGA controller])

    Read the article

  • 12.04 monitor brightness commands ignored

    - by jarvisschultz
    I cannot change the brightness of the monitor on a laptop, either from the command line or from keyboard shortcuts. I have verified that the file /sys/class/backlight/acpi_video0/brightness is changing, and I have also verified that I am running the nvidia drivers. Things I have tried: Adding entries to /etc/default/grub, so that the GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX entries now read
        GRUB_CMDLINE_LINUX_DEFAULT="quite nosplash acpi_backlight=vendor"
        GRUB_CMDLINE_LINUX="noapic"
    I have also tried various other commands (with an update-grub and reboot each time), but nothing has helped. I enabled brightness control in /etc/X11/xorg.conf so that it now looks like
        Section "Device"
            Identifier "Default Device"
            Driver "nvidia"
            Option "NoLogo" "True"
            Option "RegistryDwords" "EnableBrightnessControl=1"
        EndSection
    I also investigated installing linux-kamal-mjgbacklight and determined it was not applicable to my system. Nothing seems to have made any difference. I am using an Nvidia GeForce GT 330M with driver version 295.40. Does anyone have any other suggestions?

    Read the article

  • Windows move onto second monitor when I turn it on

    - by user1002379
    I'm using a small monitor as my main display, and I also have an LCD TV hooked up through HDMI as my secondary display. The problem is that whenever I turn on the TV, most (but usually not all) of my currently running program windows jump to the secondary display. This is very annoying, because I don't like to keep my TV on unless I'm using it, so whenever I do turn it on I have to drag each of the windows back onto my main display. I have played around with some Compiz settings to no avail. Please help.

    Read the article

  • Monitor resolution can't be saved

    - by Iztok
    Today I installed Lubuntu 13.10 on VMware Player (inside Windows). I changed the monitor resolution from the default 800x600 to 1680x1050, and it works. Besides Apply, I also pressed the Save button and "Changes are saved" appeared. But after a restart, the resolution is back to 800x600. I also opened /etc/xdg/lxsession/Lubuntu/autostart and added one line (it was empty before):
        @xrandr --mode 1680x1050
    After a restart the default resolution is back again. Any idea?

    Read the article

  • How can I tell which "explorer.exe" process is the main one?

    - by HodofHod
    I have a batch file that changes a few registry entries and then restarts explorer.exe so that they take effect. I'm using the command taskkill /f /im explorer.exe followed by explorer.exe, which of course kills all the explorer.exe processes, including the Explorer windows I have open. Obviously, I am using the option to Launch folder windows in a separate process. Is there any way I can determine which instance of explorer.exe is the main one, and just kill that?
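
    One approach worth sketching here (an editor's addition, not from the question itself): the "main" explorer.exe is the instance that owns the desktop shell window, so you can resolve its process ID through the Win32 GetShellWindow call and restart only that instance, leaving separate-process folder windows alone. A minimal C# sketch:
        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        class ShellExplorer
        {
            [DllImport("user32.dll")]
            static extern IntPtr GetShellWindow();

            [DllImport("user32.dll")]
            static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

            static void Main()
            {
                // The desktop/taskbar window belongs to the shell instance of explorer.exe.
                IntPtr shell = GetShellWindow();
                if (shell == IntPtr.Zero)
                {
                    Console.WriteLine("No shell window found.");
                    return;
                }

                uint pid;
                GetWindowThreadProcessId(shell, out pid);
                Process mainExplorer = Process.GetProcessById((int)pid);
                Console.WriteLine("Shell explorer.exe PID: " + mainExplorer.Id);

                // Kill only this instance, then start a new shell.
                // mainExplorer.Kill();
                // Process.Start("explorer.exe");
            }
        }
    From a batch file, the same idea works by feeding that PID to taskkill /f /pid instead of using /im, so the separate-process folder windows survive the restart.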

    Read the article

  • Why does starting a process in the background prevent me from recalling previous commands in the command prompt?

    - by leeand00
    I'm running asynchronous command prompt commands in cmd on Windows 7 64-bit, and for some reason this affects my ability to recall previous commands just after I run them. For example, I might run something like:
        start /B rmdir /Q /S .\some_massive_directory
    Next, I press the up arrow to recall the text of the previous command, but nothing happens. Is this because whatever data structure is holding my commands is locked by the process I sent to the background, or is it because I am running my command on a network-mapped drive?

    Read the article

  • Is an average RAM usage per Apache process of 43 MB "normal" for a Social Networking site? [closed]

    - by Programmer
    I have a Social Networking site that runs on a single LAMP Server that handles everything. The average RAM usage per Apache process is 43 MB. Is that amount roughly within the expected range for a Social Networking site, or is it too high? If it's too high, where and how can I look to bring that average number down? (If you need more details to determine whether it's within the expected range or not, just let me know and I'll edit my question to provide them as best I can.)

    Read the article

  • What's the difference between Process and ProcessStartInfo in C#?

    - by JimDel
    What's the difference between Process and ProcessStartInfo? I've used both to launch external programs, but there has to be a reason there are two ways to do it. Here are two examples:
        Process notePad = new Process();
        notePad.StartInfo.FileName = "notepad.exe";
        notePad.StartInfo.Arguments = "ProcessStart.cs";
        notePad.Start();
    and
        ProcessStartInfo startInfo = new ProcessStartInfo();
        startInfo.FileName = "notepad.exe";
        startInfo.Arguments = "ProcessStart.cs";
        Process.Start(startInfo);
    Thanks
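
    A short note and sketch from the editor (not part of the original post): ProcessStartInfo only describes how to launch something (file name, arguments, redirection flags), while Process represents the running OS process you can wait on, kill, or read output from. The static Process.Start(ProcessStartInfo) overload returns that Process object, so the two snippets above end up in the same place:
        using System.Diagnostics;

        class StartInfoDemo
        {
            static void Main()
            {
                ProcessStartInfo startInfo = new ProcessStartInfo();
                startInfo.FileName = "notepad.exe";
                startInfo.Arguments = "ProcessStart.cs";

                // Process.Start hands back the Process created from the start info,
                // so you can still wait for it or inspect it afterwards.
                Process notePad = Process.Start(startInfo);
                notePad.WaitForExit();
            }
        }
    The separate ProcessStartInfo type mainly pays off when you want to prepare or reuse launch settings before any process exists.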

    Read the article

  • How to pass filename to StandardInput (Process) in C#?

    - by Cosmo
    Hello guys! I'm using the native Windows application spamc.exe (SpamAssassin - sawin32) from the command line as follows:
        C:\SpamAssassin\spamc.exe -R < C:\email.eml
    Now I'd like to call this process from C#:
        Process p = new Process();
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.RedirectStandardOutput = true;
        p.StartInfo.RedirectStandardInput = true;
        p.StartInfo.FileName = @"C:\SpamAssassin\spamc.exe";
        p.StartInfo.Arguments = @"-R";
        p.Start();
        p.StandardInput.Write(@"C:\email.eml");
        p.StandardInput.Close();
        Console.Write(p.StandardOutput.ReadToEnd());
        p.WaitForExit();
        p.Close();
    The above code just passes the filename as a string to spamc.exe (not the content of the file). However, this one works:
        Process p = new Process();
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.RedirectStandardOutput = true;
        p.StartInfo.RedirectStandardInput = true;
        p.StartInfo.FileName = @"C:\SpamAssassin\spamc.exe";
        p.StartInfo.Arguments = @"-R";
        p.Start();
        StreamReader sr = new StreamReader(@"C:\email.eml");
        string msg = sr.ReadToEnd();
        sr.Close();
        p.StandardInput.Write(msg);
        p.StandardInput.Close();
        Console.Write(p.StandardOutput.ReadToEnd());
        p.WaitForExit();
        p.Close();
    Could someone point out why it works if I read the file and pass the content to spamc, but not if I just pass the filename as I would on the Windows command line?
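
    The short answer, added here as an editorial note rather than part of the question: the < in the shell command is redirection performed by cmd.exe itself, so spamc.exe never sees a file name; it only ever reads the file's bytes from its standard input. Writing the path therefore just sends the literal string. If you want to avoid loading the whole message into a string first, one hedged alternative is to copy the file stream straight onto the child's stdin (Stream.CopyTo needs .NET 4; on older frameworks, copy in a small buffer loop):
        using System;
        using System.Diagnostics;
        using System.IO;

        class SpamcStdin
        {
            static void Main()
            {
                Process p = new Process();
                p.StartInfo.UseShellExecute = false;
                p.StartInfo.RedirectStandardOutput = true;
                p.StartInfo.RedirectStandardInput = true;
                p.StartInfo.FileName = @"C:\SpamAssassin\spamc.exe";
                p.StartInfo.Arguments = "-R";
                p.Start();

                // Mimic "< C:\email.eml" by streaming the file into stdin.
                using (FileStream mail = File.OpenRead(@"C:\email.eml"))
                {
                    mail.CopyTo(p.StandardInput.BaseStream);
                }
                p.StandardInput.Close();

                Console.Write(p.StandardOutput.ReadToEnd());
                p.WaitForExit();
                p.Close();
            }
        }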

    Read the article

  • How do I launch a process with low priority? C#

    - by acidzombie24
    I want to execute a command line tool to process data. It does not need to be blocking, and I want it to run at low priority. So I wrote the code below:
        Process app = new Process();
        app.StartInfo.FileName = @"bin\convert.exe";
        app.StartInfo.Arguments = TheArgs;
        app.PriorityClass = ProcessPriorityClass.BelowNormal;
        app.Start();
    However, I get a System.InvalidOperationException with the message "No process is associated with this object." Why? How do I properly launch this app at low priority? PS: Without the line app.PriorityClass = ProcessPriorityClass.BelowNormal; the app runs fine.
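
    A likely explanation, offered here as an editor's sketch rather than part of the question: PriorityClass acts on a process that already exists, so assigning it before Start() throws exactly this "No process is associated with this object" exception. Setting it immediately after Start() avoids the error (the child may run for a brief moment at normal priority before the change lands):
        Process app = new Process();
        app.StartInfo.FileName = @"bin\convert.exe";
        app.StartInfo.Arguments = TheArgs;   // TheArgs as in the original snippet
        app.Start();

        // Priority can only be applied once the OS process exists.
        app.PriorityClass = ProcessPriorityClass.BelowNormal;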

    Read the article

  • Task vs. process, is there really any difference?

    - by DASKAjA
    Hi there, I'm studying for my final exams in my CS major on the subjects of distributed systems and operating systems, and I need a good definition for the terms task, process and thread. So far I'm confident that a process is the representation of a running (or suspended, but initiated) program, with its own memory, program counter, registers, stack, etc. (process control block). Processes can run threads, which share memory, so that communication via shared memory is possible, in contrast to processes, which have to communicate via IPC. But what's the difference between a task and a process? I often read that they're interchangeable and that the term task isn't used anymore. Is that really true?
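
    To make the process/thread distinction above concrete, here is a small illustrative C# sketch (an editor's addition, not from the question): threads within one process can read and write the same variable directly, whereas a separate process gets its own address space and would have to use IPC (pipes, sockets, files) to see the value.
        using System;
        using System.Threading;

        class SharedMemoryDemo
        {
            // Visible to every thread in this process.
            static int shared = 0;

            static void Main()
            {
                Thread worker = new Thread(() => { shared = 42; });
                worker.Start();
                worker.Join();

                // The main thread sees the worker's write because both threads
                // share the process's memory. A child process started with
                // Process.Start could not see 'shared' at all.
                Console.WriteLine("shared = " + shared);   // prints 42
            }
        }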

    Read the article

  • How do you force a Linux process (Java webstart App) to stop locking a Filesystem (CD-ROM) WITHOUT k

    - by Blake
    In Linux (CentOS 5.4), how do you force a process to stop locking a file system without killing the process? I am trying to get my Java Webstart application, running locally, to eject a CD. I do not have this problem if I am just browsing through the files using a JFileChooser, but once I read the contents of a file, I can no longer eject the CD... even after removing ALL references to any files. Hitting the eject button gives the error (title: "Cannot Eject Volume"): "An application is preventing the volume 'volume name' from being ejected". Thus, my goal is to tell the process to stop targeting the CD-ROM in order to free it up. Thank you for any help or direction! Attempted fix: running the commands
        sudo umount -l /media/Volume_Name    # -l (lazy unmount) forces the unmount
        sudo eject
    Problem: when a new CD is inserted, it is no longer mounted automatically, probably because the process is still "targeting" it.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – Stage 4: Automated Deployment. If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, then basic and advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk, written by Phil Factor, about the need for continuous delivery and how it applies to the database – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices. You and Your Team: It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: 2008 to present, overall development costs reduced by 40%; number of programs under development increased by 140%; development costs per program down 78%; firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%). But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans; get your boss involved too and make sure he/she is bought in to the investment. Environments: Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
        What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
        What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
        What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
        What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
        What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
        Dedicated development databases,
        An Integration server used for testing continuous integration and running unit tests [NB: this is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
        QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
        Pre-production (the environment you use to test the production release process),
        Production.
    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user. Actions: Get your environments set up and ready; set access permissions appropriately; make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process: As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
        1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
        2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script.
        3. A user (generally the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
        4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
        5. If all is working, run the script on production.*
    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?” Then repeat for earlier environments (QA and so on). Rollback and Recovery: If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move into a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: immediately restore from backup; have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!); have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process. Development Practices: This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Process Power to the People that Create Engagement

    - by Michael Snow
    Organizations often speak about their engagement problems as if the problem is the people they are trying to engage - employees, partners, customers and citizens. The reality of most engagement problems is that the processes put in place to engage are impersonal, inflexible, unintuitive, and often completely ignorant of the population they are trying to serve. Life, Liberty and the Pursuit of Delight? How appropriate during this short week of the US Independence Day holiday that we're focusing on People, Process and Engagement. As we celebrate this holiday in the US and the historic independence we gained (sorry Brits!), it's interesting to think back to 1776 and the creation of that pivotal document, the Declaration of Independence. What tremendous pressure to create an engaging document and founding experience they must have felt. "On June 11, 1776, in anticipation of the impending vote for independence from Great Britain, the Continental Congress appointed five men — Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston — to write a declaration that would make clear to people everywhere why this break from Great Britain was both necessary and inevitable. The committee then appointed Jefferson to draft a statement. Jefferson produced a "fair copy" of his draft declaration, which became the basic text of his "original Rough draught." The text was first submitted to Adams, then Franklin, and finally to the other two members of the committee. Before the committee submitted the declaration to Congress on June 28, they made forty-seven emendations to the document. During the ensuing congressional debates of July 1-4, 1776, Congress adopted thirty-nine further revisions to the committee draft." (http://www.constitution.org) If anything was an attempt at engaging the hearts and minds of the 13 Colonies at the time, this document certainly succeeded in its mission. Their tools at the time were pen and ink and parchment. Although the final document would later be typeset with lead type for a printing press to distribute to the colonies, all of the original drafts were handwritten. And today's enterprise complains about using "Review and Track Changes" at times. Can you imagine the manual revision control process, or lack thereof? The collaborative process? The time delays? Would implementing a better process have helped our founding fathers collaborate better? [Image: Declaration of Independence rough draft, one of many produced during the creation process; a great comparison across multiple versions of the document is available at http://www.ushistory.org/.] While you may not be creating a new independent nation, getting your employees to engage is crucial to your success as a company in today's world. Oracle WebCenter provides the tools that power engagement. Employees that have better tools for communication, collaboration and getting their job done are more engaged employees. Better engaged employees create more engaged customers and partners.

    Read the article

  • Capgemini Global Business Process Management Report

    - by JuergenKress
    Welcome to the Capgemini Global Business Process Management (BPM) Report. This report is an exploration of key trends in BPM as seen by CXOs across a broad selection of sectors and geographies. BPM is perhaps at a tipping point - it’s certainly at an exciting stage in its evolution. As both an engineer and an Operational Research practitioner in my early career, and subsequently as a consultant, I have seen BPM through its development over the last 26 years. BPM has its roots in management practices such as Total Quality Management, Business Process Reengineering & Model Based Development; but the advent of the new generation of sophisticated modelling and process execution technologies has greatly enhanced BPM’s power to truly transform businesses. This has created one of the most rapidly growing and attractive market sectors for both services and technology. We see BPM as a critical management discipline that, when executed against clear, cross-organizational business objectives, can deliver exceptional value to that organization. However, we also see that the potential for BPM is not well understood. Our decision to conduct this global survey stemmed from discussions with our clients. We sought to gain a better impression of their understanding of BPM, how they measure its value, and how far it is prioritized within their Business and Technology Transformation efforts. This research confirms our belief that BPM needs to be a jointly owned Business and IT discipline. It also demonstrates that it is starting to gain significant traction in the market and investments are starting to pay dividends to the early adopters. At Capgemini we are being asked by our clients to help them simplify and improve their business models and the technology that supports them, and we are already seeing BPM become an integral and key part of this proposition. Business Process Management is becoming ever more relevant to both large and small organizations in the current economic climate. At a time when many different market sectors are facing slow revenue growth, customer churn and increased pressures on costs, BPM becomes a critical weapon in the battle for efficiency and effectiveness in processes. Furthermore, in a challenging and changing business environment that is characterized by uncertainty, it allows organizations to adapt, be more agile and fleet of foot. Capgemini is seeing strong demand for BPM services in markets such as the USA, the UK, the Netherlands and France; and there are clear signs of increased interest in other geographies such as Germany, Sweden, Spain, Italy and Australia. In sector terms, the financial services industry has led the way in BPM adoption over the recent past, driven by increased focus on customer-centricity and regulatory compliance. Other sectors (public sector, utilities, telco, retail and manufacturing) are now not only catching up, but are starting to use BPM in new ways to create new business models to serve customers and outsmart the competition. The research findings also show, however, that this is a complex landscape, and we are not seeing adoption of BPM in a clear and consistent way. This report also looks at some of the barriers to adoption, with organizational silos being a major obstacle. Waters are further muddied by fragmented budgets, lack of clear governance and ownership and internal politics.
    The objective of our investment in this research project was to shed some light on these elements, with a view to assisting organizations to create strategies that avoid or at least mitigate some of these barriers to success. Management of change in such endeavours is a key part in enabling the appropriate alignment of business and technology to support their transformation efforts. I hope that you find this report of benefit in the further adoption of Business Process Management. Get the full report here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • radeon display driver clones monitors while using Xinerama

    - by gregmuellegger
    I'm trying to get my two Radeon HD 4770 cards working with three monitors. Xinerama works so far in the sense that I have two fully working monitors where I can move windows from one to the other. My problem now is that my third monitor is a clone of my second monitor (displaying the exact same thing). These monitors are connected to the same graphics card ("Screen Middle" and "Screen Right" in the xorg.conf below). Here is my xorg.conf:
        Section "ServerLayout"
            Identifier "ThreeMonitors"
            Screen "Screen Left" 0 0
            Screen "Screen Middle" RightOf "Screen Left"
            Screen "Screen Right" RightOf "Screen Middle"
            Option "Xinerama"
        EndSection
        Section "Monitor"
            Identifier "Monitor Left"
            Option "DPMS"
        EndSection
        Section "Monitor"
            Identifier "Monitor Middle"
            Option "DPMS"
        EndSection
        Section "Monitor"
            Identifier "Monitor Right"
            Option "DPMS"
        EndSection
        Section "Device"
            Identifier "Device Left"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:3:0:0"
            Screen 0
        EndSection
        Section "Screen"
            Identifier "Screen Left"
            Device "Device Left"
            Monitor "Monitor Left"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
        Section "Device"
            Identifier "Device Middle"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:2:0:0"
            Screen 0
        EndSection
        Section "Screen"
            Identifier "Screen Middle"
            Device "Device Middle"
            Monitor "Monitor Middle"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
        Section "Device"
            Identifier "Device Right"
            Driver "radeon"
            VendorName "ATI Technologies Inc"
            BoardName "ATI Radeon HD 4770 [RV740]"
            BusID "PCI:2:0:1"
            Screen 1
        EndSection
        Section "Screen"
            Identifier "Screen Right"
            Device "Device Right"
            Monitor "Monitor Right"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
    I'm using a fresh Kubuntu 10.10 installation with proposed-updates enabled, since this repo contains an xorg fix for using multiple graphics cards. I hope someone can help me out. Many thanks!!

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app is running in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I myself am a quite experienced web developer, but have never actually programmed anything non-web (or at least nothing serious). I plan on adding a feature to the webapp for which I would need a jpg snapshot of each page of the PDF. Creating these "thumbnails" isn't a big deal as such; I already do it inside my webapp using ghostscript. I've only done it on one-page documents for now, though, and the new feature will need to process bigger documents. In order for this process to be transparent for the admins as well as the end users, I would like to implement some kind of queue to delay the processing of the thumbnails. Again, no problem creating the queue; it will consist of records in a table, with enough info to find the PDF document back. Then I will need to process this queue, and that's where my questions start. Obviously the best solution to process it isn't an ASP page or the like, so I will have to step outside my familiar environment. No problem, but I have no idea which direction to go. Therefore, a few questions: What should I develop? I presumably need something that is on "standby" on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project? Depending on the first answer, what will be the approach? Should I somehow have SQL Server "tell" the program/service to process the queue, or should I have that program/service periodically check the state of the queue and process new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (50 max, maybe), so I can totally afford to process the queue one item at a time. Can you confirm I don't have to look much further into threads and so on? (I found a lot of answers talking about threads in queue processing, but it looks like overkill for my needs.) Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Let's suppose it is currently running and a new record comes into the queue; what should the behaviour be? I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
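
    To give a feel for the shape of one possible answer (an editor's sketch under stated assumptions, not a definitive design): at this volume, a Windows service or scheduled task that polls the queue table on a timer is usually enough; SQL Server does not need to push anything. The table and column names below (PdfQueue, Status, Id) and the connection string are hypothetical, and the actual thumbnail generation stays where the poster already has it (ghostscript).
        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class PdfQueueWorker
        {
            const string ConnectionString = "...";   // hypothetical connection string

            static void Main()
            {
                // In a real Windows service this loop would live in OnStart / a worker thread.
                while (true)
                {
                    int? jobId = TryDequeueJob();
                    if (jobId == null)
                    {
                        Thread.Sleep(TimeSpan.FromMinutes(1));   // idle until the next poll
                        continue;
                    }
                    // ... run ghostscript for this PDF, then mark the row as done ...
                }
            }

            // Claims the oldest pending row so a second worker cannot grab the same job.
            static int? TryDequeueJob()
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(
                    @"UPDATE TOP (1) PdfQueue SET Status = 'Processing'
                      OUTPUT inserted.Id
                      WHERE Status = 'Pending'", conn))
                {
                    conn.Open();
                    object id = cmd.ExecuteScalar();
                    return id == null ? (int?)null : (int)id;
                }
            }
        }
    Because only one item is claimed at a time, concurrent runs of the worker stay safe, and a scheduled task or service timer can replace the while loop if you prefer not to keep a process resident.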

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted, to build a library of code to keep track of key metrics of your servers’ performance. There have been several submissions already, but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of TSQL and the SQL Monitor framework*. What’s it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199. How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it, and submit it via sqlmonitormetrics.red-gate.com before 14th Sept 2012, and then sit back and wait while the judges review your code and your aims in writing the metric. Who are the judges and how will they judge the metrics? There are two judges for this competition, Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I’m here by good fortune). We will be looking to rate the metrics on each of 3 criteria: how the metric can help with performance tuning SQL Server; how having the metric running enables DBAs to meet best practice; how interesting/original the idea for the metric is. Our combined decision will be final etc etc ** What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You’ll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like. How long does it take? Honestly? Once you have the T-SQL sorted, then so long as you can type your name and your email address you are done: http://sqlmonitormetrics.red-gate.com/share-a-metric/ What can I monitor? If you really, really want a Kindle or $199 (and let’s face it, who doesn’t?) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me to start the library off. There are some great pieces of TSQL in those metrics, gathering important stats about how SQL Server is performing. * – framework may not be the best word here but I was under pressure and couldn’t think of a better one. If you prefer try ‘engine’, or ‘application’? I don’t know, pick something that makes sense to you. ** – for the full (legal) version of the rules check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Is a big (as big as possible) display (monitor) always better for development?

    - by Jitendra Vyas
    Is a bigger display (monitor) always better for development? I'm going to buy a new LCD monitor. I mostly work in Adobe Photoshop, HTML, CSS, jQuery and WordPress. Budget is not a problem, and there are many options for LCD monitor size. My questions are: Is the maximum size always better, or are large monitors not always a good choice? Would it be better to buy two 21.5-inch monitors rather than one 30-inch monitor? Which monitor size would you prefer, between 21.5 inch and 30 inch, if budget is not a problem?

    Read the article

  • Keeping an Eye on Your Storage

    - by Fatherjack
    There are plenty of resources that advise you about looking for signs that your storage hardware is having problems. SQL Server Alerts for 823, 824 and 825 are covered here by Paul Randal of SQLskills: http://www.sqlskills.com/blogs/paul/a-little-known-sign-of-impending-doom-error-825/ and here by me: https://www.simple-talk.com/blogs/2011/06/27/alerts-are-good-arent-they/. Now, until very recently I wasn’t aware that there was a different way to track the 823 and 824 errors. It was by complete chance that I happened to be searching about in the msdb database when I found the suspect_pages table. Running a query against it, I got zero rows. This, as it turns out, is a good thing. Highlighting the table name and pressing F1 got me nowhere – is it just me or does Books Online fail to load properly for no obvious reason sometimes? So I typed the table name into the search bar and got my local version of http://msdn.microsoft.com/en-us/library/ms174425.aspx. From that we get the following description: Contains one row per page that failed with a minor 823 error or an 824 error. Pages are listed in this table because they are suspected of being bad, but they might actually be fine. When a suspect page is repaired, its status is updated in the event_type column. So, in the table we would, on healthy hardware, expect to see zero rows, but on disks that are having problems the event_type column would show us what is going on. Where there are suspect pages on the disk the rows would have an event_type value of 1, 2 or 3; where those suspect pages have been restored, repaired or deallocated by DBCC, the value would be 4, 5 or 7. Having this table means that we can set up SQL Monitor to check the status of our hardware, as we can create a custom metric based on the query below:
        USE [msdb]
        GO
        SELECT COUNT(*) FROM [dbo].[suspect_pages] AS sp
    All we need to do is set the metric to collect this value and set an alert to email when the value is greater than zero, and we are then able to let SQL Monitor take care of our storage. Note that the suspect_pages table does not have any updates concerning Error 825, which the links at the top of the page cover in more detail. I would suggest that you set SQL Monitor to alert on the suspect_pages table in addition to taking other measures to look after your storage hardware, and not have it as your only precaution. Microsoft actually pass ownership and administration of the suspect_pages table over to the database administrator (Manage the suspect_pages Table (SQL Server)) and in a surprising move (to me at least) advise DBAs to actively update and archive data in it. The table will only ever contain a maximum of 1000 rows and, once full, new rows will not be added. Keeping an eye on this table is pretty important, although, in my opinion, if you get to 1000 rows in this table and are not already waiting for new disks to be added to your server, you are doing something wrong; and if you do have 1000 rows in there, you need to move data out quickly, because you may be missing some important events on your server.

    Read the article

  • How to obtain the correct physical size of the monitor?

    - by KindDragon
    How can I get the size of the display in centimeters or inches? This code does not always work correctly:
        HDC hdc = CreateDC(_T("DISPLAY"), dd.DeviceName, NULL, NULL);
        int width = GetDeviceCaps(hdc, HORZSIZE);
        int height = GetDeviceCaps(hdc, VERTSIZE);
        ReleaseDC(0, hdc);
    especially for multi-monitor configurations. Update: I need to get the size just for ordinary monitors, which have a constant physical size.
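
    For the multi-monitor part, a hedged C# sketch of the same per-device idea (an editor's addition, not from the question): create a device context for each monitor's device name and ask for HORZSIZE/VERTSIZE. The numbers still come from the display driver's EDID data, so they can be wrong for some monitors, which is presumably why the original code is unreliable.
        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        class MonitorSize
        {
            const int HORZSIZE = 4;   // physical width in millimetres
            const int VERTSIZE = 6;   // physical height in millimetres

            [DllImport("gdi32.dll")]
            static extern IntPtr CreateDC(string driver, string device, string output, IntPtr initData);

            [DllImport("gdi32.dll")]
            static extern int GetDeviceCaps(IntPtr hdc, int index);

            [DllImport("gdi32.dll")]
            static extern bool DeleteDC(IntPtr hdc);

            static void Main()
            {
                foreach (Screen screen in Screen.AllScreens)
                {
                    IntPtr hdc = CreateDC("DISPLAY", screen.DeviceName, null, IntPtr.Zero);
                    int widthMm = GetDeviceCaps(hdc, HORZSIZE);
                    int heightMm = GetDeviceCaps(hdc, VERTSIZE);
                    DeleteDC(hdc);   // a DC obtained from CreateDC is freed with DeleteDC

                    Console.WriteLine(screen.DeviceName + ": " + widthMm + " x " + heightMm + " mm");
                }
            }
        }
    If the driver reports bogus values here, the fallback is reading the monitor's EDID block yourself (for example via the SetupAPI/registry route), which is considerably more work.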

    Read the article

  • Upstart restart process time

    - by user567938
    A question about Upstart. I understand the restart command does a stop and then a start of the process. What happens if the process does not stop right away, and only checks for the stop signal after a long time (20 minutes or 2 hours)? Will it still restart after such a long time? Example my-process:
        until stop_signal_received
            work for 2h
        end
    So the script only checks for stop signals every 2 hours.
        $ sudo restart my-process
    Will the process restart after 2 hours or not?

    Read the article

  • PyGTK/GIO: monitor directory for changes recursively

    - by detly
    Take the following demo code (from the GIO answer to this question), which uses a GIO FileMonitor to monitor a directory for changes:
        import gio

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type

        gfile = gio.File(".")
        monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
        monitor.connect("changed", directory_changed)

        import glib
        ml = glib.MainLoop()
        ml.run()
    After running this code, I can then create and modify child nodes and be notified of the changes. However, this only works for immediate children (I am aware that the docs don't say otherwise). The last of the following shell commands will not result in a notification:
        touch one
        mkdir two
        touch two/three
    Is there an easy way to make it recursive? I'd rather not manually code something that looks for directory creation and adds a monitor, removing them on deletion, etc. The intended use is for a VCS file browser extension, to be able to cache the statuses of files in a working copy and update them individually on changes. So there might be anywhere from tens to thousands (or more) directories to monitor. I'd like to just find the root of the working copy and add the file monitor there. I know about pyinotify, but I'm avoiding it so that this works under non-Linux kernels such as FreeBSD or... others. As far as I'm aware, the GIO FileMonitor uses inotify underneath where available, and I can understand not emphasising the implementation to maintain some degree of abstraction, but it suggested to me that it should be possible. (In case it matters, I originally posted this on the PyGTK mailing list.)

    Read the article
