Search Results

Search found 41038 results on 1642 pages for 'personal software process'.

Page 379/1642 | < Previous Page | 375 376 377 378 379 380 381 382 383 384 385 386  | Next Page >

  • WebDAV "PROPFIND" exception in IIS due to network share?

    - by jacko
    Hi all, We're seeing continuous exceptions in the event viewer on our live box, all of the following form: [snippet] Process information: Process ID: 3916 Process name: w3wp.exe Account name: NT AUTHORITY\NETWORK SERVICE Exception information: Exception type: HttpException Exception message: Path 'PROPFIND' is forbidden. Thread information: Thread ID: 14 Thread account name: OURDOMAIN\Account Is impersonating: True Stack trace: at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Other specs: Windows Server 2003 R2 & IIS 6.0. We've narrowed it down to occurring when people try to access shares on the box from within the network, and have discovered (we think) that it's due to the WebDAV web service extension having been disabled by past staff. The exceptions are thrown when trying to access directories that are virtual dirs in IIS as well as plain old UNC network shares. What are the implications of enabling the WebDAV extension on our live web server? And will this solve our problems with the exceptions in our event log?
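    If enabling WebDAV turns out to be the answer, a minimal sketch on IIS 6 is to allow the extension either in IIS Manager (Web Service Extensions > WebDAV > Allow) or from the command line; "WEBDAV" is the stock extension ID, but verify it with /ListExt first:

        cscript %SystemRoot%\system32\iisext.vbs /ListExt
        cscript %SystemRoot%\system32\iisext.vbs /EnExt "WEBDAV"

    Once the extension is allowed, IIS should serve PROPFIND requests itself instead of handing them to ASP.NET's HttpMethodNotAllowedHandler, which is what raises the exception above.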

    Read the article

  • How to Run Pam Face Authentication

    - by Supriyo Banerjee
    I am using Ubuntu 11.10. I went to the following URL to download the software 'Pam Face Authentication': http://ppa.launchpad.net/antonio.chiurazzi/ppa/ubuntu/pool/main/p/pam-face-authentication/ and downloaded the version for Natty Narwhal. I installed the software using the following commands:

        sudo apt-get install build-essential cmake qt4-qmake libx11-dev libcv-dev libcvaux-dev libhighgui2.1 libhighgui-dev libqt4-dev libpam0g-dev checkinstall
        cd /tmp && wget http://pam-face-authentication.googlecode.com/files/pam-face-authentication-0.3.tar.gz
        sudo add-apt-repository ppa:antonio.chiurazzi
        sudo apt-get update
        sudo apt-get install pam-face-authentication
        cat << EOF | sudo tee /usr/share/pam-configs/face_authentication /dev/null
        Name: face_authentication profile
        Default: yes
        Priority: 900
        Auth-Type: Primary
        Auth: [success=end default=ignore] pam_face_authentication.so enableX
        EOF
        sudo pam-auth-update --package face_authentication

    The software installed and I can run the qt-facetrainer. But the problem is that when I restarted my system, the default login screen appeared asking for my password. The webcam is not starting at all, and I cannot log in with my face. Which means, I think, that the pam-face-authentication programme is not starting at all. Please let me know how I can log in with my face using the pam-face-authentication programme.
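    A quick sanity check, assuming pam-auth-update applied the profile: the module should appear in the stacked auth configuration that the login screen actually uses.

        grep pam_face_authentication /etc/pam.d/common-auth
        sudo pam-auth-update    # if nothing is printed, re-enable the face_authentication profile here

    Note also that the module (the enableX option hints at this) needs access to the X display at login; if the 11.10 greeter (LightDM) invokes PAM without that access, the webcam window will not appear even though the profile is enabled.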

    Read the article

  • Jenkins swarm-plugin jar file, won't run in background

    - by JeanMertz
    We're working on an automation script for our Jenkins slaves on a local Unix server. To connect the slaves to the Jenkins master, we use the swarm plugin. Setting up the master was easy, and connecting clients is also easy with a single command. However, I can't get the slave command (a Java application) to run in the background without stalling the current process. I've created an init.d file and added it to update-rc.d, but that doesn't work:

        #!/bin/bash
        /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar -executors 4

    I've also tried to run it with an ampersand (&) to start the process in the background, but that doesn't work either because - from looking at the source - the jar file actually boots another process that is then started in the foreground. Any ideas on how to make this jar file start without stopping the bootstrap script?
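    A minimal sketch of an init script that detaches the JVM, reusing the asker's paths and flags; nohup plus output redirection keeps the client from tying up the boot sequence:

        #!/bin/sh
        # /etc/init.d/swarm-client -- sketch, not a full LSB init script
        case "$1" in
          start)
            nohup /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar \
                -executors 4 >> /var/log/swarm-client.log 2>&1 &
            echo $! > /var/run/swarm-client.pid
            ;;
          stop)
            kill "$(cat /var/run/swarm-client.pid)"
            ;;
        esac

    If the client really re-executes itself in the foreground, running it under setsid (or a supervisor such as start-stop-daemon) detaches the whole session rather than the single PID, so the spawned child no longer holds the script open.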

    Read the article

  • What things should I run daily, weekly, monthly on my Windows machine?

    - by Jitendra vyas
    What things should I run daily, weekly, and monthly on my Windows XP machine? Defragmentation, disk scan (chkdsk), full system scan, disk cleanup, Trojan checker, malware remover, or anything else... And should I always run all these things in Safe Mode? I want to perform all the mentioned things automatically at night. How do I set a schedule, and what would be the best plan for daily, weekly, and monthly? I have a 300 GB SATA HDD with 6 partitions. From the list above I would like to know: Which processes should I run daily? Which processes should I run weekly? Which processes should I run monthly? And how do I set a schedule for all of them in the Windows Task Scheduler?
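    On XP the schtasks command can create all of these from one batch file; a hedged sketch (the task names are made up, and cleanmgr needs a one-time "cleanmgr /sageset:1" first to record which cleanup options the saved profile runs):

        schtasks /Create /TN "NightlyCleanup" /TR "cleanmgr /sagerun:1" /SC DAILY /ST 02:00
        schtasks /Create /TN "WeeklyDefrag" /TR "defrag C:" /SC WEEKLY /D SUN /ST 03:00
        schtasks /Create /TN "MonthlyChkdsk" /TR "chkdsk C:" /SC MONTHLY /D 1 /ST 04:00

    Note that chkdsk cannot check the system drive while Windows is using it; it will offer to schedule itself for the next reboot instead.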

    Read the article

  • What causes random clicking/navigation sound while PC is idle?

    - by Jim Linny
    On my Dell D630 laptop running Windows XP Pro SP3, I randomly hear the "Windows Navigation Start.wav" sound being played. This happens even if the only application active on the PC at the time is Microsoft Outlook and no one is touching the keyboard or touchpad. The start navigation sound IS enabled for Internet Explorer (I have IE 7 installed), but it is not running (no window open, anyway). The clicks are rapid, as if being executed automatically by a script or process, and they always come in threes: "click-click, click". I don't see any changes on the display, and I don't know how to figure out what's running that could cause this. I have tried leaving Windows Task Manager open in the hope that I'll see a process name jump to the top of the list briefly, but it doesn't happen that often and I guess I'm not quick enough. The PC is slower than I think it should be, but System Idle Process is at 96-99 during the event, and for most of the time when I'm not actually using the machine. Any ideas?
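    One low-tech way to catch the culprit, sketched below on the assumption that the offending process lives long enough to show up in a snapshot: log the process list in a loop and diff the entries around the time of a "click" (the ping trick is the stock XP substitute for a sleep command):

        @echo off
        :loop
        echo %time% >> %temp%\proclog.txt
        tasklist >> %temp%\proclog.txt
        ping -n 2 127.0.0.1 > nul
        goto loop

    Process Monitor from Sysinternals is the heavier alternative; filtering on process-start events or on reads of the .wav file will name the process directly.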

    Read the article

  • Updating Banshee to 2.4

    - by Lucasguy11
    I have Banshee 2.2.1 with Ubuntu 11.10. I have been trying to update Banshee to 2.4 (released yesterday) but it just isn't working. I have been using sudo add-apt-repository ppa:banshee-team/ppa in the terminal, from the banshee.fm website, but after running it the terminal says this:

        sudo add-apt-repository ppa:banshee-team/ppa
        You are about to add the following PPA to your system:
        PPA for Banshee Team
        This PPA contains the latest stable debs of Banshee for Ubuntu. To install Banshee, you must first enable the PPA on your system:
        1. Open Software Sources (System->Administration->Software Sources)
        2. Navigate to the "Third Party Sources" tab.
        3. Click "Add"
        4. Enter the APT line below that corresponds to your Ubuntu version that starts with "deb".
        5. Click "Add Source"
        6. Click "Close"
        7. It will prompt you to reload your software cache. Click "Reload".
        8. Now install the package "banshee" from Synaptic, or using the command below: sudo apt-get install banshee
        For those who wish to compile from trunk, add the deb-src line and then run "sudo apt-get build-dep" to install all required dependencies before starting to compile. Unstable (version which have odd minor version numbers) debs of Banshee can be found here: https://launchpad.net/~banshee-team/+archive/banshee-unstable
        More info: https://launchpad.net/~banshee-team/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.OPAjxemDQr --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 9D2C2E0A3C88DD807EC787D74874D3686E80C6B7
        gpg: requesting key 6E80C6B7 from hkp server keyserver.ubuntu.com
        gpg: key 6E80C6B7: "Launchpad PPA for Banshee Team" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1

    I believe I have the PPA, but I'm not sure. I need a step-by-step process to get this; I've been trying to figure it out for quite a while now...
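    For what it's worth, the gpg output above only confirms that the PPA's signing key was (already) imported; the package cache still has to be refreshed before the newer Banshee becomes installable. A minimal sketch of the remaining steps:

        sudo apt-get update
        sudo apt-get install banshee     # pulls 2.4 from the PPA; "sudo apt-get upgrade" also works
        banshee --version                # verify the new version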

    Read the article

  • Coders For Charities

    - by Robz / Fervent Coder
    Last weekend I had the opportunity to give back to the community doing what I love. As geeks we don't usually have this opportunity. The event is called Coders 4 Charities (C4C), and it's a grueling weekend of nearly 30 hours of coding. When you finish, you get to present to the charity and all of the other groups what you have completed. From the site: Coders For Charities is a 3-day charity event that pairs charities and local software developers. Charities often do not have the funds to implement a new website or intranet or database solution. Software developers often do not volunteer for charities because their skills do not apply. This event is the perfect marriage of these two needs; software developers volunteering their time to help charities better serve their community through the latest technology! The actual event brought together multiple charities and about 50 developers, designers, business analysts, etc., each group working with a different charity to come up with a solution that they could implement in less than 3 days. C4C provided a place and food for us so that we wouldn't have to leave much during the time we had to implement our solution. They also provided games like Rock Band so we could get away and clear our minds for a few moments if necessary. I don't think we made it down there to play, but the food and drinks were a huge help for us. The charity we picked was Harvest Home. They had a need for an online intranet site where they could track membership and gardening. Over the next few days we worked on a site we could give them. Below is a screenshot with private data marked out. It was an awesome and humbling experience to be able to give back to a charity, and I'm happy I was a part of it. I would definitely do it again. How often do we get to use our abilities to volunteer our time to a charity?

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-27

    - by Bob Rhubart
    Deploying OAM "correctly" | Chris Johnson fusionsecurity.blogspot.com Chris Johnson's concise blog post will help you to deploy Oracle Access Manager "for real." Oracle BPM: Suspend and alter process | Martijn van der Kamp www.nl.capgemini.com "There's one tricky part with intervening in the run time behavior of a process, and that is compliance," says Martijn van der Kamp. "Make sure your solution covers the compliance regulations by the regulatory department, including the option of intervening in the process." Red Samurai Tool Announcement - MDS Cleaner V2.0 | Andrejus Baranovskis andrejusb.blogspot.com Oracle ACE Director Andrejus Baranovskis shares news about an upcoming free product for MDS administrators. Oracle bulk insert or select from Java with Eclipselink | Edwin Biemond biemond.blogspot.com Oracle ACE Edwin Biemond shows you how to retrieve all the departments from the HR demo schema, add a new department, and do a multi insert. WebLogic Server Weekly for March 26th, 2012 | Steve Button blogs.oracle.com Steve Button shares information on: WLS 1211 Update, Java 7 Certification, Galleria, WebLogic for DBAs, REST and Enterprise Architecture, Singleton Services. Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH www.neooug.org May 14-15 - Cleveland, OH. More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save. Thought for the Day "With good program architecture debugging is a breeze, because bugs will be where they should be." — David May

    Read the article

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year. Microsoft literally buys computers by the truckload. From what I understand, it's a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance behind their real work. From the academic programmer's perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it. A copy of the email received follows. (Note that I almost deleted this email, thinking it was spam due to its length.) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • Links to detailed instructions on building a DIY NAS

    - by Kaushik Gopal
    I'm looking for good links with detailed instructions on how to build a DIY NAS (Network Attached Storage). I'm planning on doing it cheap (old PC config + open source software). I did a fair bit of searching and found these links (so please suggest others): Ubuntu Setting up a Home NAS, DIY NAS Smackdown, How to Configure an $80 File Server in 45 Minutes, FreeNAS, Build a NAS Device With an Old PC and Free Software, Build Your Own NAS Device. While these links are great, they dwell more on the hardware side. I'm looking for more instructions on the software side.
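    For a sense of how small the software side can be, here is a minimal sketch of sharing a disk over SMB on a stock Ubuntu install; the share name and mount point are assumptions, and dedicated distributions like FreeNAS wrap this same idea in RAID, monitoring, and a web UI:

        sudo apt-get install samba
        # then append a share stanza to /etc/samba/smb.conf:
        #   [nas]
        #      path = /srv/nas
        #      read only = no
        #      guest ok = yes
        sudo /etc/init.d/samba restart   # the service name varies by release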

    Read the article

  • How to learn & introduce scrum in small startup?

    - by Jens Bannmann
    In a few months, a friend will establish his startup software company, and I will be the software architect with one additional developer. Though we have no real day-to-day experience with agile methods, I have read much "overview"-type material on them, and I firmly believe they are a good - if not the only - way to build software. So with this company, I want to go for iterative, agile development from day 1, preferably something lightweight. I was thinking of Scrum, but the question is: what is the best way for me and my colleagues to learn about it, to introduce it (which techniques, when, etc.) and to evaluate whether we should keep it? Background which might be relevant: we're all experienced developers around the same age with a similar professional mindset. We have worked together in the past and afterwards at several different companies, mostly with a Java/.NET focus. Some are a bit familiar with general ideas from the agile movement. In this startup, I have great power over tools, methods and process. The startup's product will be developed from scratch and could be classified as middleware. We have some "customer" contacts in the industry who could provide input as soon as we get to an alpha stage.

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps. So I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic. High Level Steps:
    Install and configure Visual Studio 2012 Test Controller on Target Server
    Create Standard Environment
    Create Test Plan with Test Case
    Run Test Case
    Create Coded UI Test from Test Case
    Associate Coded UI Test with Test Case
    Create Build Definition using LabDefaultTemplate
    1. Install and Configure Visual Studio 2012 Test Controller on Target Server First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later. Be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio. 2. Create Standard Environment Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment. Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com), and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. 
You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine. On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available on the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings, then press Finish. This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted. 3-5. Create Test Plan, Run Test Case, Create Coded UI Test I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains among other things a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests. 6. Associate Coded UI Test with Test Case OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI Test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the coded UI test that corresponds to the test case. Press OK and then save the test case. For more information about associating an automated test case with a test case, see How to: Associate an Automated Test with a Test Case. 7. Create Build Definition using LabDefaultTemplate Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to deploy your solution remotely. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual. Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build. But TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. 
Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option. On the process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property. First, select the environment that you created before, then select the build that you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it basically just runs through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition. On the deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step. The workflow defines some variables that are useful when running the deployments. These variables are:
$(BuildLocation) - the full path to where your build files are located
$(InternalComputerName_<VM Name>) - the computer name for a virtual machine in a SCVMM environment
$(ComputerName_<VM Name>) - the fully qualified domain name of the virtual machine
As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: The test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts. On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. In here I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests. We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should show the deployment and test runs along with their results.
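As a concrete illustration of the deployment-script step, here is a hedged sketch of a one-step deployment batch file; the package folder and file names are hypothetical stand-ins for whatever your build actually drops:

    @echo off
    rem myapplication.deploy.cmd -- runs on the target server from the build drop folder
    rem %~dp0 resolves to the folder this script lives in, i.e. $(BuildLocation)
    call "%~dp0_PublishedWebsites\MyApplication_Package\MyApplication.deploy.cmd" /Y
    rem add further steps here, e.g. sqlpackage.exe for databases or msiexec for MSIs

Keeping everything in one file like this means the same script can be tested by hand on the server before wiring it into the Lab Process Settings.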

    Read the article

  • Oracle Linux Partner Pavilion Spotlight

    - by Ted Davis
    With the first day of Oracle OpenWorld starting in less than a week, we wanted to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033) this year. We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. We'll be highlighting partners all week, so feel free to come back and check us out. Centrify delivers integrated software and cloud-based solutions that centrally control, secure and audit access to cross-platform systems, mobile devices and applications by leveraging the infrastructure organizations already own. From the data center and into the cloud, more than 4,500 organizations, including 40 percent of the Fortune 50 and more than 60 Federal agencies, rely on Centrify's identity consolidation and privilege management solutions to reduce IT expenses, strengthen security and meet compliance requirements. Visit Centrify at Oracle OpenWorld 2012 for a look at Centrify Suite and see how you can streamline security management on Oracle Linux. Unify identities across the enterprise and remove the pain and security issues associated with managing local user accounts by leveraging Active Directory. Implement a least-privilege security model with flexible, role-based controls that protect privileged operations while still granting users the privileges they need to perform their job. Get a central, global view of audited user sessions across your Oracle Linux environment. "Data Intensity's cloud infrastructure leverages Oracle VM and Oracle Linux to provide highly available enterprise application management solutions. Engineers will be available to answer questions about and demonstrate the technology, including management tools, configuration do's and don'ts, high availability, live migration, integrating the technology with Oracle software, and how the integrated support process works." Mellanox's end-to-end InfiniBand and Ethernet server and storage interconnect solutions deliver the highest performance, efficiency and scalability for enterprise, high-performance cloud and web 2.0 applications. Mellanox's interconnect solutions accelerate Oracle RAC query throughput performance to reach 50Gb/s, compared to TCP/IP-based competing solutions that cap off at less than 12Gb/s. Mellanox solutions help Oracle's Exadata deliver a 10X performance boost at 50% hardware cost, making it the world's leading database appliance. Thanks for reviewing today's Partner spotlight. We will highlight new partners each day this week leading up to Oracle OpenWorld.

    Read the article

  • Difficulty in running Tomcat v7.0 with Eclipse Juno

    - by user1673718
    I get the following error when I run my JSP file in Eclipse Juno with Tomcat v7: 'starting Tomcat v7.0 server at localhost' has encountered a problem. Port 8080 required by Tomcat v7.0 server at localhost is already in use. The server may already be running in another process, or a system process may be using the port. To start this server you will need to stop the other process or change the port number(s). I have Oracle 10g installed on my system. When I type "http://localhost:8080" it opens the Oracle 10g license agreement, so I think Oracle 10g is already running on that port. To change the port of Tomcat I tried Google, which said to change the port in the "C:\Program Files\Apache Software Foundation\Apache Tomcat 7.0.14\conf\httpd.conf" file. But at "C:\Program Files\Apache Software Foundation\Apache Tomcat 7.0.14\conf" there was no httpd.conf file. I only have "catalina.policy, catalina.properties, context, logging.properties, server, tomcat-users, web" files in that conf folder. I use Windows XP.
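    Two hedged pointers, since the asker's setup may differ: the listener that owns port 8080 can be confirmed with "netstat -ano | findstr :8080" (match the PID in Task Manager), and Tomcat's port lives in conf/server.xml (the "server" file in the listing above, shown without its .xml extension), not in httpd.conf, which belongs to the Apache HTTP Server:

        <!-- conf/server.xml: change the HTTP connector port, e.g. 8080 -> 8081 -->
        <Connector port="8081" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />

    When Tomcat runs under Eclipse, note that Eclipse keeps its own copy of this configuration; change the port in the Servers view (or in the server.xml inside the Servers project) and then restart the server.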

    Read the article

  • Intellectual Property for in house development

    - by Kyle Rogers
    My company is a subcontractor on a major government contract. Over the past 5 years we've been developing in-house applications to help support our company and streamline our work. Apparently, in 2008 our then president of the company signed a continuation-of-services contract with the company we subcontract with on this project. In the contract amendment, various things were discussed, such as intellectual property and the creation of new and existing tools. The contract states that all the subcontractor's tools/scripts/etc... become the intellectual property of the main contract holder. Basically, all tools that were created in support of the project which we work on are no longer exclusively ours, and they have rights to them. My company really doesn't do software development specifically, but because of this contract these tools helped tremendously with our daily tasking. Does my company have any sort of recourse or actions to help keep our tools? My team of developers was completely unaware of any of these negotiations and until recently was kept in the dark about the agreements that were made. Do we as developers have any rights to the software, given that our company is not a software development shop and we created all these tools without any sort of agreements or contracts within the company stating that we give our company full rights to our creations? I was reading an article by Joel Spolsky on this topic and was just wondering if there is any advice out there to assist us. Thank you. Joel Spolsky's Article

    Read the article

  • How can I read data from a generated pointcloud, but from another perspective of the camera? [migrated]

    - by Vlad Lata
    Basically what I'm trying to do is as follows: I have a piece of software that generates and shows a pointcloud by analyzing Time-of-Flight data (Z-data). This software has a GUI that delivers this pointcloud on a grid, and you can watch it and adjust the camera to change the perspective, or apply filtering to it, and so on. Since the Z-data was recorded through a stereoscopic system, I want to obtain a perspective transformation. My idea was to simply change the position of the camera in the GUI and then add a button (e.g. "New Perspective") that calls a function that would measure the distances from the existing pointcloud to the camera I'm viewing it from. Of course this would generate some occluded areas, but I want this to happen. And now the main question is: how can I do that? Are there any functions in OpenGL that measure the distance from an object to a camera, or is it even possible to do something like this? Or does someone have another idea? P.S. The software uses the Qt SDK and OpenGL.
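    There is no single OpenGL call that returns object-to-camera distances, but the depth buffer effectively stores one per pixel. A hedged sketch of recovering the scene point (and hence the distance) under a pixel (x, y), assuming a current GL context and the fixed-function matrices the question's era suggests:

        GLint viewport[4];
        GLdouble model[16], proj[16], wx, wy, wz;
        GLfloat depth;
        glGetIntegerv(GL_VIEWPORT, viewport);
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        /* window y is flipped relative to screen coordinates */
        glReadPixels(x, viewport[3] - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
        gluUnProject(x, viewport[3] - y, depth, model, proj, viewport, &wx, &wy, &wz);
        /* (wx, wy, wz) is the visible point under the pixel; its distance to the
           camera position is the per-pixel range to store for the new viewpoint */

    Because only the nearest surface survives the depth test, reading the whole buffer this way gives exactly the occlusion behavior the asker wants for the new perspective.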

    Read the article

  • How to prepare and secure a Macbook Pro for work/office?

    - by sunpech
    I plan to use my MacBook Pro at the office. Before I do so, I will need to speak to my manager about how to properly prepare and secure it, since this is the first Mac that will be regularly used on the network in the office and on the company intranet. The intranet consists mostly of PCs running Microsoft Windows XP, Server 2003, and Windows 7. So there's definitely a Microsoft-only culture in the office, and the infrastructure/networking team is mostly unfamiliar with non-Microsoft technology and software. What steps and software would I need to prepare and secure my MacBook Pro for the office? Is antivirus/spyware software for the Mac required/necessary? What options do I have to encrypt files, or possibly the whole drive/partition? What network/firewall settings should be enabled?
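    As a starting point for such a checklist, a couple of built-in controls can be driven from the terminal; a minimal sketch, assuming a reasonably recent OS X release (verify each against your version before relying on it):

        sudo defaults write /Library/Preferences/com.apple.alf globalstate -int 1   # turn on the application firewall
        sudo systemsetup -setremotelogin off                                        # disable SSH logins if unused

    For encryption, FileVault covers the home directory (whole-disk FileVault 2 arrived with OS X 10.7), and encrypted disk images created with Disk Utility work for individual sets of files.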

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative [closed]

    - by Gabriel L. Oliveira
    I have a global menu (including application, administration and system tabs) that is taking too much time (for me) to load (about 2.5 seconds). Of course, this time is taken only during the first start. After it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu was taking more time before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all). I have a "normal" knowledge of programming, and I think that the process of starting the menu for the first time has some kind of "cache function" that tries to find which apps are present and need to be placed in the menu to be shown to the user. But I haven't found this function, so I can't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting "Alt-F2" also fires this "cache function", because after waiting for it to load, the process of opening the menu took less than 0.2 milliseconds. So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the icons of applications, but found here that most of my icons are already at 25x25 size. Any other ideas? Maybe a multiprocess to load it, or include it under startup... don't know. PS: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think that this process should be smoother than it is now. Also, thanks in advance!
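    The menu is assembled from .desktop files, so the size of that first scan is easy to gauge; a small sketch (Wine's leftovers normally live in the second directory, which would explain why removing them helped):

        ls /usr/share/applications ~/.local/share/applications | wc -l
        ls ~/.local/share/applications/wine 2> /dev/null | wc -l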

    Read the article

  • Great debugging skills, weak problem solving

    - by Mahmoud
    For the 5 years I worked for various companies, I worked on large software like computer vision kits, embedded systems, and games. I found myself very good at debugging; I've even found and fixed bugs in frameworks. The problem is that I'm very weak at problem solving. I got an interview with Qualcomm, and they said: you're fine at software, but you have limited problem solving. I also had the same results with Google. I'm very bad at solving puzzles and brain teasers. During the interviews I solved all of the software-related problems on the blackboard, but when I went to the GM and faced math problems and probabilities, I struggled. How can I improve my problem-solving skills? Edit Some of the problems: A cake that is cut from anywhere and needs just one cut to be halved equally. I told him to cut it horizontally; he said no, consider it as a 2D problem! Consider 3 concentric circles; each one can get a color, but not matched with the other circles. How many blobs can you make out of those circles? (This was with the GM - Augmented Reality SDK.) Consider a train, an infinite one, and you looked out the window, and there are two cars, one big and one small. What is the probability of having only a big car? I said 50%. He said: what if those two cars' lengths are unknown, and you want the probability of getting the biggest one? I struggled, didn't solve it... I was really exhausted after a long day of interviews. The probability of having a number divisible by 5 in the numbers from 1 to 100... struggled!! All the coding questions I solved, like reversing a string or detecting a cycle in a linked list, etc.
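    For the record, the last puzzle is a one-step count: every fifth number is divisible by 5, so there are 100 / 5 = 20 such numbers between 1 and 100, giving a probability of 20/100 = 1/5. Interview puzzles like this usually reward spotting the counting shortcut rather than recalling any formula.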

    Read the article

  • ArchBeat Link-o-Rama for 11/14/2011

    - by Bob Rhubart
    InfoQ: Developer-Driven Threat Modeling Threat modeling is critical for assessing and mitigating the security risks in software systems. In this IEEE article, author Danny Dhillon discusses a developer-driven threat modeling approach to identify threats using dataflow diagrams. Managing the Virtual World | Philip J. Gill "The killer app for virtualization has been server consolidation," says Al Gillen, program vice president for systems software at market research firm International Data Corporation (IDC). Solaris X86 AESNI OpenSSL Engine | Dan Anderson "Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on a AESNI-capable processor," says Anderson. WebLogic Access Management | René van Wijk "This post is a continuation of the post WebLogic Identity Management. In this post we will present the steps involved to integrate WebLogic and Oracle Access Manager," says Oracle ACE René van Wijk. OTN Developer Days in the Nordics - Helsinki, Oslo, Stockholm, and Copenhagen OTN Developer Days head for the land of the midnight sun. Podcast: Information Integration Part 2/3 In part two of a three-part program, Oracle Information Integration, Migration, and Consolidation authors Jason Williamson, Tom Laszewsk, and Marc Hebert offer examples of some of the most daunting information integration challenges. Measuring the Human Task activity in Oracle BPM | Leon Smiers Leon Smiers discusses using Oracle BPM to get answers to important questions about what's happening with business processes. Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14 Spend the day with your peers learning from experts in Cloud computing, engineered systems, and Oracle Fusion Middleware. The Heroes of Java: Michael Hüttermann | Markus Eisele Oracle ACE Director Markus Eisele interviews Java Champion Michael Hüttermann on his role, his process, and on why he uses Java.

    Read the article

  • Deploy binary hex registry via GPO or PowerShell

    - by Prashanth Sundaram
    I am trying to deploy a custom registry entry which I exported from a test machine. It looks like below. I came across THIS similar request on another site, but I couldn't make it work.

        "TextFontSimple"=hex:3c,00,00,00,1f,00,00,f8,00,00,00,40,dc,00,00,00,00,00,00,\
        00,00,00,00,ff,00,31,43,6f,75,72,69,65,72,20,4e,65,77,00,00,00,00,00,00,00,\
        00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

    As per the other solution, my PS command below throws the error "A parameter cannot be found that matches parameter name":

        Set-ItemProperty -Path "HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\MailSettings" -Name "TextFontSimple" -PropertyType Binary -Value ([byte[]] (0x3c,0x00,0x00,0x00,0x1f....0x00))

    Any ideas? ====EDIT===== The key & value already exist. When I use Get-ItemProperty: PSPath : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\MailSettings PSParentPath : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common PSChildName : MailSettings PSProvider : Microsoft.PowerShell.Core\Registry TextFontSimple : {60, 0, 0, 0...}
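    A hedged fix, resting on two observations that match the error text: Set-ItemProperty needs a provider path (HKCU:\ rather than HKEY_CURRENT_USER\), and its registry-drive parameter is -Type (the -PropertyType name belongs to New-ItemProperty). The byte list below is transcribed from the .reg export at the top:

        $bytes = [byte[]](0x3c,0x00,0x00,0x00,0x1f,0x00,0x00,0xf8,0x00,0x00,0x00,0x40,
                          0xdc,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xff,
                          0x00,0x31,0x43,0x6f,0x75,0x72,0x69,0x65,0x72,0x20,0x4e,0x65,
                          0x77,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
                          0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00)
        Set-ItemProperty -Path "HKCU:\Software\Microsoft\Office\14.0\Common\MailSettings" `
            -Name "TextFontSimple" -Type Binary -Value $bytes

    For GPO deployment, the same value can be pushed with Group Policy Preferences (Registry), pasting the hex data from the export.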

    Read the article

  • As a solo programmer, of what use can Gerrit be?

    - by s.d
    Disclaimer: I'm aware of the questions How do I review my own code? and What advantages do continuous integration tools offer on a solo project?. I do feel that this question aims at a different set of answers, though, as it is about a specific piece of software and not general musings about the use of this or that practice or tech stack. I'm a solo programmer on an Eclipse RCP-based project. To be able to control myself and the quality of my software, I'm in the process of setting up a CSI stack. In this, I follow the Eclipse Common Build Infrastructure guidelines, with the exception that I will use Jenkins rather than Hudson. Sticking to these guidelines will hopefully help me get used to the Eclipse way of doing things, and I hope to be able to contribute some code to Eclipse in the future. However, CBI includes Gerrit, a code review software. While I think it's really helpful for teams, and I will employ it as soon as the team grows to at least two developers, I'm wondering how I could use it while I'm still solo. Question: Is there a use case for Gerrit that takes into account solo developers? Notes: I can imagine reviewing code myself after a certain amount of time to gain a little distance from it. This does complicate the workflow though, as code will only be built once it has passed through code review. This might prove to be a "trap", as I might be tempted to quickly push bad code through Gerrit just to get it built. I'm very interested in hearing how other solo devs have made use of Gerrit.
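    For reference, the mechanics of a self-review loop in Gerrit come down to two push targets; a minimal sketch, assuming the default refs and a branch named master:

        git push origin HEAD:refs/for/master    # upload the change for review; revisit it after a few days
        git push origin HEAD:refs/heads/master  # direct push that bypasses review (needs push permission)

    Granting yourself both lets risky changes go through review-and-self-approve while trivial ones go straight in, which blunts the "push bad code just to get it built" trap described above.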

    Read the article

  • Reduce the I/O priority of Windows Backup (Windows Server 2008 R2)

    - by HelloSam
    I have a PostgreSQL instance running on a Windows Server 2008 R2 x64 box. I have scheduled a backup every day from the RAID 1 DB disk to a dedicated standalone disk. The disks are 15k SAS on a Dell PERC 6i. I am using the built-in Windows Server Backup for this purpose. The problem is, whenever the backup process kicks in, the database performance is hogged - I would say almost a 10x performance reduction. From the resource monitor, the disk queue is in the double-digit range when backing up, and less than 1 during the day. The disk activity is about 30-50 MB/s during backup, so I guess the hardware is acting normally, though wbengine.exe accounts for most of it. I think reducing the I/O priority of the backup process would be an answer, but I couldn't find a way to do it. Tuning the process CPU priority does not seem to help.
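    As far as I know, Windows Server Backup exposes no I/O priority knob for wbengine.exe, so the practical lever is the schedule: move the run into the quietest window. A hedged sketch (drive letters and time are placeholders; -addtarget also accepts a disk ID or network share on 2008 R2):

        wbadmin enable backup -addtarget:E: -include:D: -schedule:03:00 -quiet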

    Read the article

  • Amanda Todd – What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to the YouTube video she posted telling her story:   The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad of two girls, one that's 10, I'm very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me – unmonitored internet access at an early age. My daughter (then 9) came home from a friend's place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells, and we ensured our daughter realized the dangers and that she was not to ever post videos of herself. In looking at Amanda's story, the access to unmonitored internet time, along with just being a young girl and being flattered by an online predator, were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and "friends" was horrible as well; I'm not downplaying that. But our youth don't fully understand yet that what they do on the internet today will follow them potentially forever. And the people they meet online aren't necessarily who they claim to be. So what can we as parents learn from Amanda's story? Parents Shouldn't Feel Bad About Being Internet Police Our job as parents is in part to protect our kids and keep them safe, even if they don't like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room where the kids can watch videos and surf the web. It's in plain view of everyone, so you can't hide what you're looking at. If our daughter goes to a friend's place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions. Parents Need to Be Honest About the Dangers of the Internet I'm sure every generation says that "kids grow up so fast these days", but in our case the internet really does push our kids to be exposed to things they otherwise wouldn't experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren't ready for or should never be exposed to (and I'm not just talking about adult material – have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content – be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they're eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever kids need to be educated on the dangers of engaging with people online and sharing personal information. 
You can think of it as an evolved discussion that our parents had with us about using the phone: "Don't say 'I'm home alone', don't say when mom or dad get home, don't tell them any information, etc." Parents Need to Talk Self Worth at Home Katie makes the point better than I ever could (one bad word towards the end): Our children need to understand their value beyond what the latest issue of TigerBeat says, or the media who continues flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn't mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be – and not just text but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn't something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is something discussed, encouraged, and celebrated is a step in the right direction. Parental Wake Up Call This post is not a critique of Amanda's parents. The reality is that cyberbullying/abuse is happening every day, and there are millions of parents that have no clue it's happening to their children. Amanda's story is a wake-up call that our children's online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can't imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob, or call 911 to report my daughter had attempted suicide by drinking bleach, or deal with a child turning to drugs/alcohol/cutting to cope. It would be horrendous if we as parents didn't re-evaluate our family internet policies in light of this event. And in the end, Amanda's video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • Tool to annotate pictures (screenshots) for documentation purposes?

    - by René Nyffenegger
    A long time ago, I saw someone use a piece of software (on Windows) that was specifically created to annotate pictures. It made it simple to add arrows, boxes, and circles in "outstanding" colors to the image. Unfortunately, I don't remember what program that was. Now I have to document a GUI, and I'd like to use this software to annotate screenshots of the GUI so that I can show the order of flow and the dependencies between its various parts. I'd be very happy if someone could point me in the right direction.

    Read the article
