Search Results

Search found 48340 results on 1934 pages for 'microsoft test manager'.


  • Reboot without sudoer privileges?

    - by Lincoln
    Hi all, I've been trying to get my Ubuntu machine to restart without having to edit the sudoers file. This used to be possible (in Lucid, I think) using a dbus command: dbus-send --system --print-reply --dest=org.freedesktop.ConsoleKit /org/freedesktop/ConsoleKit/Manager org.freedesktop.ConsoleKit.Manager.Restart But this now gives me an error; it looks like things have changed. In KDE (which I don't use) there is something similar (see this answer). Could anyone show me an alternative way to make my machine reboot from a script (without adjusting rights)?
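    For what it's worth, on releases that have moved from ConsoleKit to systemd-logind, the equivalent call goes through the login1 interface. This is a hedged sketch of that variant, not something taken from the original question:

        # Reboot via systemd-logind; works without sudo when polkit allows it
        # for the active local session.
        dbus-send --system --print-reply \
            --dest=org.freedesktop.login1 /org/freedesktop/login1 \
            org.freedesktop.login1.Manager.Reboot boolean:true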

    Read the article

  • Linux Mint 14 available as a Release Candidate: "Nadia" ships with Nemo, the fork of Nautilus, and the Cinnamon 1.6 and Mate 1.4 desktops

    "Nadia" : Linux Mint 14 disponible en Release Candidate avec Nemo, le fork de Nautilus les bureaux Cinnamon 1.6 et Mate 1.4 [IMG]http://www.franck-depan.fr/images/logo/systemes-exploitation/linux/distribution-mint/mint-logo.png[/IMG] L'équipe de développement de GNU/Linux Mint annonce la Release Candidate de la quatorzième version de sa distribution fondée sur Ubuntu Voici une brève liste des nouveautés :Mate 1.4 Cinnamon 1.6 Mint Desktop Manager Software Manager améliorations système Mate 1.4 Mate 1.4 renforce non seulement la...

    Read the article

  • QA - Developer communication

    - by exiter2000
    I am a developer and have worked at this company for 4-5 years now. We have been practicing Scrum for about 2 years. I think I have worked well with QA. I believe QA engineers, developers and technical writers are all one team. We are also actively hiring new team members, and as a long-standing member of the team I am expected to help new members (both developers and testers) with my business knowledge. We work in two-week sprints. I usually deliver my user story completely by the first day of the second week, and earlier I do a QA build with partial functionality of my user story so that QA has a good idea of my implementation and flow. Recently, I have run into problems with some QA engineers. In the first week, they do not talk... In the stand-up meeting, they say they are developing test cases, regardless of whether I have delivered the user story or not. In the second week, I do not have a single defect until Thursday afternoon, and then suddenly I have a major defect plus several minor UI defects against work I delivered a week ago. Or I get one or two minor defects early in the second week, but the major defects arrive on Thursday afternoon or Friday morning. This eventually makes the story roll over to the next sprint. A major defect takes time to fix and, more importantly, it triggers a regression test for the story... Even if I work Thursday evening and fix it, the testing will not finish in time. And this happens repeatedly with certain QA engineers. As a member of the same team, I asked the QA engineers whether they could test for major defects with higher priority... Rejected... because I "do not understand the QA process". So, in the stand-up meeting on the Wednesday of the second week, I asked roughly how many of the major test cases had been covered so far... The response was that I should not ask QA this in the stand-up meeting... What do I do?

    Read the article

  • Change the Powershell $profile directory

    - by Swoogan
    I would like to know how to change the location my $profile variable points to. PS H:\> $profile H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 H:\ is a network share, so when I create my profile file and load PowerShell I get the following: Security Warning Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1? [D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"): According to Microsoft, the location of $profile is determined by the %USERPROFILE% environment variable. This is not true: PS H:\> $env:userprofile C:\Users\username For example, I have an XP machine working how I want: PS H:\> $profile C:\Documents and Settings\username\My Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 PS H:\> $env:userprofile C:\Documents and Settings\username PS H:\> $env:homedrive H: PS H:\> $env:homepath \ Here's the same output from the Vista machine where $profile points to the wrong place: PS H:\> $profile H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 PS H:\> $env:userprofile C:\Users\username PS H:\> $env:homedrive H: PS H:\> $env:homepath \ Since $profile isn't actually determined by %USERPROFILE%, how do I change it? Clearly anything that involves changing the homedrive or homepath is not the solution I'm looking for.
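    As a hedged diagnostic sketch (standard PowerShell calls and registry locations, not taken from the question): on Vista and later, $profile is derived from the "My Documents" folder rather than from %USERPROFILE%, so the place to look is wherever that folder has been redirected:

        # Where PowerShell thinks "My Documents" lives -- $profile is built from this path.
        [Environment]::GetFolderPath('MyDocuments')

        # The redirection is stored in the "Personal" value here; repointing it to a
        # local folder (or removing the My Documents redirection) moves $profile with it.
        Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' |
            Select-Object -ExpandProperty Personal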

    Read the article

  • iis7 .net webservice 404 error

    - by agilenoob
    I have a webservice /test/Service1.asmx in the same folder as a page /test/test.aspx. The page works fine, but I get the message below for the service in the same location. I know the file is there and the URL is correct, and I have added the script module and managed handler as well. If anyone knows what I'm missing here, I'd appreciate it. Server Error in '/' Application. The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /test/Service1.asmx Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016 FAILED REQUEST LOG: ModuleName ManagedPipelineHandler Notification 128 HttpStatus 404 HttpReason Not Found HttpSubStatus 0 ErrorCode 0 ConfigExceptionInfo Notification EXECUTE_REQUEST_HANDLER ErrorCode The operation completed successfully. (0x0)

    Read the article

  • Apache mod_proxy or mod_rewrite to hide the root of a webserver behind a path

    - by Giovanni Nervi
    I have two Apache 2.2.21 servers, one external and one internal, and I need to map the internal Apache behind a path on the external Apache, but I have some problems with absolute URLs. I tried these configurations: RewriteEngine on RewriteRule ^/externalpath/(.*)$ http://internal-apache.test.com/$1 [L,P,QSA] ProxyPassReverse /externalpath/ http://internal-apache.test.com/ or <Location /externalpath/> ProxyPass http://internal-apache.test.com/ ProxyPassReverse http://internal-apache.test.com/ </Location> My internal Apache uses absolute paths to reference resources such as images, CSS and HTML, and I can't change that now. Any suggestions? Thank you
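    One common approach to the absolute-URL problem is to rewrite the links inside the proxied responses with mod_proxy_html. This is only a hedged sketch: mod_proxy_html is a separate module on Apache 2.2, and the directive names below follow its 3.x releases, so they are an assumption rather than something taken from the question:

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
            # Rewrite absolute and root-relative links in the returned HTML (mod_proxy_html)
            ProxyHTMLEnable On
            ProxyHTMLURLMap http://internal-apache.test.com /externalpath
            ProxyHTMLURLMap / /externalpath/
        </Location>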

    Read the article

  • FREE goodies if you are a UK based software house already live on the Windows Azure Platform

    - by Eric Nelson
    In the UK we have seen some fantastic take-up of the Windows Azure Platform and we have lined up some great stuff in 2011 to help companies fully exploit the Cloud – but we need you to tell us what you are up to! Once you tell us about your plans around Windows Azure, you will get access to FREE benefits including email-based developer support and a free monthly allowance of Windows Azure, SQL Azure and AppFabric from Jan 2011 – and more! (This offer is referred to as Cloud Essentials and is explained here.) And… we will be able to plan the right amount of activity to continue to help early adopters through 2011. Step 1: Sign up your company to Microsoft Platform Ready (you will need a Windows Live ID to do this). Step 2: Add your applications; for each application, state your intention around Windows Azure (and SQL etc. if you so wish). Step 3: Verify your application works on the Windows Azure Platform. Step 4 (Optional): Test your application works on the Windows Azure Platform – download the FREE test tool, test your application with it and upload the successful results. Step 5: Revisit the MPR site in early January to get details of Cloud Essentials and other benefits. P.S. You might want some background on the “fantastic take-up” bit: we helped over 3000 UK companies deploy test applications during the beta phase of Windows Azure; we directly trained over 1000 UK developers during 2010; we already have over 100 UK applications profiled on the Microsoft Platform Ready site; and in a recent survey of UK ISVs you all look pretty excited about the Cloud – 42% already offer their solution on the Cloud or plan to.

    Read the article

  • PowerShell 3.0 x64 bit broken after installing KB2506143

    - by Dave Parker
    I have searched using all kinds of variations on relevant terms and I cannot find a single other instance of someone else having this exact same problem, so I am hoping someone here may have a clue. Problem: I installed Windows Management Framework 3.0 (KB2506143) by downloading and running Windows6.1-KB2506143-x64.msu from Microsoft.com. Once completed I rebooted my machine as requested. After rebooting and logging in, I try to run the 64-bit PowerShell command shell and it comes up for a second then goes away. The 32-bit shell seems to work fine; it is just the 64-bit one that fails. Looking in the Fusion logs, I found: *** Assembly Binder Log Entry (10/4/2012 @ 1:51:48 PM) *** The operation failed. Bind result: hr = 0x80070002. The system cannot find the file specified. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\mscorwks.dll Running under executable C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = ********\***** LOG: DisplayName = Microsoft.PowerShell.ConsoleHost, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL <remainder omitted> GacUtil reveals that there is a Microsoft.PowerShell.ConsoleHost, Version=1.0.0.0, but not 3.0.0.0. I tried uninstalling KB2506143 (which removed MSVCRT90.dll and caused Windows Live Messenger to fail on load after rebooting again, so I ran a repair install on Windows Live Essentials and that fixed the Messenger problem) and then re-installing it, but nothing changed. If it helps, here are what I think may be the relevant parts of my hardware/software environment. Environment: Dell Latitude E6510, 8GB RAM; Windows 7 Professional 64-bit with SP1; Visual Studio 2010 Professional installed (includes .NET 4.0); Visual Studio 2012 Professional installed; Microsoft Forefront Client Security. Any clues out there? Thanks, Dave

    Read the article

  • WEBCAST: Strategies for Managing the Oracle Database Lifecycle

    - by Scott McNeil
    Thursday, November 1, 10:00 a.m. PST / 1:00 p.m. EST. Join us for a live Webcast and see how Oracle Enterprise Manager 12c makes database lifecycle management easier. You’ll learn how to: Simplify database configurations thanks to extensive automation for discovery and change detection; Improve IT service levels with Oracle’s next-generation database patching and provisioning automation; Ensure consistency and compliance with comprehensive database change management. Register today. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter. Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Cannot update 12.04 due to "Failed to fetch" warning

    - by harsha
    I can't update my Ubuntu 12.04. I tried it from the terminal, from the Update Manager and also from the Synaptic package manager. The error is W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en_IN Unable to connect to 172.20.0.100:8080: W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en Unable to connect to 172.20.0.100:8080: E: Some index files failed to download. They have been ignored, or old ones used instead.
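    The "Unable to connect to 172.20.0.100:8080" part of the error suggests apt is trying to go through a proxy that is no longer reachable. As a hedged diagnostic sketch (standard apt locations, not taken from the question), check whether such a proxy is configured:

        # Look for a configured apt proxy (Acquire::http::Proxy) in the usual places.
        grep -ri "proxy" /etc/apt/apt.conf /etc/apt/apt.conf.d/ 2>/dev/null

        # Environment variables can also force apt through a proxy.
        env | grep -i proxy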

    Read the article

  • Monitoring Visual Studio 2010 Performance Problems

    - by TATWORTH
    At http://visualstudiogallery.msdn.microsoft.com/fa85b17d-3df2-49b1-bee6-71527ffef441, Microsoft have provided a tool for Visual Studio to provide reports on Visual Studio 2010 performance problems. The use of it has been discussed at http://blogs.msdn.com/b/visualstudio/archive/2011/05/02/perfwatson.aspx as follows: "Would you like Visual Studio 2010 to be even faster? Would you like any performance issue you see to be  reported automatically without any hassle? Well now you can, with the new Visual Studio PerfWatson extension! Install this extension and help us deliver a faster Visual Studio experience. We’re constantly working to improve the performance of Visual Studio and take feedback about it very seriously. Our investigations into these issues have found that there are a variety of scenarios where a long running task can cause the UI thread to hang or become unresponsive. Visual Studio PerfWatson is a low overhead telemetry system that helps us capture these instances of UI unresponsiveness and report them back to Microsoft automatically and anonymously. We then use this data to drive performance improvements that make Visual Studio faster." Now instead of complaining you too can help Microsoft locate and fix performance problems with Visual Studio 2010. The requirements are: "Following are the pre-requisites for installing Visual Studio PerfWatson: Windows Vista/2008/2008 R2/7 (Note: PerfWatson is not supported for Windows XP) Visual Studio 2010 SP1 (Professional, Premium, or Ultimate)"

    Read the article

  • Migrating Printers from XP to 7 (using USMT 4.0)

    - by Luís Mendes
    I'm attempting to migrate a user from an XP machine to a Windows 7 machine, using USMT 4.0 (User State Migration Tool, from Microsoft). For some reason the migration of printers isn't working, and none of the topics at Microsoft's TechNet are helping me out. I'll list the topics, just for the record: http://social.technet.microsoft.com/Forums/en-US/w7itproinstall/thread/ab8d6d70-9d1b-419c-8149-37387d4eba6d http://social.technet.microsoft.com/Forums/en-US/w7itproinstall/thread/e92d9ce9-ad9f-4ba0-b326-c7ae6dc50a58 There was a suggestion to use a customized XML configuration file to force the migration of printers, but it didn't work either. They also mention a folder named DlManifests, which has to be present in the USMT directory. I've tried everything, but with no success. Can anyone please help me sort this one out? Apparently it's a bug, but there's gotta be some workaround. Thank you!

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000; or a business process that handles article reviews in which a group of reviewers need to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval scenarios: manage documents and other transactional data through approval chains. For example: approve expense report, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case management: workflows around work management or case management. For example, processing a service request; this could be routed to various people who all need to modify the task, and it may also incorporate ad hoc routing which is unknown at design time. SOA 11g Human Workflow includes the following features: assignment and routing of tasks to the correct users or groups; deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task; presentation of tasks to end users through a variety of mechanisms, including a Worklist application; organization, filtering, prioritization and other features required for end users to productively perform their tasks; reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks.

    Human Workflow Architecture: The Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task. Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? And so on.

    Stages and Participants: When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel. You can combine them to create any usage you require. See diagram below:

    Assignment and Routing: There are different ways a task can be assigned and routed. Single Approver: task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified with the decision. If the task is assigned to a group then once one of the managers acts on it, the task is completed. Parallel: task is assigned to a set of people that must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote.
    Serial: participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information): task is assigned to participants who can view it, add comments and attachments, but cannot modify or complete the task.

    Task Actions: The following is the list of actions that can be performed on a task. Claim: if a task is assigned to a group or multiple users, then the task must be claimed first to be able to act on it. Escalate: if the participant is not able to complete a task, he/she can escalate it; the task is reassigned to his/her manager (up one level in a hierarchy). Pushback: the task is sent back to the previous assignee. Reassign: if the participant is a manager, he/she can delegate a task to his/her reports. Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete the task; any of the other assignees can claim and complete the task. Request Information and Submit Information: use when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees. Suspend and Resume: if a task is not relevant, it can be suspended; a suspension is indefinite and does not expire until Resume is used to resume working on the task. Withdraw: if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task; the business process determines what happens next. Renew: if a task is about to expire, the participant can renew it; the task expiration date is extended one week.

    Notifications: Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following: assigned/renewed/delegated/reassigned/escalated; completed; error; expired; request info; resume; suspended; added/updated comments and/or attachments; updated outcome; withdraw; other actions (e.g. acquiring a task). Here is an example of an email notification:

    Worklist Application: The Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can: perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks; filter the task view based on various criteria; work with standard work queues, such as high priority tasks, tasks due soon and so on (work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high priority tasks, tasks due in 24 hours, expense approval tasks and more); define custom work queues; gain proxy access to part of another user's tasks; define custom vacation rules and delegation rules; enable group owners to define task dispatching rules for shared tasks; collect a complete workflow history and audit trail; use digital signatures for tasks; run reports like unattended tasks, task productivity, etc. Here is a screenshot of what the Worklist Application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's detail.
    References: Introduction to SOA Suite 11g Human Workflow Webcast; Note 1452937.2 Human Workflow Information Center; Using the Human Workflow Service Component 11.1.1.6; Human Workflow Samples; Human Workflow APIs Java Docs

    Read the article

  • Do best practices to avoid vendor lock-in exist?

    - by user1598390
    Is there a set of community-approved rules to avoid vendor lock-in? I mean something one can show to a manager or other decision maker that is easy to understand and easily verifiable. Is there some universally accepted set of rules, a checklist or conditions that help detect and prevent vendor lock-in in an objective, measurable way? Have any of you warned a manager about the danger of vendor lock-in during the initial stages of a project?

    Read the article

  • Windows Embedded Compact 7

    - by Valter Minute
    This will be the official name of the new release of Windows CE. Windows Embedded Compact 7 is available as a public CTP and it already supports a wide range of CPUs and both the Device Emulator and VirtualPC emulated environments. So I’ll have to learn a new (and longer) name for my favorite OS… but I (and all my two readers!) will be able to test it as soon as the download from the Connect web site completes (I'm sorry for my readers, but you'll have to download it by yourselves). Here’s a link for the download (it's free but you’ll have to register on Connect with a valid LiveId): https://connect.microsoft.com/windowsembeddedce Remember that this is still a beta (or “Community Technology Preview” if you speak marketing language), so it’s better not to install it on your main development PC (or, at least, back up everything before installation), and the features and performance you get from this beta may not be the same as in the final release of the OS. You can discover the new features of Windows Embedded Compact on the new “official” webpage on the Microsoft website: http://www.microsoft.com/windowsembedded/en-us/products/windowsce/compact7.mspx or on Olivier’s blog: http://blogs.msdn.com/b/obloch/archive/2010/06/01/windows-embedded-compact-7-announced-and-public-ctp-available.aspx I hope to be able to post some interesting content about Windows Embedded Compact 7 soon (and maybe shorten its name to CE7 in my blog posts, once I've made sure that neither of my two readers is working for Microsoft's marketing department…). Technorati Tags: "Windows Embedded Compact 7"

    Read the article

  • Virtualhost Wildcard Subdomains

    - by Khuram
    We have one static IP on which we have routed our company website. We have set up a local machine on Windows with WAMP to run our testing server. We want virtual hosts to test our different apps. However, we have a new project which uses wildcard subdomains, and we are not sure how to create the wildcard subdomains in our VirtualHosts. We use: NameVirtualHost * <VirtualHost *> ServerAdmin admin@test DocumentRoot "E:/Wamp/www/corporate" ServerName companysite.com </VirtualHost> <VirtualHost *> ServerAdmin admin@test DocumentRoot "E:/Wamp/www/project" ServerName project.companysite.com </VirtualHost> <VirtualHost *> ServerAdmin admin@test DocumentRoot "E:/Wamp/www/project" ServerName *.project.companysite.com </VirtualHost> However, the last * wildcard does not work. Any help?
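    A hedged note (standard Apache behaviour, not taken from the question): ServerName does not accept wildcards, but ServerAlias does, so one common fix is to fold the wildcard into the project vhost as an alias:

        <VirtualHost *>
            ServerAdmin admin@test
            DocumentRoot "E:/Wamp/www/project"
            ServerName project.companysite.com
            # ServerAlias, unlike ServerName, accepts wildcard hostnames
            ServerAlias *.project.companysite.com
        </VirtualHost>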

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by Mark Lauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.HTML files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's set up: sample directory tree c:/inetpub/ (nothing in here) d:/web_files/my_web_apps d:/web_files/my_web_apps/app1/ d:/web_files/my_web_apps/app2/ d:/web_files/my_web_apps/html_files/ app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS... sample web directory tree //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) If I put a file called test.html in the root of //app1/ and then add the default mapping to the asp.net dll and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected. If I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated then it displays test.html. If I put the test.html file in the html_files directory I get a totally different behavior. Now the login.aspx page loads and I stuck some code in to check if I was still authenticated: <p>autheticated: <%=User.Identity.IsAuthenticated%></p> I figured it would say false because why else would it bother to load the login page? Nope, it says true - so it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on google to see if I've missed something. Fingers crossed.
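    For reference, here is a minimal sketch (not from the original post, and only relevant if the html_files virtual directory inherits configuration from the parent application rather than being marked as an application of its own) of a root web.config that scopes the authorization rule to the virtual path; it assumes *.html has already been wildcard-mapped to aspnet_isapi.dll:

        <!-- hypothetical root web.config fragment -->
        <configuration>
          <system.web>
            <authentication mode="Forms">
              <forms loginUrl="login.aspx" />
            </authentication>
          </system.web>
          <location path="html_files">
            <system.web>
              <authorization>
                <!-- deny anonymous users; authenticated users fall through to allow -->
                <deny users="?" />
              </authorization>
            </system.web>
          </location>
        </configuration>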

    Read the article

  • Mocking concrete class - Not recommended

    - by Mik378
    I've just read an excerpt from the book "Growing Object-Oriented Software" which explains some reasons why mocking a concrete class is not recommended. Here is some sample code of a unit test for the MusicCentre class: public class MusicCentreTest { @Test public void startsCdPlayerAtTimeRequested() { final MutableTime scheduledTime = new MutableTime(); CdPlayer player = new CdPlayer() { @Override public void scheduleToStartAt(Time startTime) { scheduledTime.set(startTime); } } MusicCentre centre = new MusicCentre(player); centre.startMediaAt(LATER); assertEquals(LATER, scheduledTime.get()); } } And his first explanation: The problem with this approach is that it leaves the relationship between the objects implicit. I hope we've made clear by now that the intention of Test-Driven Development with Mock Objects is to discover relationships between objects. If I subclass, there's nothing in the domain code to make such a relationship visible, just methods on an object. This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class. I can't figure out exactly what he means when he says: This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class. I understand that the service corresponds to MusicCentre's method called startMediaAt. What does he mean by "elsewhere"? The complete excerpt is here: http://www.mockobjects.com/2007/04/test-smell-mocking-concrete-classes.html

    Read the article

  • SCOM, 90 Days In, I

    - by merrillaldrich
    At my office we’re about 90 days into our implementation of System Center Operations Manager for Windows Server and SQL Server monitoring. All in all it’s been a good experience, and I’m really excited to have access to this tool. I’ve logged a fair number of years as a DBA on products like Idera’s SQL Diagnostic Manager and Quest Spotlight on SQL Server Enterprise (and “roll-your-own” solutions) in smaller environments, and liked them, but they always, in my experience, struggled with really large...(read more)

    Read the article

  • How to add delay in autologin

    - by raj
    I enabled autologin on my system (CentOS 6.2). To do that I edited the file /etc/gdm/custom.conf and entered this: [daemon] AutomaticLoginEnable=true AutomaticLogin=test Here 'test' is an account name, and autologin works for that account, but the problem is that it is not possible to log out. Every time I log out it returns to GDM (the graphical display manager), which again checks for the 'test' account; since that account is available, it logs straight back into the same account. I want to add a delay, meaning it should wait for some time, and only if no one logs in to any other account should the 'test' account be logged in. How do I add this delay?
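    A hedged suggestion (these are standard GDM options, not something from the question): GDM's timed login does exactly this, logging the named account in only after a countdown at the greeter during which anyone can sign in as a different user:

        # /etc/gdm/custom.conf -- sketch; replaces the AutomaticLogin* lines.
        # TimedLoginDelay is the number of seconds to wait at the login screen
        # before 'test' is logged in automatically.
        [daemon]
        TimedLoginEnable=true
        TimedLogin=test
        TimedLoginDelay=30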

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields calculated over millions of records, processing huge files of transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or a SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), run them automatically overnight and check the results. But this is only for T-SQL, and continuous integration is hard. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach I've derived over the years, but maybe I am just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at end of day, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally I never got there)? I hope this is not too open-ended a question. How do you test your map-reduce jobs, for example (I do not use Hadoop, but this is quite similar)? Luke
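    To make the stored-procedure approach mentioned above concrete, here is a minimal sketch of a TEST-schema check; the table and procedure names are invented for illustration, not taken from the question. It loads a small fixture, runs the production calculation and fails loudly when a derived value does not match:

        -- Hypothetical TEST-schema check: arrange, act, assert in T-SQL.
        CREATE PROCEDURE TEST.usp_Check_MonthEndTotals
        AS
        BEGIN
            -- Arrange: load a known fixture.
            TRUNCATE TABLE dbo.Transactions_Staging;
            INSERT INTO dbo.Transactions_Staging (AccountId, Amount)
            VALUES (1, 100.00), (1, -40.00), (2, 250.00);

            -- Act: run the production calculation under test.
            EXEC dbo.usp_Calculate_MonthEndTotals;

            -- Assert: the derived total for account 1 should be 60.00.
            IF NOT EXISTS (SELECT 1 FROM dbo.MonthEndTotals
                           WHERE AccountId = 1 AND Total = 60.00)
                RAISERROR('usp_Calculate_MonthEndTotals: wrong total for account 1', 16, 1);
        END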

    Read the article

  • How do I separate code into classes?

    - by Trycon
    I have this main class:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;

        public class tests extends BasicGameState {
            public boolean render = false;
            tests1 test = new tests1();

            public tests(int test) {
                // TODO Auto-generated constructor stub
            }

            @Override
            public void init(GameContainer arg0, StateBasedGame arg1) throws SlickException {
                // TODO Auto-generated method stub
            }

            @Override
            public void render(GameContainer arg0, StateBasedGame arg1, Graphics g) throws SlickException {
                // TODO Auto-generated method stub
                if (render == true) {
                    g.drawString("Hello", 100, 100);
                }
            }

            @Override
            public void update(GameContainer gc, StateBasedGame s, int delta) throws SlickException {
                // TODO Auto-generated method stub
                test.render = render;
                test.update(gc, s, delta);
            }

            @Override
            public int getID() {
                // TODO Auto-generated method stub
                return 1000;
            }
        }

    and its sub-class:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.state.StateBasedGame;

        public class tests1 {
            public boolean render;

            public void update(GameContainer gc, StateBasedGame s, int delta) {
                Input input = gc.getInput();
                if (input.isKeyPressed(Input.KEY_X)) {
                    render = true;
                }
            }
        }

    I was looking for a way to avoid putting too much code in one class. I'm new to Java. When I run my game and press X, nothing happens. How am I supposed to fix that?
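    A hedged observation on the flag handling (one possible fix, sketched from the code above, not an authoritative answer): tests copies its render field into test before calling test.update, but never copies the updated value back, so the flag that tests1 sets on the X key press is lost. Copying it back after the call lets render() see it:

        @Override
        public void update(GameContainer gc, StateBasedGame s, int delta) throws SlickException {
            test.update(gc, s, delta);   // tests1 sets its own render flag on KEY_X
            render = test.render;        // copy the result back so render() can draw the text
        }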

    Read the article

  • How can I check if PHP was compiled with the UNICODE version of the Win32 API?

    - by Wesley Murch
    This is related to this Stack Overflow post: glob() can't find file names with multibyte characters on Windows? I'm having issues with PHP and files that have multibyte characters on Windows. Here's my test case: print_r(scandir('./uploads/')); print_r(glob('./uploads/*')); Correct Output on remote UNIX server: Array ( [0] => . [1] => .. [2] => filename-äöü.jpg [3] => filename.jpg [4] => test?test.jpg [5] => ??? ?????.jpg [6] => ?????????.jpg [7] => ???.jpg ) Array ( [0] => ./uploads/filename-äöü.jpg [1] => ./uploads/filename.jpg [2] => ./uploads/test?test.jpg [3] => ./uploads/??? ?????.jpg [4] => ./uploads/?????????.jpg [5] => ./uploads/???.jpg ) Incorrect Output locally on Windows: Array ( [0] => . [1] => .. [2] => ??? ?????.jpg [3] => ???.jpg [4] => ?????????.jpg [5] => filename-äöü.jpg [6] => filename.jpg [7] => test?test.jpg ) Array ( [0] => ./uploads/filename-äöü.jpg [1] => ./uploads/filename.jpg ) Here's a relevant excerpt from the answer I chose to accept (which actually is a quote from an article that was posted online over 2 years ago): From the comments on this article: http://www.rooftopsolutions.nl/blog/filesystem-encoding-and-php The output from your PHP installation on Windows is easy to explain : you installed the wrong version of PHP, and used a version not compiled to use the Unicode version of the Win32 API. For this reason, the filesystem calls used by PHP will use the legacy "ANSI" API and so the C/C++ libraries linked with this version of PHP will first try to convert yout UTF-8-encoded PHP string into the local "ANSI" codepage selected in the running environment (see the CHCP command before starting PHP from a command line window) Your version of Windows is MOST PROBABLY NOT responsible of this weird thing. Actually, this is YOUR version of PHP which is not compiled correctly, and that uses the legacy ANSI version of the Win32 API (for compatibility with the legacy 16-bit versions of Windows 95/98 whose filesystem support in the kernel actually had no direct support for Unicode, but used an internal conversion layer to convert Unicode to the local ANSI codepage before using the actual ANSI version of the API). Recompile PHP using the compiler option to use the UNICODE version of the Win32 API (which should be the default today, and anyway always the default for PHP installed on a server that will NEVER be Windows 95 or Windows 98...) I can't confirm whether this is my problem or not. I used phpinfo() and did not find anything interesting, but I wasn't sure what to look for. I've been using XAMPP for easy installations, so I'm really not sure exactly how it was installed. I'm using Windows 7, 64 bit - so forgive my ignorance, but I'm not even sure if "Win32" is relevant here. How can I check if my current version of PHP was compiled with the configuration mentioned above? PHP Version: 5.3.8 System: Windows NT WES-PC 6.1 build 7601 (Windows 7 Home Premium Edition Service Pack 1) i586 Build Date: Aug 23 2011 11:47:20 Compiler: MSVC9 (Visual C++ 2008) Architecture: x86 Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--disable-isapi" "--enable-debug-pack" "--disable-isapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=D:\php-sdk\oracle\instantclient11\sdk,shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze"

    Read the article

  • Point subdirectory to another domain in IIS 6

    - by Liviu
    Is it possible to somehow rewrite www.mysite.com/Pictures to point to test.mysite.com/Pictures, or even more broadly to www.someothersite.com/Pictures? Note: www.mysite.com and test.mysite.com are on separate machines and built with different technologies (one is ASP.NET and the other is PHP), but I have access to both of them. I want a picture referenced as www.mysite.com/Pictures/pic12345.png to display correctly, even though there is no /Pictures folder on that server and the picture has to be retrieved from test.mysite.com/Pictures/pic12345.png. Ideally I want to do this in IIS 6 to test it. However, I am interested in whether it is possible to do in any webserver (Apache, IIS 7).

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes. root.conf: root /var/www; include php-fpm.conf; location ~ /\. { access_log off; log_not_found off; deny all; } php-fpm.conf: location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.socket; include fastcgi_params; } fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_index index.php; fastcgi_param HTTPS on; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200;
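    For comparison, here is a minimal reference server block (a sketch under assumed names, not the poster's config and not a guaranteed fix) that executes PHP directly under the document root; diffing against something like this can help isolate which directive in the real config stops the \.php$ location from matching files at the top level:

        # Sketch only: TLS directives omitted for brevity; socket path assumed from the question.
        server {
            listen 80;
            server_name example.net;
            root /var/www;
            index index.php index.html;

            location / {
                try_files $uri $uri/ =404;
            }

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.socket;
            }
        }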

    Read the article
