Search Results

Search found 1829 results on 74 pages for 'automated'.

Page 54/74 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • Zenoss Setup for Windows Servers

    - by Jay Fox
    Recently I was saddled with standing up Zenoss for our enterprise. We're running about 1200 servers, so manually touching each box was not an option. We use LANDesk for a lot of automated installs and patching - more about that later. The steps below may not necessarily have to be completed in this order - it's just the way I did it.
    STEP ONE: Set up a standard AD user. We want to do this so there's minimal security exposure. Call the account whatever you want; "domain/zenoss" is used in the examples below.
    STEP TWO: Make the following local groups accessible by your zenoss account:
    - Distributed COM Users
    - Performance Monitor Users
    - Event Log Readers (which doesn't exist on pre-2008 machines)
    Here's the PowerShell script I used to set up access to these local groups:

        # Created to add an Active Directory account to local groups.
        # Must be run from an elevated prompt, with permissions on the remote machine(s).
        # The txt file should contain the names of the machines that need the account added, one per line.
        # The script will process machines line by line.
        foreach($i in (gc c:\tmp\computers.txt)){
            # Add the user to the first group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Distributed COM Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the second group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Performance Monitor Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the third group - group doesn't exist on < Server 2008
            #$objUser=[ADSI]("WinNT://domain/zenoss")
            #$objGroup=[ADSI]("WinNT://$i/Event Log Readers")
            #$objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
        }

    STEP THREE: Set up security on the machine's WMI namespace so our domain/zenoss account can access it. The default namespace for Zenoss is root/cimv2. Here's the PowerShell script:

        # Grant the account defined below access to the WMI namespace.
        # Has to be run as an account with permissions on the remote machine.
        function get-sid
        {
            Param ($DSIdentity)
            $ID = new-object System.Security.Principal.NTAccount($DSIdentity)
            return $ID.Translate( [System.Security.Principal.SecurityIdentifier] ).toString()
        }
        $sid = get-sid "domain\zenoss"
        $SDDL = "A;;CCWP;;;$sid"
        $DCOMSDDL = "A;;CCDCRP;;;$sid"
        $computers = Get-Content "c:\tmp\computers.txt"
        foreach ($strcomputer in $computers)
        {
            $Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
            $DCOM = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction").uValue
            $security = Get-WmiObject -ComputerName $strcomputer -Namespace root/cimv2 -Class __SystemSecurity
            $converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
            $binarySD = @($null)
            $result = $security.PsBase.InvokeMethod("GetSD",$binarySD)
            $outsddl = $converter.BinarySDToSDDL($binarySD[0])
            $outDCOMSDDL = $converter.BinarySDToSDDL($DCOM)
            $newSDDL = $outsddl.SDDL += "(" + $SDDL + ")"
            $newDCOMSDDL = $outDCOMSDDL.SDDL += "(" + $DCOMSDDL + ")"
            $WMIbinarySD = $converter.SDDLToBinarySD($newSDDL)
            $WMIconvertedPermissions = ,$WMIbinarySD.BinarySD
            $DCOMbinarySD = $converter.SDDLToBinarySD($newDCOMSDDL)
            $DCOMconvertedPermissions = ,$DCOMbinarySD.BinarySD
            $result = $security.PsBase.InvokeMethod("SetSD",$WMIconvertedPermissions)
            $result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction",$DCOMbinarySD.binarySD)
        }

    STEP FOUR: Get the SID for our zenoss account. PowerShell:

        # Provide the AD user, get its SID.
        $objUser = New-Object System.Security.Principal.NTAccount("domain", "zenoss")
        $strSID = $objUser.Translate([System.Security.Principal.SecurityIdentifier])
        $strSID.Value

    STEP FIVE: Modify the Service Control Manager to allow access to the zenoss AD account. This command can be run from an elevated command line, or through PowerShell:

        sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CCLCRPRC;;;PUT_YOUR_SID_HERE_FROM_STEP_FOUR)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"

    In step two the script plows through a txt file, processing each computer listed on each line. The other scripts I ran on each machine using LANDesk; you can probably edit them to process a text file as well. That's what got me off the ground monitoring the machines using Zenoss. Hopefully this is helpful for you. Watch the line breaks when copying the scripts.
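    As a sanity check, something like the following can confirm the zenoss account can actually query WMI on a remote box; this is only a sketch, and SOMESERVER is a placeholder for a machine from computers.txt.

        # Quick check - run from any workstation, supplying the zenoss password when prompted.
        # SOMESERVER is a placeholder computer name.
        $cred = Get-Credential "domain\zenoss"
        Get-WmiObject -Class Win32_OperatingSystem -ComputerName SOMESERVER -Namespace root\cimv2 -Credential $cred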

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including trace, log, checkpoint before, suspend, abort, and several others. With 11gR2, events can now be triggered by DDL operations, and variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:
    - Automatic switchover to the secondary system during planned outages
    - Better monitoring over source systems' performance and automated switchover to the standby system in case of an outage with the primary system
    - Automatic switchover from initial load to changed data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers
    If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • Announcing Key Functional White Papers for SIM and ReIM

    - by Oracle Retail Documentation Team
    Oracle Retail has published two new documents on My Oracle Support (https://support.oracle.com) that provide partners and retailers with deeper functional information about two products: Oracle Retail Store Inventory Management (SIM) and Oracle Retail Invoice Matching.
    Oracle Retail Store Inventory Management Item Configuration White Paper (Doc ID 1507221.1)
    There is functionality within the Store Inventory Management system related to item configuration that spans multiple concepts and applies to the application as a whole rather than to a specific area. This white paper covers numerous topics around item configuration, including:
    - Item Transaction Levels
    - Item Long Description
    - Pack Size
    - Standard Unit of Measure
    - Standard Unit of Measure Conversion
    - Pack Items
    - Simple Pack Conversion Items (Notional Packs)
    - Ranging Items
    - Item Status
    - Non-Sellable Items
    - Type-2 Item Recognition
    - UPC-E Barcodes
    - Non-Inventory Items
    - Consignment and Concession Items
    - Quick Response Codes
    Oracle Retail Invoice Matching Financial Transactions (Doc ID 1500209.1)
    This document explains the financial transactions that are posted by Oracle Retail Invoice Matching (ReIM). The scope of the document is limited to ReIM transactions only; it does not explain Retail Merchandising System (RMS), Finance, or Accounts Receivable transactions. ReIM follows the double-entry accounting standard, which works by recording the debit and credit of each financial transaction belonging to each party involved. Each transaction means a profit to one account (debit) and a loss to another account (credit). Full invoice match processing is completed in ReIM, with payment recommendations communicated to Oracle Accounts Payable. ReIM matches merchandise orders and receipts against merchandise invoices, performing automated and manual matching, as well as discrepancy-resolution processing. Matched invoices are posted to interface staging tables specifying the amount and date to pay, vendor, site ID, General Ledger Chart of Accounts (GL CoA) information, and payment terms. Other payables documents, including debit memos, credit memos and credit notes, are also interfaced to Accounts Payable through the ReIM staging tables (IM_AP_STAGE_HEAD and IM_AP_STAGE_DETAIL). For information about how ReIM engages in this processing, see the latest Oracle Retail Invoice Matching Operations Guide. Certain ReIM transactions are not interfaced to Oracle Payables, but instead are interfaced to Oracle General Ledger through the IM_FINANCIAL_STAGE table. When analyzing transactions posted through the staging tables, retailers should note the transaction type, Standard/Credit, as well as the sign in the amount field. Technically, a negative sign on a credit transaction changes the transaction to a debit entry, and vice versa. This document is concerned with the financial meaning of the transactions and avoids a discussion of negative numbers in T-charts.

    Read the article

  • S11 launched

    - by unixman
    Now that Oracle Solaris 11 is out, it's time to do two things: 1) see what's in it, what's new and why it's important, and then assess why it might make sense to begin evaluating it for your needs, and 2) acknowledge, give thanks to, and congratulate all the R&D personnel, architects, engineers, designers and testers who've put so much effort and energy into helping make Solaris 11 (and SunOS 5.11) what it has become -- starting way back circa 2004 and, more importantly, culminating in the recent years and months -- staying focused on the execution, unwavering in the face of various challenges.
    For #1 above, here are a few good things to get going with:
    - Watch the product launch replay
    - Visit the Solaris 11 Spotlight section on oracle.com
    - Get comfortable through introductory videos, detailed "how-to" guides (ex: how to create and publish IPS packages) and white papers on the new default root file system, ZFS, and reap the benefits brought on by the fundamental shift in easing the administration experience
    - Look at the next level of software lifecycle management that is enabled by technologies such as the Automated Installer and the Image Packaging System, which dramatically address patch-management-related challenges
    - Understand how we continue to innovate in areas of service intelligence, reliability and availability
    - Start to evaluate enhancements in virtualization capabilities -- whether influenced by the need to consolidate or motivated by the need to have increased service mobility across physical systems, leveraging hardware-level abstractions
    - Gain more control over your network-centric services through enhancements in network resource management, observability and I/O performance
    - Look beyond your existing infrastructure with confidence that you can re-host and transition to newer systems with the use of Solaris 10 zones running on top of Solaris 11
    - Relish the fact that you can do all this, get your data secure and encrypted and more, on both SPARC and x86-based systems
    - Stay informed by keeping an eye on relevant blogs, which we've begun turning up recently
    - Go through a hands-on lab
    - Sign up to take a class, or just watch various videos to begin to raise your comfort level with these technologies
    For #2 above -- there are many ways to do that. One way is to just say "thanks" with an email, a post, or a simple card, similar to one seen at a Barnes and Noble store recently. The front of the card is followed by what's inside... and as the saying goes, now more than ever, "it's what's inside that counts." And here's the inside of the card: So, what are you waiting for? Go download and try it out, and please let us know what you think of it!

    Read the article

  • Changing Your Design for Testability

    Sometimes I come across a way of putting something that is pithily good: not Hallmark trite, but an impactful and concise way of clarifying a previously obscure concept. A recent one of these happy occurrences was when I was reading the excellent Art of Unit Testing by Roy Osherove. After going through the basics of why you'd want to test code and how to do it, Roy confronts a frequent objection to having unit tests, that it ends up changing how you design your components: "When we write unit tests for our code, we are adding another end user (the test) to the object model. That end user is just as important as the original one, but it has different goals when using the model. The test has specific requirements from the object model that seem to defy the basic logic behind a couple of object-oriented principles, mainly encapsulation." [emphasis added by me] When I read this, something clicked for me. I used to find it persuasive that because unit tests caused you to change your design, they were more disruptive than they were worth. The counter-argument I heard is that the disruption was OK, because testable design was just obviously better. That argument was not convincing, as it seemed like delusional arrogance to suggest that any one type of design was just inherently better for the particular applications I was building. What was missing was that I was not thinking of unit tests as an additional and equal end user of my design. If I accepted that proposition, then it was indeed obvious that a testable design was better, because now all users of my component would be satisfied. Have I accepted that proposition? I'd phrase it slightly differently. I find more and more that having unit tests helps me write better, less buggy code before it gets to production or QA. As I write more unit tests, it gets easier to see how to create testable components, so I don't feel like it's taking me as much extra time up front. I pick and choose the components that seem most likely to benefit from automated tests, and it is working out nicely. If you already implement Test Driven Development, this whole post was probably a waste of your time <g> If you hate the idea of unit tests, well, probably not a great value prop for you either. However, if you are somewhere in between, at least take a minute and check out a sample chapter from Roy's book at: http://www.manning.com/osherove/.

    Read the article

  • How far should an entity take care of its properties values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, which is an entity that I'm using through Entity Framework:
    InvoiceHeader
    - BilledAmount (property, decimal)
    - PaidAmount (property, decimal)
    - Balance (property, decimal)
    I'm trying to find the best approach to keep Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here:
    1. Updating the balance amount every time BilledAmount and PaidAmount are updated (through their setters)
    2. Having an UpdateBalance() method that the callers would run on the object when appropriate
    I am aware that I could just calculate the Balance in its getter. However, that isn't really possible, because this is an entity field that needs to be saved back to the database, where it has an actual column and where the calculated amount should be persisted. My other worry about the automatically updating approach is that the calculated values might be a little bit different from what was originally saved to the database, due to rounding (an older version of the software used floats, but now decimals). So, loading, let's say, 2000 entities from the database could change their status and make the ORM believe that they have changed and need to be persisted back to the database the next time the SaveChanges() method is called on the context. It would trigger a mass of updates that I am not really interested in, or could cause problems if the calculation methods changed (the entities fetched would lose their old values, to be replaced by freshly recalculated ones, simply by being loaded). Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change to the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would be looping through each detail of the invoice to recalculate the header totals every time one of the properties changes. It probably would be fine for just one record, but if a lot of entities were fetched at once, it could create significant overhead, as it would perform this process every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them just to recalculate the balances. So far, I went with the UpdateBalance() method approach, mainly for the reasons I explained above, but I wonder if it was right. I'm noticing I have to keep calling these methods quite often and at different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data binding, because when the properties of the detail or header change, the other properties are left out of date and the method has no way to be called. What is the recommended approach in this case?

    Read the article

  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image-based environment (hint: Virtualization Template perhaps?!?).
    Planning: Includes requirements information, documentation, how-to guides, online documentation installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor. Both tools are very important to ensure your deployment and installation will be successful.
    Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances. Add new nodes to an existing cluster, and also perform upgrades (in this case to SQL Server 2008 R2). Also part of this is the option to find available updates.
    Maintenance: We find in this section options that will assist us in tasks like repairing corrupt installations and removing nodes from a cluster. An option that is interesting (and we should discuss its benefits later in another post) is being able to do an Edition Upgrade; this is a feature expansion and addition based on your product installation (Developer to Enterprise, for example).
    Tools: From the System Configuration Checker, to identify readiness for deployment in a successful manner, to being able to report on features installed. And being able to run upgrades of existing packages developed in the 2005 offering to the 2008 R2 release for SSIS.
    Resources: Useful and essential links to gather information and guidance.
    Advanced: Here is where it gets interesting. I break this down into 3 main groups:
    - Installation Automation: When you install SQL Server there is a configuration file that gets dropped (ConfigurationFile.ini) that allows you to perform automated installations. There are switches and options that go with this to have that process working (a sketch follows below).
    - Cluster configuration for Sysprep: Create images that are cluster-ready; 2 options, start the prep work, and then complete it at the final destination.
    - Stand-alone configuration for Sysprep: Like the clustering counterpart, 2 options, prep and complete. Giving you the option to create standard templates for your SQL Server deployments.
    I find it fitting that the 3 topics listed here should (and will) be additional topics I will discuss.
    Options: Very clear and specific about what this means. Select the Processor Type or the Installation Media Root Path.
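    As a hedged illustration of the Installation Automation point above, an unattended install driven by a saved ConfigurationFile.ini generally looks something like this; the exact switches can vary by edition and patch level, so treat it as a sketch rather than a recipe.

        # Sketch only: run from an elevated prompt at the root of the SQL Server media.
        # ConfigurationFile.ini is the file a manual install drops for later reuse.
        .\setup.exe /ConfigurationFile=ConfigurationFile.ini /IACCEPTSQLSERVERLICENSETERMS /QS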

    Read the article

  • What are different ways to reduce latency between a server and a web application? [closed]

    - by jjoensuu
    This is a question about a web application that provides SOAP web services. For the sake of this question, this web app is hosted on a server, SERVER B, which is located in California. We have an automated, scheduled process running on a server, SERVER A, located in New York. This scheduled process is supposed to send SOAP messages to SERVER B every so often, but it typically dies soon after starting. We have now been told by the vendor that the reason the process dies is the latency between SERVER A and SERVER B. The data traffic is routed through many different public networks. There is no dedicated line between SERVER A and SERVER B. As a result, I have been asked to look into ways to reduce the latency between SERVER A and SERVER B. So I wanted to ask, what are the different ways to reduce latency in a situation like this? For example, would it help to switch from HTTPS to some other secure protocol? (The thought here is that perhaps some other alternative would require fewer handshakes than HTTPS.) Or would a VPN help? If a VPN would reduce the latency, how would it do that? NOTE: I am not looking for an explicit answer that would work in my specific situation. I am just looking for a simple list of what technologies could be used for this. I will still have to evaluate the technologies and discuss them internally with others, so the list would just be a starting point. Here I am assuming that there exist very few ways to reduce latency between two servers that communicate across public networks using HTTPS. Feel free to correct me if this assumption is wrong, and please ask if there is a need for specific information. NOTE 2: A list of technologies is a specific answer to the question I stated in the title. NOTE 3: It's rather dumb to close this question when it is, after all, about me looking for information, and furthermore this information can clearly be useful for others. Anyway, luckily there are other sites where I can ask around. StackExchange seems too attached to its own philosophical principles. Many thanks
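    For what it's worth, a rough, hedged way to put a number on the current round trip from SERVER A before weighing any of the options; the URL below is a placeholder for whatever endpoint SERVER B actually exposes.

        # Rough baseline: time one full HTTPS request from SERVER A to SERVER B.
        # The URL is a placeholder endpoint.
        $client = New-Object System.Net.WebClient
        Measure-Command { $client.DownloadString("https://serverb.example.com/service") }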

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000 - A Survey of Software Reuse Repositories Defense Software Repository System (DSRS) The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects... ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts... The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ... [14] DSRS - Defense Technology for Adaptable, Reliable Systems URL: http://ssed1.ims.disa.mil/srp/dsrspage.html [15] STARS - Software Technology for Adaptable, Reliable Systems URL: http://www.stars.ballston.paramax.com/index.html [16] D. E. Perry and S. S. Popovitch, “Inquire: Predicate-based use and reuse,'' in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993. ... Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

    Read the article

  • Why do some user agents have spam urls in them (and why are they always Opera/Presto User-Agents)?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) to the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/Web Address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not even find a single discussion about it on the internet, nowhere; I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this weblink, but it means there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this: Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60 If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html , you will notice that the back and forward arrows are there in unescaped format. This isn't just true for botsvsbrowsers, but several other user agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e. are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent), so I cannot decide or understand the process by which this change is made (I know how to change a user agent), and I am unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8 or 9 point something browser version. I've run some statistical tests, parsing entries from all over the place, writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text such as +http://www.someSite.com appended to a user agent, normally when a crawler or bot provides its service URL. This is normal and isn't done with an embedded link (A HREF=) etc, so I'm not talking about "those".
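    If it helps with the include/exclude decision, a hedged sketch of flagging these records during log parsing; the log file name and the simple pattern are assumptions about your data, not a definitive classifier.

        # Sketch only: count requests whose user-agent string embeds an HTML anchor tag.
        # access.log is a placeholder for whatever log or export you are parsing.
        (Get-Content .\access.log | Where-Object { $_ -match '<a\s+href=' }).Count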

    Read the article

  • The Minimalist Approach to Content Governance - Manage Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Most people would probably agree that creating content is the enjoyable part of the content life cycle. Management, on the other hand, is generally not. This is why we thankfully have an opportunity to leverage the metadata, security and other settings that have been applied or inherited in the prior parts of our governance process. In the interest of keeping this process pragmatic, there is little day-to-day activity that needs to happen here. Most of the activity that happens post-creation will occur in the final "Retire" phase, in which content may be archived or removed. The Manage phase will focus on updating content and the metadata associated with it - specifically around ownership. Often the largest issues with content ownership occur when a content creator leaves an organization or changes roles within an organization.
    1. Update Content Ownership (as needed and on a quarterly basis)
    Why - Without updating content to reflect ownership changes it will be impossible to continue to meaningfully govern the content.
    How - Run reports against the metadata (creator and the department to which the creator belongs) on content items for a particular user that may have left the organization or changed job roles. If the content is without an owner, connect with the department marked as responsible. The content's ownership should be reassigned or, if the content is no longer needed by that department for some reason, it can be archived and/or deleted. With a minimal investment it is possible to create reports that use an LDAP or Active Directory system to contrast all noted content owners against the users that exist in the directory (a rough sketch of this follows below). The delta will indicate which content will need new ownership assigned.
    Impact - This implicitly keeps your repository and search collection clean. A very large percentage of content that ends up no longer being useful to an organization falls into this category. This management phase (automated if possible) should be completed every quarter or as needed. The impact of actually following through with this phase is substantial and will provide end users with a better experience when browsing and searching for content.
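    A rough, hedged sketch of that owner-versus-directory contrast, assuming the repository can export its content owners to a CSV file with an Owner column (both the file name and the column are assumptions) and that the ActiveDirectory PowerShell module is available:

        # Sketch only: flag content owners that no longer resolve in Active Directory.
        # content-owners.csv and its Owner column are assumed exports from the content repository.
        Import-Module ActiveDirectory
        Import-Csv .\content-owners.csv | ForEach-Object {
            $owner = $_.Owner
            if (-not (Get-ADUser -Filter "SamAccountName -eq '$owner'")) {
                Write-Output "Orphaned content owner: $owner"
            }
        }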

    Read the article

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production]. At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc. I have been searching the internet for a few days trying to find how this process is accomplished anywhere else -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due diligence testing, and we can suppose that any tools are available for use (if that helps to illustrate the process). To start things off, I think I understand this much:
    - We document the requirements, e.g.: support sine, cosine and tangent functions
    - We create some type of change request/work order to assign to programming
    - Coding takes place, commits are made to version control
    - Peer review commences
    - Programmer marks the work order as completed?
    ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the Software Testers I've worked with so far in my career. In fact, I've been thinking a lot about the Software Tester role in general and have reached a potentially contentious conclusion: non-developer Software Testers are less efficient at software testing than developers. Now, before everyone gets upset, hear me out. This isn't mere opinion: Software Testing and Software Development both require a lot of skills in common:
    - Problem solving
    - Thinking about corner cases
    - Analytical skills
    - The ability to define clear and concise step-by-step scenarios
    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc. So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform Software Testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work: hire a bunch of developers and split them into 2 roles:
    - Software developers
    - Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc)
    - Software developers doing application support. (I've removed this as it is probably a separate question altogether)
    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their Developer stints and Testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation. So maybe you could have 2 groups and make sure the rotation cycles of the groups are staggered. So, for example, if each rotation was two sprints long, the two groups would have their rotations 1 sprint apart. That way there's only a 50% turnover rate per sprint. Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to rotate through these roles. Let's assume I'm starting a new company and I can hire these ideal people.) Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • Trying to find resources to learn how to test software [closed]

    - by Davek804
    First off, yes this is a general question, and I'd be perfectly happy to move this to another portion of SE, but I didn't see a more fitting sub. Basically, I am hoping a more experienced QA tester can come along and really fill in some basics for me. So far, websites seem to be sparse in terms of explaining the languages involved, basic practices, etc. So, I'm sorry in advance if this is too general, but towards the end of this post I ask some specific questions, if it's just absolutely unacceptable to speak in general terms. I just landed a position as Junior Systems and QA Engineer with a social media startup. Their QA and testing is almost nonexistent, so if I do a good job, I imagine I'll find a lot of bugs and have a secure role in the business. I'm pretty good with the systems aspect of my role, but I need to learn more about the QA and testing aspects. We run hardware that's touchscreen based - the user can use and interact with the devices. So, in terms of my QA role, in the short term I need to build scripts to test the hardware/software as a 'user' to try to uncover bugs. First off, what language should these scripts be written in? Does anyone have some examples? What about the longer-term 'automated testing'? I'm familiar with regression testing as the developer adds in new features, sure, but the 50,000 other types of testing, not so much. Most of our hardware runs dotnet/C# code, with some of the servers running Java - but I don't expect to need to run tests on the Java side at this point. I hope to meet with one developer today and try to get a good idea of the output from the hardware, so that I can 'mock' this data that gets sent to the servers, to try to bug-test. Eventually, we will be moving the hardware closer to where I live and work, so that I can test virtually and on real hardware. A lot of the bugs we're dealing with now are like this: the local server, to which the kiosks report their data, gets updated from the kiosks, but the remote server does not. Or, vice versa, when the user registers on a kiosk, the remote server updates but the local server does not. But yeah, without much more detail, I imagine a lot of this info isn't helpful. I've bought a book, "How Google Tests Software", but it's really a book more about how their software testing is different from Microsoft's. It doesn't teach how to test so much as why their methods are better. Does anyone have a good book that I can buy? An ebook maybe? My local Barnes and Noble kinda had a terrible selection. I also figure a book from 2005 is not necessarily that good either.
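    One hedged, minimal way to mock kiosk traffic once the payload format is known; the endpoint and JSON below are pure placeholders, not the real interface.

        # Sketch only: push a fabricated kiosk payload at the local server and inspect the response.
        # The URL and JSON body are placeholders; substitute the real interface once it is documented.
        $client = New-Object System.Net.WebClient
        $client.Headers.Add("Content-Type", "application/json")
        $payload = '{ "kioskId": 42, "event": "registration", "user": "test-user" }'
        $client.UploadString("http://localserver.example/api/kiosk", $payload)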

    Read the article

  • Windows 7 cannot view FAT32 formatted bootable usb drive

    - by NaimK
    I'm having an issue where, when I run bootsect with a command line of "bootsect.exe /nt52 : /force /mbr", Windows 7 (the computer I'm running bootsect on) can no longer view the contents of the USB drive. Explorer tries to look at it and then fails, and I can't even correctly eject the drive; when I try, it does nothing until I yank it out, and then I get some errors. Bootsect reports success on writing the volume and the drive data to make it bootable, but it doesn't boot after copying on the necessary files (files from a created ISO; it works when it is created on XP). But this may be because I'm not following the same instructions as when building it on XP, since some of the commands don't always seem to work correctly. The drive is formatted to FAT32 (necessary, I think, because I'm installing a custom version of Windows XP Embedded). Any ideas? Or perhaps a good or automated way to load a USB drive with a custom version of Windows XP and make it bootable, from Windows 7? I am having some issues; for instance, "ufdprep.exe" rarely works when I run it from Windows 7, and I don't know why.
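    For reference, a hedged sketch of the prep sequence sometimes used from a Windows 7 box before running bootsect; "select disk 1" and the drive letter are placeholders, so confirm the disk number with "list disk" before running anything destructive.

        # Sketch only: wipe and re-prepare the USB stick, then write the NT52 boot code.
        # "select disk 1" is a placeholder - confirm the disk number with "list disk" first.
        $script = "select disk 1", "clean", "create partition primary",
                  "select partition 1", "active", "format fs=fat32 quick", "assign"
        $script | Set-Content "$env:TEMP\usbprep.txt"
        diskpart /s "$env:TEMP\usbprep.txt"
        # E: below is a placeholder for whatever letter diskpart assigned.
        bootsect.exe /nt52 E: /force /mbr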

    Read the article

  • Windows 7 AIK help

    - by microchasm
    I've just got in a few Windows 7 (64-bit, Windows 7 Professional) machines, and I'm trying to get the AIK 2010 working. I've set up one of the machines and installed AIK and MDT on it. I've followed the directions at http://technet.microsoft.com/en-us/library/dd349348%28WS.10%29.aspx about 3 times now, and also tried the built-in .chm help files that came with AIK. In AIK I grab the install.wim image off the OEM CD, at which point it asks me which version (I select Professional). I follow the rest of the instructions, creating a new Autounattend answer file and filling in the various bits and pieces according to the step-by-step guides. I verify the answer file (no warnings or errors), save it, and copy it onto a USB drive. I go to another machine, insert its OEM Win7 disk, and power on. I've set the BIOS to boot from CD, so it goes directly into the installation. Once the files are loaded and Setup starts, it immediately asks which version to install (Home Basic, Home Premium, Professional, Ultimate). Ugh, I thought it was supposed to be an automated install, and that selecting the version when opening the .wim file would answer this question. I looked on the net, in the help, and in AIK itself for an option to set which version gets installed, to no avail. Anyway, just for laughs I select Professional and hit continue. It copies files for about 10 seconds, then fails with the following error: "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information. [OK]". Clicking OK reboots the box, and obviously there are no log files because the OS isn't installed. It is a Dell Optiplex 380, Intel Core Duo 2.93 GHz, 4 GB RAM, 64-bit. Any help would be REALLY appreciated.
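    A hedged aside that may help with the edition prompt: the answer file's image selection is usually tied to an image index inside install.wim, and the indexes can be listed from a machine with the deployment tools installed, roughly like this (D: is a placeholder for the OEM media).

        # Sketch only: list the editions (image indexes) contained in install.wim.
        # D:\sources\install.wim is a placeholder path to the OEM media.
        dism /Get-WimInfo /WimFile:D:\sources\install.wim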

    Read the article

  • How to lock Windows 7 from installing anything

    - by Andy
    A family member continually needs me to reinstall his PC after it gets viruses and spyware. He claims that he never downloads anything, but the evidence suggests otherwise - unless there is some way that watching 'videos' can get spyware on there. AFAIK the computer is kept up to date. Possible solutions:
    1. Make a login where it's impossible for him to install anything. A Windows 7 standard account doesn't appear to be enough - is a standard account enough here? I tried this once before and he still seemed to get IE toolbars installed up the wazoo.
    2. Somehow make an automated image where, if he 'messes up', or even on log off, the computer restores the whole drive image. Similar to what I've seen at Kinko's.
    3. Something I've not thought of...
    And I know you are all going to say 'stop fixing it, you are an enabler'... yes, I know, but I'm not going to have another fight with my wife over helping her family... right now I'm doing this to keep her happy, not the idiot with the 'video' addiction ;-) So the name of the game is minimizing my overhead.

    Read the article

  • Prohibit installers from modifying Windows Firewall rules

    - by Sysadmin
    Some application installers tamper with the Windows Firewall rules. I would like to prohibit such automated modifications of the Windows Firewall rules on Windows 7 machines (which use the Windows 7 version of Windows Firewall). Is there some setting that would accomplish this, or would it be necessary to resort to hooking the Windows Firewall API? I would like to prevent these modifications from being made at all, rather than backing up the firewall rules before the installation and restoring them afterward. A TechNet article indicates that there is no way to prevent installers from accessing the Windows Firewall API, but that article pertains to the Windows XP version of Windows Firewall. The Windows 7 version of Windows Firewall is newer and much-improved over the Windows XP incarnation, so it is unclear whether that advice is still pertinent. A similar SuperUser question had received a couple of responses, but neither response answered the question, likely because they misunderstood that question due to the way it was worded. I hope that I have explained this problem clearly. Don't hesitate to ask if you need any clarification.

    Read the article

  • Problem recreating BCD on Windows 7 64bit - The requested system device cannot be found

    - by Domchi
    An NVIDIA driver upgrade crashed my Windows 7 installation, so I'm working to undo the damage.
    What I can do: I can boot the Windows install from the USB drive, and I can boot the Hiren's Boot CD. Although automated Windows repair fails, I can get to a command prompt when I boot the Windows install from the USB drive, and I can see my drive and all my data.
    What I cannot do: I cannot boot into Windows - I get this message: "Windows failed to start. A recent hardware or software change might be the cause. To fix the problem: 1. insert windows cd and run a repair your computer option. File: /boot/bcd Status: 0xc000000f Info: an error occurred while attempting to read the boot configuration data."
    It seems that something is wrong with my /Boot/BCD, so I'm trying to recreate it from scratch. I've tried all the methods detailed here (including Windows repair, which fails), and I'm left with the last one (near the bottom of that page). When I type the following command as in the tutorial:
        bcdedit.exe /import c:\boot\bcd.temp
    ...it fails with the following error: "The store import operation has failed. The requested system device cannot be found." Many Google results say that I must use diskpart to set my partition active; however, it's already set as active. Also, when I try this:
        bcdedit /enum
    it fails with a similar message: "The boot configuration data store could not be opened. The requested system device cannot be found." Does anyone know what that error message means, and what the requested system device is? I'd like to avoid having to reinstall Windows, since all the files on disk seem to be fine.
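    For completeness, the sequence often suggested from the install media's command prompt, offered here only as a hedged sketch; C: is a placeholder for wherever the Windows partition actually mounts in the recovery environment.

        # Sketch only: rebuild the boot structures from the Windows 7 install media command prompt.
        # C:\Windows is a placeholder; the recovery environment may mount the system volume elsewhere.
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd
        bcdboot C:\Windows /s C: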

    Read the article

  • Network monitoring tools with API features

    - by Kev
    We use ks-soft's Advanced Host Monitor package to monitor around 2000 items on our network. We think it's great, the chap that supports it is fantastic, and the product is fast, stable and mature, but I feel that as we grow as a company it's beginning to show some friction points in the area of integration with our back-office admin systems. One of the things we'd like to do is be able to add new tests to whatever monitoring tool we use via an API. For example, when orders for servers come in from our retail interface, the server gets built automatically, and as part of the automated build process we'd like to automatically add new tests to the network monitoring systems. HostMonitor has some support for this via a feature called HM Script, but we're starting to encounter some speed bumps:
    - we can't add new operators/users
    - we can't define new "Action Profiles" - these are the actions to be taken when a test goes good or bad
    What we love about HostMonitor, though, are the Action Profiles. For example, if a Windows IIS box goes bad, our action profile for a bad test does something like:
    1. Check host again (one time)
    2. Wait another 30 seconds, then test again
    3. Try restarting the app pool on the remote machine (up to two times)
    4. Send an email to ops about the restart failure
    5. Try restarting IIS on the remote machine (up to four times)
    6. Page duty admin (up to 5 times - stops after duty admin ACKs alert)
    7. Page backup duty admin (5 times - stops after duty admin ACKs alert)
    I'm starting to look around at other network monitoring tools, and I'm looking for:
    - a comprehensive API to be able to add/remove/control tests and test "action profiles" and operators (not just plugins; we need control and admin interfaces)
    - the ability to have quite detailed action/escalation profiles (and to define these via an API)
    I've looked at Nagios and Icinga, but I can't seem to glean from their documentation whether we could have these features or not, or if we could, how much work would be involved to implement/customise them. Can anyone provide any advice, guidance or experiences?

    Read the article

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities in the installed packages. FreeBSD does this with the ports-mgmt/portaudit port. RedHat provides yum-plugin-security, which can check for vulnerabilities by their Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security; I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily, but I don't think this handles security updates only. CentOS does not support yum-plugin-security. I monitor the CentOS and Scientific Linux mailing lists for updates, but this is tedious and I want something that can be automated. For those of us who maintain CentOS and SL systems, are there any tools which can:
    1. Automatically (programmatically, via cron) inform us if there are known vulnerabilities with my current RPMs.
    2. Optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be "yum update-minimal --security" on the command line?
    I have considered using yum-plugin-changelog to print out the changelog for each package, and then parse the output for certain strings. Are there any tools which do this already?

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all identical hardware. What I'd like to do is install Windows 7 (64-bit), join it to the domain, etc., install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so the machine can be restored back to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago, and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc.; this is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?
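    For what it's worth, the sysprep step itself is usually the small part; a hedged sketch of generalizing the reference machine before capturing it with whatever imaging tool gets chosen (the unattend.xml path is an assumption):

        # Sketch only: generalize the reference install so clones get new SIDs and prompt for a computer name.
        # The unattend.xml path is an assumption; omit /unattend to take the interactive OOBE defaults.
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\System32\sysprep\unattend.xml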

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D. I can store and retrieve files like a charm using tar; both "tar cf /dev/st0 somedirectory" and "tar xf /dev/st0" work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like "dd if=/dev/VG/LV bs=64k of=/dev/st0" to achieve this, but there seem to be various problems associated with this approach.
    Firstly, I would like to be able to store more than 1 LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes.
    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable; I would have to set aside huge amounts of scratch space.
    Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: "mbuffer -t -m 10M -p 80 -f -o $TAPE". So I've tried this: "dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0" (note the added "-d 64k" to account for the 64k block size of the tape). However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?

    Read the article

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient. There are too many steps, and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings.
    EDIT: "Simplest Automated Analysis and Chart Generation Tool?" The above answer references Spotfire and Tableau, each of which has a free 14- and 30-day trial. Each program is nicely streamlined and designed. I'm looking for a program between this quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
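    On the CSV-combining step specifically, a hedged sketch of doing the merge outside the spreadsheet; the folder and the assumption that every file shares the same header row are mine, not part of the original workflow.

        # Sketch only: merge every CSV in the current folder into one file for import elsewhere.
        # Assumes all the files share the same header row.
        Get-ChildItem *.csv | ForEach-Object { Import-Csv $_.FullName } |
            Export-Csv combined.csv -NoTypeInformation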

    Read the article

  • Windows Scheduled Startup Task doesn't appear to be fully working but why?

    - by Devtron
    I originally tried to use Group Policy to enforce a startup script to run at startup. My startup script is a .CMD file, which calls 10 .exe files. Using Group Policy I could never get this to work, so I looked into using Scheduled Tasks. And here I am. I have tried two different versions of my script (for syntax purposes). I originally thought my syntax could be bad, so I tried a few approaches. Neither works.
    My #1 .CMD file approach's commands look similar to this:
        start "this is my title" /D "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"
    My #2 .CMD file approach's commands look similar to this (it invokes a shortcut file):
        rundll32 shell32.dll,ShellExec_RunDLL "C:\Somepathhere\bin\Virtual Workflow.lnk" ^
    Both of these scripts work fine if I manually run them, either by running the .CMD file or by manually forcing the Scheduled Tasks MSC console to "Run" the task. The manual process seems to work fine, but automated it does not. My scheduled task is set to run at startup and uses "highest privileges" to execute as Admin. At the end of my .CMD script, I added a line to write to a text file, just to prove that the script was being run. That command looks like this:
        echo foo > C:\foo.txt
    When I reboot my server and Scheduled Tasks kicks in, I never get my ten .EXE files to run, but I do get C:\foo.txt on my drive. What gives?
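    As a point of reference, a hedged example of how the same startup task can be registered from an elevated prompt with schtasks; the task name and script path are placeholders, and this simply mirrors the "run at startup with highest privileges" settings described above.

        # Sketch only: register the startup script as a SYSTEM-level task that runs at boot.
        # "RunStartupScript" and C:\Scripts\startup.cmd are placeholders.
        schtasks /Create /TN "RunStartupScript" /TR "C:\Scripts\startup.cmd" /SC ONSTART /RU SYSTEM /RL HIGHEST /F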

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >