Search Results

Search found 30828 results on 1234 pages for 'information rights manage'.


  • Use a media player in Linux just to play files from an iPod device (no sync, no manage, just play)?

    - by Somebody still uses you MS-DOS
    I have an iPod Classic 160 GB that I sync with my machine at home. I use Linux at work, and I want to just plug in my iPod and listen to the tracks, with all the playlists and such. I don't want to sync anything; I just want to listen to the tracks as if I were using the iPod itself. Why? Because this way I can use the USB port. So, I don't want to manage my iPod in Linux, I just want to listen to the tracks on it in Linux, as if it were a local library that happens to live on my iPod. (I've tried gtkpod; it shows my files, but I can't play, shuffle, etc. It would be nice to have a complete audio player that handles everything as if the tracks were a local library.)

    Read the article

  • How to manage SOAP requests to a pool of VMs, each listening on an HTTP port, with a priority value in these requests?

    - by sputnick
    I have a front-end SOAP web server running under Linux. It has to communicate, via HTTP POST requests, with Windows Server VMs that each listen on an HTTP port. The chosen VM should return a report of the task to the SOAP client. The SOAP requests carry a special variable: the priority of the request (a kind of SLA). And here is my question: I'm thinking of using HA software (nginx, HAProxy, Heartbeat...) that can manage priority from this point of view. Is that relevant, or do you think I need to implement a queue myself with some custom development? Example: if there are low-priority SOAP requests in the pipe, the weight given to those VMs should be decreased whenever high-priority SOAP requests arrive at the same time. Any clue will be really appreciated.
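
    If it turns out a custom queue is needed, the core logic is small. Below is a minimal, illustrative Python sketch of a priority dispatcher along the lines described in the question (backend names and the post_to_vm function are hypothetical placeholders, not part of any real product): requests are queued with their SLA priority and the highest-priority request is always handed to a backend VM first.

        import heapq
        import itertools
        import threading

        class PriorityDispatcher:
            """Queue requests with an SLA priority and serve the highest priority first."""

            def __init__(self, backends):
                self._backends = backends        # e.g. ["vm01:8080", "vm02:8080"] (hypothetical)
                self._heap = []                  # entries are (-priority, sequence, request)
                self._seq = itertools.count()    # tie-breaker keeps FIFO order within a priority
                self._lock = threading.Lock()

            def submit(self, request, priority):
                # Negate the priority because heapq is a min-heap.
                with self._lock:
                    heapq.heappush(self._heap, (-priority, next(self._seq), request))

            def dispatch_next(self, post_to_vm):
                # post_to_vm(backend, request) is assumed to perform the HTTP POST against
                # the chosen VM and return its task report; it is not implemented here.
                with self._lock:
                    if not self._heap:
                        return None
                    _, _, request = heapq.heappop(self._heap)
                    backend = self._backends[0]  # replace with real VM selection / weighting
                return post_to_vm(backend, request)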

    Read the article

  • Ways to set up a ZFS pool on a device without the possibility to create/manage partitions?

    - by Karl Richter
    I have a NAS on which I have no way to create and manage partitions (maybe I could with some hacks that I don't want to make). What ways exist to set up multiple ZFS pools with one partition each (for starters I just want to use deduplication)? The setup should work with the NAS, i.e. over the network (I'd mount the images via NFS or CIFS). My ideas and associated issues so far: sparse files mounted over a loop device (specifying a sparse file directly as a ZFS vdev doesn't work, see "Can I choose a sparse file as vdev for a zfs pool?"): the problem is that the name/number of the assigned loop device is anything but constant, and I'm not sure how increasing the number of loop devices via a kernel parameter affects performance (there has to be a reason it's limited to 8 by default, right?)
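
    As a side note on the loop-device-name problem: losetup --find --show asks the kernel for the first free loop device and prints the name it assigned, so a script never has to hard-code /dev/loopN. Here is a hedged Python sketch of the sparse-file approach (paths, sizes and the pool name are invented; it assumes root privileges and that truncate, losetup and the ZFS tools are installed):

        import subprocess

        def create_pool_on_sparse_file(image_path, size, pool_name):
            """Create a sparse file, attach it to a free loop device, build a ZFS pool on it."""
            # Allocate the sparse backing file (e.g. on the NFS/CIFS mount of the NAS).
            subprocess.run(["truncate", "-s", size, image_path], check=True)
            # Let losetup pick the first free loop device and print its name,
            # so we never depend on a fixed /dev/loopN number.
            loop_dev = subprocess.run(
                ["losetup", "--find", "--show", image_path],
                check=True, capture_output=True, text=True,
            ).stdout.strip()
            # Build the pool on the loop device and enable deduplication.
            subprocess.run(["zpool", "create", pool_name, loop_dev], check=True)
            subprocess.run(["zfs", "set", "dedup=on", pool_name], check=True)
            return loop_dev

        # Example (hypothetical paths):
        # create_pool_on_sparse_file("/mnt/nas/tank0.img", "100G", "tank0")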

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, ELB, etc. Operational network access will be via either an ssh bastion host or an openvpn instance. Once on the network (bastion or openvpn), admins use ssh to access the individual instances. From what I can tell, all of the docs seem to depend on a single user with sudo rights and a single public ssh key. But is that really best practice? Isn't it much better to have each user access each host under their own name? So I can deploy accounts and ssh public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at: IAM: it doesn't look like IAM has a method for automatically distributing accounts and ssh keys to VPC instances. IAM via LDAP: IAM doesn't have an LDAP API. LDAP: set up my own LDAP servers (redundant, of course). Bit of a pain to manage, still better than managing on every host, especially as we grow. Shared ssh key: rely on the VPN/bastion to track user activities. I don't love it, but... What do people recommend? NOTE: I moved this over after accidentally posting it on StackOverflow.
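
    For the "deploy accounts and keys to each server" route, the mechanical part is easy to script even if it doesn't solve the management problem. A rough Python sketch using paramiko (hostnames, the admin user and key paths are hypothetical; error handling and idempotence are left out):

        import os
        import paramiko

        def push_key(host, new_user, public_key,
                     admin_user="ec2-user", admin_key="~/.ssh/admin_id_rsa"):
            """Create a user on `host` and install their SSH public key."""
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=admin_user,
                           key_filename=os.path.expanduser(admin_key))
            commands = [
                "sudo useradd -m {u} || true".format(u=new_user),   # create the account if missing
                "sudo mkdir -p /home/{u}/.ssh".format(u=new_user),
                "echo '{k}' | sudo tee -a /home/{u}/.ssh/authorized_keys".format(k=public_key, u=new_user),
                "sudo chown -R {u}:{u} /home/{u}/.ssh".format(u=new_user),
                "sudo chmod 700 /home/{u}/.ssh && sudo chmod 600 /home/{u}/.ssh/authorized_keys".format(u=new_user),
            ]
            for cmd in commands:
                stdin, stdout, stderr = client.exec_command(cmd)
                stdout.channel.recv_exit_status()                    # wait for each command to finish
            client.close()

        # Example (hypothetical hosts):
        # for host in ["10.0.1.10", "10.0.1.11"]:
        #     push_key(host, "alice", "ssh-rsa AAAA... alice@laptop")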

    Read the article

  • What would you use to keep track of all the random information about your LAN/Environment?

    - by agnul
    Say you work in your average anarchy... no appointed IT guy, no office manager whatsoever, just developers working in free-for-all mode and a couple of non-technical people around for admin (non-IT) stuff. What would you use to keep track of: hardware/software inventory; backups of OS images, drivers, and all that; LAN configuration information; passwords for all the systems? Currently I'm thinking of having some sort of wiki + web server on our only ''server-class'' machine, but then I fear I'd be the only one who would go through the hassle of adding information to the wiki... So, have you been through that? What would you suggest?

    Read the article

  • How to set up federated SQL data to display limited information to a Web server in the DMZ?

    - by Pcav
    I have a SQL server behind a firewall. I need to push some limited SQL 2005 information to a Web server in the DMZ so that I do not have to let database queries come all the way into the database server on our internal network. I want to push a small amount of dynamic data to a Web server in the DMZ and lock it down so that our hosted website does not need to come into the internal network for information; I want to put a server in the DMZ that will be the only connection allowed to the SQL database. This DMZ server will be the only server that can have any sort of connection to the back-end database, so the hosting provider just pulls the data from our server in the DMZ...
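
    One common shape for this is a small read-only web service on the DMZ box: it is the only host allowed through the firewall to SQL Server, it runs a fixed set of queries, and the hosted website only ever talks to it over HTTP. A hedged Python sketch of that relay (the connection string, query and field names are invented for illustration; a real deployment would add authentication, caching and TLS):

        from flask import Flask, jsonify
        import pyodbc

        app = Flask(__name__)

        # Only this DMZ host is allowed by the firewall to reach the internal SQL Server.
        CONN_STR = (
            "DRIVER={SQL Server};SERVER=internal-sql;DATABASE=Products;"
            "UID=dmz_reader;PWD=example-password"   # hypothetical read-only account
        )

        @app.route("/products")
        def products():
            # A fixed, read-only query: the hosted website can only ever get this data.
            with pyodbc.connect(CONN_STR) as conn:
                rows = conn.execute("SELECT Id, Name, Price FROM dbo.Product").fetchall()
            return jsonify(products=[
                {"id": r.Id, "name": r.Name, "price": float(r.Price)} for r in rows
            ])

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)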

    Read the article

  • WebCenter Customer Spotlight: Indecopi

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution Summary: Indecopi Optimizes Patent Approval Management and Accelerates Customer Service Times by 40%. Indecopi is a decentralized public agency that promotes the country’s markets and protects consumer rights. It promotes fair and honest competition and safeguards all forms of intellectual property through three directorates: Author’s Rights, Inventions and New Technologies, and Trademarks. The business challenge was to unify the agency’s technology infrastructure to create a business process management strategy, consolidate the organization’s Web platform and improve and automate information services for citizens and businesses, and streamline patent procedures by digitizing documentation. Indecopi optimized patent information services, organized information, provided around-the-clock online access to users, and developed a Web site that provides internal and external users access to DIN information, such as patent documentation, through a user-friendly interface. Indecopi achieved impressive business results by reducing the use of paper files by 50%, accelerating transaction approvals, reducing non-value-added activities by 85%, and accelerating customer service times by 40%. Company Overview: Peru’s Instituto Nacional de Defensa de la Competencia y de la Protección de la Propiedad Intelectual (Indecopi), the National Institute for the Defense of Competition and Protection of Intellectual Property, is a decentralized public agency that promotes the country’s markets and protects consumer rights. It promotes fair and honest competition and safeguards all forms of intellectual property through three directorates: Author’s Rights, Inventions and New Technologies, and Trademarks. Business Challenges: Indecopi's challenge was to unify the agency’s technology infrastructure to create a business process management strategy, starting with the Directorate of Inventions and New Technologies (DIN), consolidate the organization’s Web platform to meet new demands for software and process development, such as for patent applications, and improve and automate information services for citizens and businesses and streamline patent procedures by digitizing documentation. Solution Deployed: Indecopi optimized patent information services with Oracle Business Process Management, automating processes to deliver expedient searches and to create new services, such as alerts to users. They organized information and provided around-the-clock online access to users with Oracle WebCenter Content. In addition, they used Oracle WebLogic Server to develop a Web site that provides internal and external users access to DIN information, such as patent documentation, through a user-friendly interface. Business Results: Indecopi achieved impressive business results: it reduced the use of paper files by 50%; accelerated transaction approvals, reducing non-value-added activities, such as manual document copying to obtain patents, by 85%; and accelerated customer service times by 40% by optimizing procedures, such as searches and online information related to granting patents. “Oracle Business Process Manager has been a paradigm shift in process management. 
By digitalizing and automating our patents information services, we can now manage everything in the simplest way possible, expanding our options for the creation of new services.” Sergio Rodríguez, Assistant Director, Inventions and New Technologies Directorate, Instituto Nacional de Defensa de la Competencia y la Propiedad Intelectual Additional Information Indecopi Customer Snapshot Oracle WebCenter Content

    Read the article

  • What web-based tool would allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy that requires individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized_keys file. I am using chef to sync the user accounts across machines; however, our previous method of using webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using webmin clustered users and groups. However, looking at the currently installed webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords. (It's possible, but it's not easy - some functionality is in the usermin module, or it would require some tedious steps.) Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use chef to do that part if the accounts are created correctly on the "master" server.
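
    The "pretty much dedicated" interface described at the end could be very small indeed. A stripped-down Python/Flask sketch of the paste-a-public-key part (no authentication or input validation, hypothetical paths; it assumes the app runs on the master server with enough privilege to write users' authorized_keys files):

        import os
        import pwd
        from flask import Flask, request

        app = Flask(__name__)

        FORM = """
        <form method="post">
          Username: <input name="username"><br>
          Public key: <textarea name="pubkey" rows="4" cols="70"></textarea><br>
          <input type="submit" value="Add key">
        </form>
        """

        @app.route("/", methods=["GET", "POST"])
        def add_key():
            if request.method == "POST":
                username = request.form["username"].strip()
                pubkey = request.form["pubkey"].strip()
                entry = pwd.getpwnam(username)                # fails loudly if the user does not exist
                ssh_dir = os.path.join(entry.pw_dir, ".ssh")
                os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
                os.chown(ssh_dir, entry.pw_uid, entry.pw_gid)
                auth_keys = os.path.join(ssh_dir, "authorized_keys")
                with open(auth_keys, "a") as f:
                    f.write(pubkey + "\n")
                os.chmod(auth_keys, 0o600)
                os.chown(auth_keys, entry.pw_uid, entry.pw_gid)
                return "Key added for %s" % username
            return FORM

        if __name__ == "__main__":
            app.run(port=8000)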

    Read the article

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs the permissions required to start/stop other application pools. I've created a user and set the app pool identity to custom; however, the web app still can't start/stop the app pools. I get the following error: System.UnauthorizedAccessException: Filename: redirection.config Error: Cannot read configuration file due to insufficient permissions at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath) at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath) at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection() at Microsoft.Web.Administration.ServerManager.get_ApplicationPools() Discussion here suggests setting the application pool to local system or administrator; this does work, but I don't want to do this for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local administrators group, but initially this didn't work, and giving the user read/write/mod permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works; however, this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how I can apply them? An answer to my problem (but a different question) is here, but to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions"; does anyone know how to do this, if indeed these are the permissions I require?

    Read the article

  • Why is XML deserialization not throwing exceptions when it should?

    - by chobo2
    Hi Here is some dummy xml and dummy xml schema I made. schema <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.domain.com" xmlns="http://www.domain.com" elementFormDefault="qualified"> <xs:element name="vehicles"> <xs:complexType> <xs:sequence> <xs:element name="owner" minOccurs="1" maxOccurs="1"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="2" /> <xs:maxLength value="8" /> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="Car" minOccurs="1" maxOccurs="1"> <xs:complexType> <xs:sequence> <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="Truck"> <xs:complexType> <xs:sequence> <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="SUV"> <xs:complexType> <xs:sequence> <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> <xs:complexType name="CarInfo"> <xs:sequence> <xs:element name="CarName"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="1"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="CarPassword"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="6"/> <xs:maxLength value="50"/> </xs:restriction> </xs:simpleType> </xs:element> <xs:element name="CarEmail"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:pattern value="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"/> </xs:restriction> </xs:simpleType> </xs:element> </xs:sequence> </xs:complexType> </xs:schema> xml sample <?xml version="1.0" encoding="utf-8" ?> <vehicles> <owner>Car</owner> <Car> <Information> <CarName>Bob</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> <Information> <CarName>Bob2</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> </Car> <Truck> <Information> <CarName>Jim</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> <Information> <CarName>Jim2</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> </Truck> <SUV> <Information> <CarName>Jane</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> <Information> <CarName>Jane</CarName> <CarPassword>123456</CarPassword> <CarEmail>[email protected]</CarEmail> </Information> </SUV> </vehicles> Serialization Class using System; using System.Collections.Generic; using System.Xml; using System.Xml.Serialization; [XmlRoot("vehicles")] public class MyClass { public MyClass() { Cars = new List<Information>(); Trucks = new List<Information>(); SUVs = new List<Information>(); } [XmlElement(ElementName = "owner")] public string Owner { get; set; } [XmlElement("Car")] public List<Information> Cars { get; set; } [XmlElement("Truck")] public List<Information> Trucks { get; set; } [XmlElement("SUV")] public List<Information> SUVs { get; set; } } public class CarInfo { public CarInfo() { Info = new List<Information>(); } [XmlElement("Information")] public List<Information> Info { get; set; } } public class Information { [XmlElement(ElementName = "CarName")] public string CarName { get; set; } [XmlElement("CarPassword")] public string 
CarPassword { get; set; } [XmlElement("CarEmail")] public string CarEmail { get; set; } } Now I think this should all validate. If not, assume it is right, as my real file does work and this is what this dummy one is based on. Now my problem is this: I want to enforce as much as I can from my schema. For example, the "owner" tag must be the first element and should show up one time and only one time (minOccurs="1" maxOccurs="1"). Right now I can remove the owner element from my dummy XML file and deserialize it, and it will go on its happy way, convert it to an object, and just leave that property as null. I don't want that; I want it to throw an exception or something saying this does not match what was expected. I don't want to have to validate things like that after deserialization. Same goes for the <Car></Car> tag: I want it to always appear even if there is no information, yet I can remove it too and deserialization is happy with that. So what tags do I have to add to make my serialization class know that these things are required, and throw an exception if they are not found?

    Read the article

  • Does waitpid yield valid status information for a child process that has already exited?

    - by dtrebbien
    If I fork a child process, and the child process exits before the parent even calls waitpid, then is the exit status information that is set by waitpid still valid? If so, when does it become not valid; i.e., how do I ensure that I can call waitpid on the child pid and continue to get valid exit status information after an arbitrary amount of time, and how do I "clean up" (tell the OS that I am no longer interested in the exit status information for the finished child process)? I was playing around with the following code, and it appears that the exit status information is valid for at least a few seconds after the child finishes, but I do not know for how long or how to inform the OS that I won't be calling waitpid again: #include <assert.h> #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/wait.h> int main() { pid_t pid = fork(); if (pid < 0) { fprintf(stderr, "Failed to fork\n"); return EXIT_FAILURE; } else if (pid == 0) { // code for child process _exit(17); } else { // code for parent sleep(3); int status; waitpid(pid, &status, 0); waitpid(pid, &status, 0); // call `waitpid` again just to see if the first call had an effect assert(WIFEXITED(status)); assert(WEXITSTATUS(status) == 17); } return EXIT_SUCCESS; }

    Read the article

  • How to use javascript to get information from the content of another page (same domain)?

    - by hlovdal
    Let's say I have a web page (/index.html) that contains the following <li> <div>item1</div> <a href="/details/item1.html">details</a> </li> and I would like to have some javascript on /index.html to load that /details/item1.html page and extract some information from that page. The page /details/item1.html might contain things like <div id="some_id"> <a href="/images/item1_picture.png">picture</a> <a href="/images/item1_map.png">map</a> </div> My task is to write a greasemonkey script, so changing anything serverside is not an option. To summarize, javascript is running on /index.html and I would like to have the javascript code to add some information on /index.html extracted from both /index.html and /details/item1.html. My question is how to fetch information from /details/item1.html. I currently have written code to extract the link (e.g. /details/item1.html) and pass this on to a method that should extract the wanted information (at first just .innerHTML from the some_id div is ok, I can process futher later). The following is my current attempt, but it does not work. Any suggestions? function get_information(link) { var obj = document.createElement('object'); obj.data = link; document.getElementsByTagName('body')[0].appendChild(obj) var some_id = document.getElementById('some_id'); if (! some_id) { alert("some_id == NULL"); return ""; } return some_id.innerHTML; }

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, etc.

    - by Kirzilla
    Hello. Let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K and all information about the videos is stored in MySQL. The number of daily video views is about 1.5KK. As instruments we have the hard disk drive (text files), MySQL, and Redis. Views: top viewed; top viewed last 24 hours; top viewed last 7 days; top viewed last 30 days; top rated last 365 days. How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate totals (for the last 24 hours) and update statistics in tables. At the beginning of every day we have to do the same but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me because we have to store information about the last 365 days for each video to make correct calculations. Are there any other good methods? Perhaps we have to choose other instruments for this? Thank you.
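
    To make the per-hour idea concrete, here is a rough Python/Redis sketch (key names are invented; it assumes redis-py 3.x signatures): every view increments the video's score in an hourly sorted set, and the "last 24 hours" chart is just a union of the last 24 hourly sets.

        from datetime import datetime, timedelta
        import redis

        r = redis.Redis()

        def record_view(video_id, when=None):
            when = when or datetime.utcnow()
            hour_key = "views:%s" % when.strftime("%Y%m%d%H")     # e.g. views:2008010100
            r.zincrby(hour_key, 1, video_id)                      # redis-py 3.x: zincrby(name, amount, value)
            r.expire(hour_key, 60 * 60 * 24 * 8)                  # keep hourly data a bit longer than a week

        def top_viewed_last_24h(limit=10):
            now = datetime.utcnow()
            keys = ["views:%s" % (now - timedelta(hours=i)).strftime("%Y%m%d%H") for i in range(24)]
            r.zunionstore("views:last24h", keys)                  # sum the 24 hourly sets
            return r.zrevrange("views:last24h", 0, limit - 1, withscores=True)

        # The 7-day and 30-day charts can be built the same way from daily sets,
        # which themselves are unions of the hourly sets rolled up once per day.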

    Read the article

  • Open source app to manage and run commands on cloud servers? [closed]

    - by Mark Theunissen
    I'm creating a SaaS platform, and I need a component / library that can create, delete and store the connection details for cloud servers. It also needs to support executing shell commands on these servers and returning the response to the caller. I want a central database of servers and their configuration, plus the ability to reach out and manage the servers via SSH execution of bash scripts. I don't want something that needs agents on every server like Chef. For example, this command is received by the hypothetical application: CREATE USER server = server12345 name = myuser It's translated into the following set of actions and executed by the app, which knows how to connect to server12345, and how to create a user on that server: $ ssh root@server12345 $ adduser myuser And returns the output from the shell: Added user myuser. I've done research on Google and can't quite find something that does this already. I've found: fabric: This handles executing the shell commands very elegantly, and can take multiple server definitions, but it's supposed to be a deployment tool, so it doesn't do everything that would be required above - for example, it doesn't have a daemon mode where it listens for commands - it expects to be executed on the shell. It also can't provide the central database functionality. libcloud: This library can handle the server admin (CRUD) part, but doesn't have a command interface daemon either, and doesn't let you execute commands on the servers. I guess I need something that is a combination of libcloud, fabric and django for an API. Or something else that does the same thing regardless of language. Overmind: Overmind is a GUI and wrapper around libcloud, but doesn't support the command execution part. What am I missing here?
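
    To frame the gap: the "central database plus SSH execution" core described above fits in a few dozen lines of Python; the hard parts are the daemon/API, authentication and provisioning around it. A hedged sketch (the schema, hostnames and commands are invented for illustration):

        import sqlite3
        import subprocess

        def init_db(path="servers.db"):
            db = sqlite3.connect(path)
            db.execute("""CREATE TABLE IF NOT EXISTS servers (
                              name TEXT PRIMARY KEY,
                              host TEXT NOT NULL,
                              ssh_user TEXT NOT NULL DEFAULT 'root')""")
            return db

        def register(db, name, host, ssh_user="root"):
            db.execute("INSERT OR REPLACE INTO servers VALUES (?, ?, ?)", (name, host, ssh_user))
            db.commit()

        def run_on(db, name, command):
            """Look a server up by name and run a shell command on it over SSH."""
            row = db.execute("SELECT host, ssh_user FROM servers WHERE name = ?", (name,)).fetchone()
            if row is None:
                raise KeyError("unknown server: %s" % name)
            host, user = row
            result = subprocess.run(["ssh", "%s@%s" % (user, host), command],
                                    capture_output=True, text=True)
            return result.returncode, result.stdout, result.stderr

        # Example of the CREATE USER flow from the question (hypothetical host):
        # db = init_db()
        # register(db, "server12345", "203.0.113.10")
        # code, out, err = run_on(db, "server12345", "adduser myuser")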

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if: • It is cost effective• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)• It solves the right problem With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them: • Specific details of security vulnerabilities • Including exploit information or technical details of the vulnerability • Whether or not there is any mitigation available (as in a patch) PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret. 
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerability Enumeration (CVE) and Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist then in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have know they were being asked to tell-all to PCI and put their customers at risk, to do one of the following: • Contact PCI with your concerns• Contact Oracle (we are looking for vendors to sign our statement of concern)• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions. 
By Way of Explanation…

Entirely unrelated to PCI, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer.

* Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • Why is the Active Directory security setting for "Write Personal Information" automatically reset?

    - by Holistic Developer
    In my Small Business Server 2003 environment, I would like to be able to have users manage their own delegate permissions for their Exchange mailboxes. By default, the Outlook delegate feature will not work unless I go to the user object in Active Directory and grant Allow on "Write Personal Information" to SELF. This will work temporarily, but something seems to reset this value shortly afterward. What would cause this automatic reset?

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are two points that we had covered earlier: the DMV related to wait stats resets when we reset SQL Server services, and it also resets when we manually reset the wait types. However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

    -- Create Table
    CREATE TABLE [MyWaitStatTable](
        [wait_type] [nvarchar](60) NOT NULL,
        [waiting_tasks_count] [bigint] NOT NULL,
        [wait_time_ms] [bigint] NOT NULL,
        [max_wait_time_ms] [bigint] NOT NULL,
        [signal_wait_time_ms] [bigint] NOT NULL,
        [CurrentDateTime] DATETIME NOT NULL,
        [Flag] INT
    )
    GO
    -- Populate Table at Time 1
    INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
    FROM sys.dm_os_wait_stats
    GO
    ----- Desired Delay (for one hour)
    WAITFOR DELAY '01:00:00'
    -- Populate Table at Time 2
    INSERT INTO MyWaitStatTable
    ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
    FROM sys.dm_os_wait_stats
    GO
    -- Check the difference between Time 1 and Time 2
    SELECT T1.wait_type,
        T1.wait_time_ms Original_WaitTime,
        T2.wait_time_ms LaterWaitTime,
        (T2.wait_time_ms - T1.wait_time_ms) DifferenceWaitTime
    FROM MyWaitStatTable T1
    INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
    WHERE T2.wait_time_ms > T1.wait_time_ms
        AND T1.Flag = 1
        AND T2.Flag = 2
    ORDER BY DifferenceWaitTime DESC
    GO
    -- Clean up
    DROP TABLE MyWaitStatTable
    GO

    If you look at the script, you will notice I have used an additional column called Flag. I use it to record when I captured the wait stats, and I then use it in my SELECT query to retrieve the wait stats related to that time group. Many times, I capture more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
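
    One possible extension, offered here as an editor's sketch rather than something from the original post: if you would rather drive the interval capture from outside SQL Server (instead of leaving a WAITFOR DELAY batch running), a small Python script using pyodbc can insert one snapshot per interval, tagging each with its own Flag value. The connection string, database name and interval below are assumptions to adjust for your environment, and MyWaitStatTable must already exist.

    import time
    import pyodbc

    # Assumed connection string; point it at the database that holds MyWaitStatTable.
    CONN_STR = ("DRIVER={SQL Server};SERVER=localhost;"
                "DATABASE=YourDatabase;Trusted_Connection=yes")

    CAPTURE_SQL = """
    INSERT INTO MyWaitStatTable
        ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],
         [signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],
           [signal_wait_time_ms], GETDATE(), ?
    FROM sys.dm_os_wait_stats
    """

    def capture_snapshots(count=6, interval_seconds=3600):
        # Take `count` snapshots, one per interval, tagging each row with its Flag value.
        with pyodbc.connect(CONN_STR) as conn:
            cursor = conn.cursor()
            for flag in range(1, count + 1):
                cursor.execute(CAPTURE_SQL, flag)
                conn.commit()
                if flag < count:
                    time.sleep(interval_seconds)

    if __name__ == "__main__":
        capture_snapshots()

    The comparison query from the post then works unchanged for any pair of Flag values (for example, T1.Flag = 2 AND T2.Flag = 3).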

    Read the article

  • How do I Fix SQL Server error: Order by items must appear in the select list if Select distinct is specified

    - by Paula DiTallo 2007-2009 All Rights Reserved
    There's more than one reason why you may receive this error, but the most common reason is that your order by column list doesn't correlate with the columns specified in your select list when you happen to be using DISTINCT. This is usually easy to spot and resolve. A more obscure reason may be that you are using a function around one of the selected columns -- but omitting to use the same function around the same column name in the order by statement. Here's an example:

    select distinct upper(columnA)
    from [evaluate].[testTable]
    order by columnA asc

    This statement will cause the "Order by items must appear in the select list if SELECT DISTINCT is specified." error to appear not because distinct was used, but because the order by statement did not utilize the upper() function around columnA. To correct this error, do this:

    select distinct upper(columnA)
    from [evaluate].[testTable]
    order by upper(columnA) asc

    Read the article

  • Is this information about me as a programmer concise and good enough?

    - by Nick Rosencrantz
    I would not only like you to review my resume, but also tell me what you think Google means by this answer: "We don't look at personal letters and we like your resume and we can recommend you internally but we need measurable experience." What is meant by "measurable" here? Do they mean something like O(1) compared to O(n), selling an entire company, grades, or what? This is what I sent:

    Curriculum vitae Nick Rosencrantz
    Competence: System development, web development
    Technical competence: Java, Javascript, HTML, XML, CSS, AJAX, PHP, SQL, Python
    Employments:
    2012- Mobile Innovation AB, System Developer. IT consultant (Java programmer)
    2011-2012 Bnano International Ltd, System Developer. Python programming in Google App Engine
    2008-2009 Sweden Island AB, System Developer. Programming C++ and Java EE components
    2003-2007 Studies, Stockholm School of Economics. During studies worked as network technician at Effnet AB
    2000-2002 Jadestone AB, System Developer. System development in Java/J2EE. In 2001: KTH, Assistant. Teaching application server programming in Java Enterprise + weblogic + Informix.
    1999-2000 Studies, KTH
    1996-1998 Spray.se, System development, Researcher
    1995-1995 Finance broker. Backoffice work with financial instruments
    1993-1994 Computer & Audio-Technical Systems AB, Programming, summer job
    Education/Courses: Stockholm School of Economics, Master of Science diploma; KTH, Computer Science undergraduate studies
    Languages: Swedish, English, also some German and French
    Born 1973, Swedish citizen

    I also have a project-based CV which is several pages long, but the above is about what I was aiming for in the beginning when I was looking for a job. I now have employment as an IT consultant in central Stockholm, and I want to make my resume concise and also understand what Google meant by their answer. (It was a Swedish Google employee who recruited via LinkedIn from my Stockholm School of Economics groups, since that is a small elite economics school where I took my M.Sc., and KTH is one of the largest universities in northern Europe. I sent her a link with my CV, and she said she could promote me internally if I added "measurable experience" -- and I've been thinking for weeks about what that may mean.)

    Read the article

  • Tool to convert from XML to CSV and back to XML without losing information?

    - by Lanaru
    I would like to convert an XML file into CSV, and then be able to convert that CSV back into the original XML file. Is there a tool out there that makes this possible? The reason I want to do this is that I want to generate a ton of data to be stored in an xml file that my program will parse. I can easily generate a lot of data with a CSV file through the use of Excel macros, so that's why I'm looking for a tool to convert between these two formats. Any alternative suggestions are greatly appreciated.
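
    One possible approach, sketched here on the assumption that the XML is a flat list of records (the element names, file names and column layout below are hypothetical, and deeply nested XML would need a more elaborate mapping): flatten each record's child elements into CSV columns with Python's standard library, edit or generate the CSV in Excel, and then rebuild the XML from the CSV.

    import csv
    import xml.etree.ElementTree as ET

    def xml_to_csv(xml_path, csv_path):
        # Assumes <records><record><field>value</field>...</record>...</records>
        records = [{child.tag: (child.text or "") for child in rec}
                   for rec in ET.parse(xml_path).getroot()]
        fieldnames = sorted({key for rec in records for key in rec})
        with open(csv_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(records)

    def csv_to_xml(csv_path, xml_path, root_tag="records", record_tag="record"):
        # Column headers become element names, so they must be valid XML tag names.
        root = ET.Element(root_tag)
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                rec = ET.SubElement(root, record_tag)
                for key, value in row.items():
                    ET.SubElement(rec, key).text = value
        ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

    # Example round trip (hypothetical file names):
    # xml_to_csv("data.xml", "data.csv")   # flatten for editing in Excel
    # csv_to_xml("data.csv", "data.xml")   # rebuild the XML afterwards

    The round trip preserves element text but not attributes, comments or the original field order within a record, which is usually acceptable when the XML is simply a container for generated tabular data.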

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >