Search Results

Search found 17610 results on 705 pages for 'specific'.


  • How do you do HTML form testing without real user input simulation?

    - by justjoe
    This question is like this one, except it's about testing PHP via the browser. It's about testing your form input. Right now, I have a form on a single page with 12 input boxes. Every time I test the form, I have to fill in those 12 input boxes in my browser. I know it's not a specific coding question; it's more about how to test your form directly. So, how do you do repetitive testing without it consuming too much of your time?
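
    A minimal sketch of one way to automate the submission, assuming a hypothetical form.php endpoint and field names; PHP's cURL extension can post all twelve values without touching the browser:

        <?php
        // Hypothetical endpoint and field names -- adjust to match the real form.
        $url = 'http://localhost/form.php';

        $fields = array();
        for ($i = 1; $i <= 12; $i++) {
            $fields['field' . $i] = 'test value ' . $i; // canned test input
        }

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);

        echo $response; // inspect what the form handler returned

    Re-running the script with different canned values (or wrapping it in a PHPUnit test) replaces the manual typing.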

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background

    What Are Smart Grids?
    Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to:
    - Link customers to information that helps them manage consumption and use electricity wisely.
    - Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages.
    - Provide utilities with information that helps them improve performance and control costs.

    What Is Driving Smart Grid Development?

    Environmental Impact
    Smart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers' power needs with fewer generating plants, fewer transmission and distribution assets, and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change. Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions.

    Costs
    The ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management, and large T&D infrastructure expansion costs are not passed on to customers. Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures, and in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume, but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes.

    Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy-efficient appliances. Efficiency may or may not lower customer bills, because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it.

    Utility Operations
    Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long "wish lists" of projects and applications they would like to fund in order to improve customer service or ease staff's burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods. Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result, for instance increased productivity and fuel savings from better routing. Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits.

    What Is Smart Grid Business Software?
    Smart Grid business software gathers data from a Smart Grid and uses it to improve a utility's business processes. It also helps utilities provide relevant information to customers, who can then use it to reduce their own consumption and improve their environmental profiles.

    Smart Grid Business Software Minimizes the Impact of Peak Demand
    Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand:
    - The more assets a utility must build that are used only for brief periods, an inefficient use of capital.
    - The higher the utility's risk profile rises, given the uncertainties surrounding the time needed for permitting, building, and recouping costs.
    - The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods.

    Smart Grids enable a variety of programs that reduce peak demand, including:
    - Time-of-use pricing and critical peak pricing: programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets.
    - Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs.
    - Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts.

    Smart Grids also help utilities manage peaks with existing assets by enabling:
    - Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint the need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or "over-build".
    - Better peak demand analysis. As a result, distribution planners can better size equipment (e.g. transformers) to avoid over-building; operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks, again reducing the tendency to over-build; and supply managers can more closely match procurement with delivery, fine-tuning supply portfolios, reducing the tendency to over-contract for peak supply, and reducing the need to resort to spot market purchases during high peaks.

    Smart Grids can help lower the cost of remaining peaks by standardizing interconnections for new distributed resources (such as electricity storage devices) and placing those interconnections where needed to support anticipated grid congestion.

    Smart Grid Business Software Lowers the Cost of Field Services
    By processing Smart Grid data through their business software, utilities can reduce such field costs as:
    - Vegetation management. Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming.
    - Service vehicle fuel. Many utility service calls are "false alarms." Checking meter status before dispatching crews prevents many unnecessary "truck rolls." Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it.

    Smart Grid Business Software Ensures Regulatory Compliance
    Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by:
    - Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible ("non-core") customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators.
    - Monitoring regulations imposed on customers, such as maximum use during specific time periods.
    - Using accurate time-stamped event history derived from intelligent devices distributed throughout the Smart Grid to monitor and report reliability statistics and risk compliance.
    - Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9).

    Smart Grid Business Software Strengthens Utilities' Connection to Customers While Reducing Customer Service Costs
    During outages, Smart Grid business software can:
    - Identify outages more quickly. The software uses sensors to pinpoint outages and nested outage locations, and it also permits utilities to ensure outage resolution at every meter location.
    - Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers.
    - Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitate display of outage maps for customer and public-service use.

    Smart Grids can significantly reduce the cost to:
    - Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property or to disconnect customers for nonpayment.
    - Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain.
    - Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption.

    Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can:
    - Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off "high bill" complaints to the contact center. Note that such "high usage" or "additional charges apply because you are out of range" notices, frequently sent via text message, are already common among mobile phone providers.
    - Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.).
    - Help customers find and reduce causes of over-consumption, with no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them.

    Where permitted, Smart Grids can open the doors to such new utility service offerings as:
    - Monitoring properties. Landlords reduce the costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
    - Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement.
    - Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.
    - Prepayment plans that do not need special meters.

    Smart Grid Business Software Helps Customers Control Energy Costs
    There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance:
    - Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites.
    - Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers.
    - Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be the discovery of electrical problems that can be resolved through rewiring or maintenance, before more serious fires or accidents happen.

    Smart Grid Business Software Facilitates Use of Renewables
    Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into traditional grids, which handle only fully dispatchable generation. Smart Grid business software helps solve many of these issues by:
    - Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed.
    - Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion.
    - Facilitating modeling and monitoring of locally generated supply from renewables and thus helping to maximize its use.
    - Increasing the efficiency of "net metering" (through which utilities can use electricity generated by customers) by providing data for analysis and by integrating the production and consumption aspects of customer accounts.

    During non-peak periods, such techniques enable utilities to increase the percentage of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity.

    Conclusion
    Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water. Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission, but they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove an even more formidable task. Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings: benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience.

    This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.

    Read the article

  • C# Resource Array

    - by user472875
    Hi, I have a bunch of pictures I'm using in a C# project, and I'm trying to initialize them all for later use. There are over 50 of them and they all have the same name format Properties.Resources._#, where # is the picture number. What I'm trying to do is something like:

        for (int i = 0; i < 100; i++)
        {
            pics[i] = Properties.Resources._i;
        }

    How would I go about embedding the index into the name? Thanks, and happy holidays. EDIT: Just realized that if I had a way to embed the index in the name, I could just have a function that returns the specific picture based on the number given, so that would work too.
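
    For what it's worth, a sketch of the name-based lookup the edit hints at: the generated Properties.Resources class exposes a ResourceManager, so the index can be embedded in the resource name at runtime. The _0.._99 key format and the Image type are assumptions; if GetObject returns null, the stored key may lack the leading underscore.

        using System.Drawing;

        // Fetch a picture by number via the generated ResourceManager.
        // Assumes the resources are stored under keys "_0" .. "_99".
        static Image GetPicture(int i)
        {
            return (Image)Properties.Resources.ResourceManager.GetObject("_" + i);
        }

        // Fill the array without writing out each property by hand.
        Image[] pics = new Image[100];
        for (int i = 0; i < 100; i++)
        {
            pics[i] = GetPicture(i);
        }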

    Read the article

  • RadControl DateTimePicker Selecting new time doesn't remove highlight from previous selection

    - by Jason Beck
    This is not browser-specific; the behavior exists in Firefox and IE. The RadControl is being used within a User Control in a SiteFinity site. Very little customization has been done to the control.

        <telerik:RadDateTimePicker ID="RadDateTimePicker1" runat="server" MinDate="2010/1/1" Width="250px">
            <ClientEvents></ClientEvents>
            <TimeView starttime="08:00:00" endtime="20:00:00" interval="02:00:00"></TimeView>
            <DateInput runat="server" ID="DateInput"></DateInput>
        </telerik:RadDateTimePicker>

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                RadDateTimePicker1.MinDate = DateTime.Now;
            }
        }

    Read the article

  • QTreeWidget insertTopLevelItem - index given not accurately displayed in Tree?

    - by mleep
    I am unable to properly insert a QTreeWidgetItem at a specific index. In this case I am removing all QTreeWidgetItems from the tree, doing a custom sort on their date objects, and then inserting them back into the QTreeWidget. However, upon inserting (even one at a time) the QTreeWidgetItem is not inserted into the correct place. The code below prints out:

        index 0:  0
        index 0:  1   index 1:  0
        index 0:  2   index 1:  1   index 2:  0
        index 0:  3   index 1:  2   index 2:  0   index 3:  1
        index 0:  4   index 1:  2   index 2:  0   index 3:  1   index 4:  3

        print 'index 0: ', self.indexOfTopLevelItem(childrenList[0])
        self.insertTopLevelItem(0, childrenList[1])
        print 'index 0: ', self.indexOfTopLevelItem(childrenList[0]), ' index 1: ',\
            self.indexOfTopLevelItem(childrenList[1])
        self.insertTopLevelItem(0, childrenList[2])
        print 'index 0: ', self.indexOfTopLevelItem(childrenList[0]), ' index 1: ',\
            self.indexOfTopLevelItem(childrenList[1]), ' index 2: ', \
            self.indexOfTopLevelItem(childrenList[2])
        self.insertTopLevelItem(0, childrenList[3])
        print 'index 0: ', self.indexOfTopLevelItem(childrenList[0]), ' index 1: ',\
            self.indexOfTopLevelItem(childrenList[1]), ' index 2: ',\
            self.indexOfTopLevelItem(childrenList[2]), 'index 3: ',\
            self.indexOfTopLevelItem(childrenList[3])
        self.insertTopLevelItem(0, childrenList[4])
        print 'index 0: ', self.indexOfTopLevelItem(childrenList[0]),\
            ' index 1: ', self.indexOfTopLevelItem(childrenList[1]),\
            ' index 2: ', self.indexOfTopLevelItem(childrenList[2]),\
            'index 3: ', self.indexOfTopLevelItem(childrenList[3]),\
            'index 4: ', self.indexOfTopLevelItem(childrenList[4])
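
    A sketch of the usual detach-sort-reinsert pattern; calling insertTopLevelItem with items the tree still owns is what tends to scramble the indexes, so every item is taken out first. The date stored under Qt.UserRole is an assumption about how the items carry their dates:

        # Detach every top-level item before re-inserting anything.
        items = []
        while self.topLevelItemCount():
            items.append(self.takeTopLevelItem(0))

        # Custom sort -- here by a date object assumed to be stored in the
        # item's data under Qt.UserRole (PyQt4 wraps it in a QVariant).
        items.sort(key=lambda it: it.data(0, QtCore.Qt.UserRole).toPyObject())

        # Re-insert in one pass; addTopLevelItems appends in list order.
        self.addTopLevelItems(items)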

    Read the article

  • libvirt and VirtualBox / Getting Started

    - by Marc Lucas
    I'm trying to get started with libvirt using VirtualBox as the virtualization solution. I installed everything, and VirtualBox itself runs when using its VBoxHeadless command. However, libvirt fails to connect to VirtualBox:

        # virsh -c vbox:///session
        libvir: error : could not connect to vbox:///session
        error: failed to connect to the hypervisor

    I could not find any hints in the libvirt documentation that indicate whether I have to do any domain-specific configuration before using virsh. Does anyone have a hint? Or even better, maybe a tutorial that works through using libvirt, virsh, or its APIs (my later goal) from the ground up?

    Read the article

  • dojo/dijit and Printing

    - by Kitson
    I want to be able to provide a button to my users to just print a particular portion of my dojo/dijit application. There seems to be a general lack of documentation and examples when it comes to printing. For example, I have a specific dijit.layout.ContentPane that contains the content that I would like to print, but I wouldn't want to print the rest of the document. I have seen some pure JavaScript examples on the web where the node.innerHTML is read into a "hidden" iframe and then printed from there. I suspect that would work, but I was wondering if there was a more dojo-centric approach to printing. Any thoughts?
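
    A sketch of the iframe approach mentioned above in plain dojo 1.x idioms; as far as I know there is no dedicated dijit printing API, so this copies the ContentPane's DOM node into a hidden iframe and prints only that frame (the ids are hypothetical):

        function printPane(paneId) {
            // Grab the ContentPane's DOM node by its widget id.
            var node = dijit.byId(paneId).domNode;

            // Assumes <iframe id="printFrame" style="display:none"></iframe>
            // exists somewhere on the page.
            var frame = dojo.byId("printFrame");
            var doc = frame.contentWindow.document;
            doc.open();
            doc.write("<html><body>" + node.innerHTML + "</body></html>");
            doc.close();

            frame.contentWindow.focus();
            frame.contentWindow.print();
        }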

    Read the article

  • Can dummy objects be simulated on iPhone?

    - by Mike
    I come from 3D animation, and one of the basic things all 3D software packages have is the ability to create dummy objects. Dummy objects can be used to group objects so that they can be rotated, moved, or scaled together around a specific anchor point. That is the idea behind what I am asking. Obviously we can fake dummies by using a view and putting other views inside it as subviews, but this has problems: the view receives clicks, and sometimes you don't want it to. You cannot change the anchor point of a view either. So, the dummies I am asking about have at least these properties:

    - adjustable anchor point
    - not clickable
    - totally invisible (never rendered)
    - any scale, rotation, and translation of the dummy are propagated to the grouped objects, taking the dummy's anchor point into account
    - totally animatable

    Can this be simulated on iPhone? Is there any object that can be created to simulate this? Thanks.
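
    CALayer comes close to this: a plain layer with no contents draws nothing, is not hit by a view's touch handling, has an adjustable anchorPoint, propagates its transform to all sublayers, and is animatable. A minimal Swift sketch of the idea; childLayerA, childLayerB, and hostView are placeholders:

        import QuartzCore

        // A contentless "dummy" layer: invisible, and any transform applied
        // to it moves every sublayer around its anchorPoint.
        let dummy = CALayer()
        dummy.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
        dummy.anchorPoint = CGPoint(x: 0.5, y: 0.5) // adjustable pivot

        // Group the children under the dummy.
        dummy.addSublayer(childLayerA)
        dummy.addSublayer(childLayerB)
        hostView.layer.addSublayer(dummy)

        // Rotating the dummy rotates the whole group, and it animates.
        dummy.setAffineTransform(CGAffineTransform(rotationAngle: .pi / 6))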

    Read the article

  • Windows Azure: Announcing release of Windows Azure SDK 2.2 (with lots of goodies)

    - by ScottGu
    Earlier today I blogged about a big update we made today to Windows Azure, and some of the great new features it provides. Today I'm also excited to announce the release of the Windows Azure SDK 2.2. Today's SDK release adds even more great features, including:

    - Visual Studio 2013 Support
    - Integrated Windows Azure Sign-In support within Visual Studio
    - Remote Debugging Cloud Services with Visual Studio
    - Firewall Management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM Images for MSDN Subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell Cmdlets and ScriptCenter

    The post below has more details on what's available in today's Windows Azure SDK 2.2 release. Also head over to Channel 9 to see the new episode of the Visual Studio Toolbox show that will be available shortly, and which highlights these features in a video demonstration.

    Visual Studio 2013 Support
    Version 2.2 of the Windows Azure SDK is the first official version of the SDK to support the final RTM release of Visual Studio 2013. If you installed the 2.1 SDK with the Preview of Visual Studio 2013, we recommend that you upgrade your projects to SDK 2.2. SDK 2.2 also works side by side with the SDK 2.0 and SDK 2.1 releases on Visual Studio 2012.

    Integrated Windows Azure Sign-In within Visual Studio
    Integrated Windows Azure Sign-In support within Visual Studio is one of the big improvements added with this Windows Azure SDK release. Integrated sign-in support enables developers to develop/test/manage Windows Azure resources within Visual Studio without having to download or use management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer inside Visual Studio and choose the "Connect to Windows Azure" context menu option to connect to Windows Azure. Doing this will prompt you to enter the email address of the account you wish to sign in with. You can use either a Microsoft Account (e.g. Windows Live ID) or an Organizational account (e.g. Active Directory) as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer (and you can start using them).

    With this new integrated sign-in experience you are now able to publish web apps, deploy VMs and cloud services, use Windows Azure diagnostics, and fully interact with your Windows Azure services within Visual Studio without the need for a management certificate. All of the authentication is handled using the Windows Azure Active Directory associated with your Windows Azure account (details on this can be found in my earlier blog post). Integrating authentication this way end-to-end across the Service Management APIs + Dev Tools + Management Portal + PowerShell automation scripts enables a much more secure and flexible security model within Windows Azure, and makes it much more convenient to securely manage multiple developers + administrators working on a project. It also allows organizations and enterprises to use the same authentication model in the cloud that they use for their developers on-premises, and it ensures that employees who leave an organization immediately lose access to their company's cloud-based resources once their Active Directory account is suspended.

    Filtering/Subscription Management
    Once you log in within Visual Studio, you can filter which Windows Azure subscriptions/regions are visible within the Server Explorer by right-clicking the "Filter Services" context menu within the Server Explorer. You can also use the "Manage Subscriptions" context menu to manage your Windows Azure Subscriptions. Bringing up the "Manage Subscriptions" dialog allows you to see which accounts you are currently using, as well as which subscriptions are within them. The "Certificates" tab allows you to continue to import and use management certificates to manage Windows Azure resources as well. We have not removed any functionality with today's update; all of the existing scenarios that previously supported management certificates within Visual Studio continue to work just fine. The new integrated sign-in support provided with today's release is purely additive.

    Note: the SQL Database node and the Mobile Service node in Server Explorer do not support integrated sign-in at this time. Therefore, you will only see databases and mobile services under those nodes if you have a management certificate to authorize access to them. We will enable them with integrated sign-in in a future update.

    Remote Debugging Cloud Resources within Visual Studio
    Today's Windows Azure SDK 2.2 release adds support for remote debugging many types of Windows Azure resources. With live, remote debugging support from within Visual Studio, you are now able to have more visibility than ever before into how your code is operating live in Windows Azure. Let's walk through how to enable remote debugging for a Cloud Service.

    Remote Debugging of Cloud Services
    To enable remote debugging for your cloud service, select Debug as the Build Configuration on the Common Settings tab of your Cloud Service's publish dialog wizard. Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox. Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code. Then use Visual Studio's Server Explorer to select the Cloud Service instance deployed in the cloud, and use the Attach Debugger context menu on the role or on a specific VM instance of it. Once the debugger attaches to the Cloud Service and a breakpoint is hit, you'll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real time, and see exactly how your app is running in the cloud.

    Today's remote debugging support is super powerful, and makes it much easier to develop and test applications for the cloud. Support for remote debugging Cloud Services is available as of today, and we'll also enable support for remote debugging Web Sites shortly.

    Firewall Management Support with SQL Databases
    By default we enable a security firewall around SQL Databases hosted within Windows Azure. This ensures that only your application (or IP addresses you approve) can connect to them, and helps make your infrastructure secure by default. This is great for protection at runtime, but can sometimes be a pain at development time (since by default you can't connect to or manage the database remotely within Visual Studio if the security firewall blocks your instance of VS from connecting to it). One of the cool features we've added with today's release is support that makes it easy to enable and configure the security firewall directly within Visual Studio.

    Now with the SDK 2.2 release, when you try to connect to a SQL Database using the Visual Studio Server Explorer and a firewall rule prevents access to the database from your machine, you will be prompted to add a firewall rule to enable access from your local IP address. You can simply click Add Firewall Rule and a new rule will be automatically added for you. In some cases, the logic to detect your local IP may not be sufficient (for example: you are behind a corporate firewall that uses a range of IP addresses) and you may need to set up a firewall rule for a range of IP addresses in order to gain access. The new Add Firewall Rule dialog also makes this easy to do. Once connected, you'll be able to manage your SQL Database directly within the Visual Studio Server Explorer. This makes it much easier to work with databases in the cloud.

    Visual Studio 2013 RTM Virtual Machine Images Available for MSDN Subscribers
    Last week we released the General Availability Release of Visual Studio 2013 to the web. This is an awesome release with a ton of new features. With today's Windows Azure update we now have a set of pre-configured VM images of VS 2013 available within the Windows Azure Management Portal for use by MSDN customers. This enables you to create a VM in the cloud with VS 2013 pre-installed on it with only a few clicks. Windows Azure now provides the fastest and easiest way to get started doing development with Visual Studio 2013.

    Windows Azure Management Libraries for .NET (Preview)
    Having the ability to automate the creation, deployment, and tear-down of resources is a key requirement for applications running in the cloud. It also helps immensely when running dev/test scenarios and coded UI tests against pre-production environments. Today we are releasing a preview of a new set of Windows Azure Management Libraries for .NET. These new libraries make it easy to automate tasks using any .NET language (e.g. C#, VB, F#, etc). Previously this automation capability was only available through the Windows Azure PowerShell Cmdlets or to developers who were willing to write their own wrappers for the Windows Azure Service Management REST API.

    Modern .NET Developer Experience
    We've worked to design easy-to-understand .NET APIs that still map well to the underlying REST endpoints, making sure to use and expose the modern .NET functionality that developers expect today:

    - Portable Class Library (PCL) support targeting applications built for any .NET Platform (no platform restriction)
    - Shipped as a set of focused NuGet packages with minimal dependencies to simplify versioning
    - Support for async/await task-based asynchrony (with easy sync overloads)
    - Shared infrastructure for common error handling, tracing, configuration, HTTP pipeline manipulation, etc.
    - Factored for easy testability and mocking
    - Built on top of popular libraries like HttpClient and Json.NET

    Below is a list of a few of the management client classes that are shipping with today's initial preview release:

    .NET Class Name                  Supports Operations for these Assets (and potentially more)
    ManagementClient                 Locations, Credentials, Subscriptions, Certificates
    ComputeManagementClient          Hosted Services, Deployments, Virtual Machines, Virtual Machine Images & Disks
    StorageManagementClient          Storage Accounts
    WebSiteManagementClient          Web Sites, Web Site Publish Profiles, Usage Metrics, Repositories
    VirtualNetworkManagementClient   Networks, Gateways

    Automating Creating a Virtual Machine using .NET
    Let's walk through an example of how we can use the new Windows Azure Management Libraries for .NET to fully automate creating a Virtual Machine. I'm deliberately showing a scenario with a lot of custom options configured, including VHD image gallery enumeration, attaching data drives, and network endpoint + firewall rule setup, to show off the full power and richness of what the new library provides. We'll begin with some code that demonstrates how to enumerate through the built-in Windows images within the standard Windows Azure VM Gallery. We'll search for the first VM image that has the word "Windows" in it and use that as our base image to build the VM from. We'll then create a cloud service container in the West US region to host it within. We can then customize some options on it such as setting up a computer name, admin username/password, and hostname. We'll also open up a remote desktop (RDP) endpoint through its security firewall. We'll then specify the VHD host and data drives that we want to mount on the Virtual Machine, and specify the size of the VM we want to run it in. Once everything has been set up, the call to create the virtual machine is executed asynchronously. In a few minutes we'll then have a completely deployed VM running on Windows Azure with all of the settings (hard drives, VM size, machine name, username/password, network endpoints + firewall settings) fully configured and ready for us to use.

    Preview Availability via NuGet
    The Windows Azure Management Libraries for .NET are now available via NuGet. Because they are still in preview form, you'll need to add the -IncludePrerelease switch when you go to retrieve the packages. The Package Manager Console screenshot below demonstrates how to get the entire set of libraries to manage your Windows Azure assets. You can also install them within your .NET projects by right-clicking on the VS Solution Explorer and using the Manage NuGet Packages context menu command. Make sure to select the "Include Prerelease" drop-down for them to show up, and then you can install the specific management libraries you need for your particular scenarios.

    Open Source License
    The new Windows Azure Management Libraries for .NET make it super easy to automate management operations within Windows Azure, whether they are for Virtual Machines, Cloud Services, Storage Accounts, Web Sites, and more. Like the rest of the Windows Azure SDK, we are releasing the source code under an open source (Apache 2) license, and it is hosted at https://github.com/WindowsAzure/azure-sdk-for-net/tree/master/libraries if you wish to contribute.

    PowerShell Enhancements and our New Script Center
    Today, we are also shipping Windows Azure PowerShell 0.7.0 (which is a separate download). You can find the full change log here. Here are some of the improvements provided with it:

    - Windows Azure Active Directory authentication support
    - Script Center providing many sample scripts to automate common tasks on Windows Azure
    - New cmdlets for Media Services and SQL Database

    Script Center
    Windows Azure enables you to script and automate a lot of tasks using PowerShell. People often ask for more pre-built samples of common scenarios so that they can use them to learn and tweak/customize. With this in mind, we are excited to introduce a new Script Center that we are launching for Windows Azure. You can learn how to get started scripting with Windows Azure in the get-started article. You can then find many sample scripts across different solutions, including infrastructure, data management, web, and more. All of the sample scripts are hosted on TechNet with links from the Windows Azure Script Center. Each script is complete with good code comments, detailed descriptions, and examples of usage.

    Summary
    Visual Studio 2013 and the Windows Azure SDK 2.2 make it easier than ever to get started developing rich cloud applications. Along with the Windows Azure Developer Center's growing set of .NET developer resources to guide your development efforts, today's Windows Azure SDK 2.2 release should make your development experience more enjoyable and efficient. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Covariance and Contravariance on the same type argument

    - by William Edmondson
    The C# spec states that a type argument cannot be both covariant and contravariant at the same time. This is apparent when creating a covariant or contravariant interface: you decorate your type parameters with "out" or "in" respectively. There is no option that allows both at the same time ("outin"). Is this limitation simply a language-specific constraint, or are there deeper, more fundamental reasons based in category theory that would make you not want your type to be both covariant and contravariant?

    Edit: My understanding was that arrays were actually both covariant and contravariant.

        public class Pet{}
        public class Cat : Pet{}
        public class Siamese : Cat{}

        Cat[] cats = new Cat[10];
        Pet[] pets = new Pet[10];
        Siamese[] siameseCats = new Siamese[10];

        //Cat array is covariant
        pets = cats;
        //Cat array is also contravariant since it accepts conversions from wider types
        cats = siameseCats;
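
    For what it's worth, the usual way to see the constraint: an out parameter may appear only in output (return) positions and an in parameter only in input positions, and a type argument used in both positions would break whichever conversion you allowed. A small sketch:

        // Covariant: T appears only as output, so an IProducer<Cat>
        // can safely be used where an IProducer<Pet> is expected.
        interface IProducer<out T> { T Produce(); }

        // Contravariant: T appears only as input, so an IConsumer<Pet>
        // can safely be used where an IConsumer<Cat> is expected.
        interface IConsumer<in T> { void Consume(T item); }

        // Invariant: T flows both in and out, so neither conversion is
        // safe, and the language offers no "outin" marker.
        interface ITransformer<T> { T Transform(T item); }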

    Read the article

  • How to learn high-level Java web development concepts

    - by titaniumdecoy
    I have some experience writing web applications in Java for class projects. My first project used Servlets and my second, the Stripes framework. However, I feel that I am missing the greater picture of Java web development. I don't really understand the web.xml and context.xml files. I'm not sure what constitutes a Java EE application as opposed to a generic Java web application. I can't figure out how a bean is different from an ordinary Java class (POJO?) and how that differs from an Enterprise Java Bean (EJB). These are just the first few questions I could think of, but there are many more. What is a good way to learn how Java web applications function from the top down rather than simply how to develop an application with a specific framework? (Is there a book for this sort of thing?) Ultimately, I would like to understand Java web applications well enough to write my own framework.

    Read the article

  • Problem loading XmlDocument with non-standard tags

    - by David Conde
    Hi, I have code that needs to load an XML document from a reader, something like this:

        private static XmlDocument GetDocumentStream(string xmlAddress)
        {
            var settings = new XmlReaderSettings();
            settings.DtdProcessing = DtdProcessing.Ignore;
            settings.ValidationFlags = XmlSchemaValidationFlags.None;

            var document = new XmlDocument();
            var reader = XmlReader.Create(xmlAddress, settings);
            document.Load(reader);
            return document;
        }

    But in my XML document, I have nodes like this one:

        <link rel="edit-media" title="Package" href="Packages(Id='51Degrees.mobi',Version='0.1.11.9')/$value" />

    It is my understanding that the node should be like:

        <link rel="edit-media" title="Package"></link>

    But I don't create the XML document, and I certainly don't want to change it. When I try to load the XML document, the document.Load line throws an exception. To be more specific, the XML file is the RSS source for the nuPack project. Any ideas on how to read this document properly would be very appreciated.

    Read the article

  • Relating NP-Complete problems to real world problems

    - by terru
    I have a decent grasp of NP-Complete problems; that's not the issue. What I don't have is a good sense of where they turn up in "real" programming. Some (like knapsack and traveling salesman) are obvious, but others don't seem obviously connected to "real" problems. I've had the experience several times of struggling with a difficult problem only to realize it is a well-known NP-Complete problem that has been researched extensively. If I had recognized the connection more quickly I could have saved quite a bit of time researching existing solutions to my specific problem. Are there any resources (online or print) that specifically connect NP-Complete problems to real-world instances?

    Read the article

  • How to void authorized transaction in authorize.net gateway using ActiveMerchant

    - by m7d
    Goal: Only have successful purchases show up on a customer's billing statement. I don't want declined authorizations showing up on their billing statement (as seen in an online banking system) as pending. A customer will often accidentally input an incorrect billing address, for example, followed by a correct one. Together, the two attempts, one successful and one not, both show up on their billing statement as pending prior to settlement. This can scare the customer, as it looks like they could be charged twice.

    Details: When I do an AUTH_CAPTURE (via ActiveMerchant's purchase) or an AUTH (via ActiveMerchant's authorize) that is declined, and I subsequently want to void that authorization (via ActiveMerchant's void) so it does not appear on the customer's billing statement as pending (even though it will settle out after a few days), the gateway can't find the transaction to void using the authorization code returned from the authorize or capture calls. This is specific to the authorize.net AIM gateway. Please advise. Thanks!
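
    A sketch of the sequence in question as it normally looks with ActiveMerchant (credentials and card details are placeholders). Note the hedge in the comments: a declined authorization never creates a transaction at the gateway, which may be why there is nothing found to void:

        require 'active_merchant'

        gateway = ActiveMerchant::Billing::AuthorizeNetGateway.new(
          :login    => 'API_LOGIN_ID',      # placeholder credentials
          :password => 'TRANSACTION_KEY'
        )

        credit_card = ActiveMerchant::Billing::CreditCard.new(
          :first_name => 'Jane', :last_name => 'Doe',
          :number => '4242424242424242', :month => 12, :year => 2015,
          :verification_value => '123'
        )

        response = gateway.authorize(1000, credit_card) # $10.00, authorize only

        if response.success?
          # void() expects the authorization handle the gateway returned.
          gateway.void(response.authorization)
        else
          # A declined auth may simply have no transaction to void.
          puts response.message
        end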

    Read the article

  • How significant is the Bazaar performance factor?

    - by memodda
    I hear all this stuff about Bazaar being slower than git. I haven't used distributed version control much yet, but in Bazaar vs. Git on the Bazaar site, they say that most complaints about performance are no longer true. Have you found this to be the case? Is performance pretty much on par now? I've heard that speed can affect workflow (people are more likely to do good thing X if X is fast). In what specific cases does performance currently affect workflow in Bazaar vs. other systems (especially git), and how? I'm just trying to get at why performance is of particular importance. Usually when I check something in or update, I expect it to take a little while, but it doesn't matter. I commit/update when I have a second, so it doesn't interfere with my productivity. But then I haven't used a DVCS yet, so maybe that has something to do with it.

    Read the article

  • Anyone successfully using Commission Junction API ?

    - by Mauricio Scheffer
    Is anyone successfully using the CJ web services? I just keep getting java.lang.NullPointerException errors even though my app is .NET (clearly errors on their end). CJ support doesn't even know what a web service is. I googled and found many people getting this or other errors. The question is: is it a temporary problem, or am I doomed to parse manually downloaded reports for eternity? The specific API I'm trying to use is the daily publisher commission service. Here is the WSDL. Links: CJ web services home, API Reference

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
    Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed reports yesterday, or what the average number of users accessing reports is during the week vs. the weekend, or in the morning vs. the afternoon? With BI Publisher 11G, you can now audit your users' report access and understand the state of the reporting environment at the server, user, or report level. In a previous post I talked about BI Publisher's auditing functionality and how to enable it so that BI Publisher can start collecting such data. (How to Audit and Monitor BI Publisher Reports Access?) Now, how can you visualize such auditing data to gain a better understanding and more insights?

    With Fusion Middleware Audit Framework you have an option to store the auditing data in a database instead of a log file, which is the default option. Once you enable the database storage option, you have your auditing data (that is, user report access data) in database tables, and from there it is a no-brainer: you can start visualizing the data, creating reports, analyzing, and sharing with BI Publisher. So, first, let's take a look at how to enable the database storage option for the auditing data.

    How to Feed the Auditing Data into Database

    Create Audit Schema with RCU
    First you need to create a database schema for Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11G you should be familiar with this RCU. It creates any database schema necessary to run any Fusion Middleware products, including the BI stuff, and you can use the same RCU that you used for your BI or BI Publisher installation to create this Audit schema. Here are the steps:
    - Go to $RCU_HOME/bin and execute the 'rcu' command.
    - Choose Create at the starting screen and click Next.
    - Enter your database details and click Next.
    - Choose the option to create a new prefix, for example 'BIP', 'KAN', etc.
    - Select 'Audit Services' from the list of schemas.
    - Click Next and accept the tablespace creation.
    - Click Finish to start the process.

    After this, the following three Audit-related schemas should be created in your database:
    - <prefix>_IAU (e.g. KAN_IAU)
    - <prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND)
    - <prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER)

    Setup Datasource at WebLogic
    After you create a database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the database schema that was created with the RCU in the previous step.
    - Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console)
    - Under Services, click the Data Sources link.
    - Click 'Lock & Edit' so that you can make changes.
    - Click New -> 'Generic Datasource' to create a new data source.
    - Enter the following details for the new data source:
      Name: Enter a name such as Audit Data Source-0.
      JNDI Name: jdbc/AuditDB
      Database Type: Oracle
    - Click Next and select 'Oracle's Driver (Thin XA) Versions: 9.0.1 or later' as the Database Driver (if you're using an Oracle database), and click Next.
    - The Connection Properties page appears. Enter the following information:
      Database Name: Enter the name of the database (SID) to which you will connect.
      Host Name: Enter the hostname of the database.
      Port: Enter the database port.
      Database User Name: This is the name of the audit schema that you created in RCU. The suffix is always IAU for the audit schema. For example, if you gave the prefix as 'KAN', then the schema name would be 'KAN_IAU'.
      Password: This is the password for the audit schema that you created in RCU.
    - Click Next. Accept the defaults, and click Test Configuration to verify the connection. Click Next.
    - Check the listed servers where you want to make this JDBC connection available.
    - Click 'Finish'!

    After that, make sure you click 'Activate Changes' at the top left-hand side to make the new JDBC connection take effect.

    Register your Audit Data Storing Database to your Domain
    Finally, you can register the JNDI/JDBC datasource as your auditing data storage with Fusion Middleware Control (EM). Here are the steps:
    1. Login to Fusion Middleware Control.
    2. Navigate to WebLogic Domain, right click on 'bifoundation…..', select Security, then Audit Store.
    3. Click the searchlight icon next to the Datasource JNDI Name field.
    4. Select the Audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK.
    5. Click Apply to continue.
    6. Restart all the WebLogic Servers in the domain.

    After this, BI Publisher should start feeding all the auditing data into the database table called 'IAU_BASE'. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the 'IAU_BASE' table. If it is not working, you might want to check the log file, which is located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there is any error. Once you have the data in the database table, it's time to visualize it with BI Publisher reports!

    Create a First BI Publisher Auditing Report

    Register Auditing Datasource as JNDI datasource
    The first thing you need to do is register the audit datasource (the JNDI/JDBC connection) you created in the previous step as a JNDI data source in BI Publisher. It is a JDBC connection registered as JNDI; that means you don't need to create a new JDBC connection by typing the connection URL, username/password, etc. You can just register it using the JNDI name (e.g. jdbc/AuditDB).
    - Login to BI Publisher as Administrator (e.g. weblogic).
    - Go to the Administration page.
    - Click 'JNDI Connection' under Data Sources and click 'New'.
    - Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic Console as the auditing datasource (e.g. jdbc/AuditDB).
    - Click 'Test Connection' to make sure the datasource connection works.
    - Provide appropriate roles so that report developers or viewers can share this data source to view reports.
    - Click 'Apply' to save.

    Create Data Model
    - Select Data Model from the tool bar menu 'New'.
    - Set 'Default Data Source' to the audit JNDI data source you created in the previous step.
    - Select 'SQL Query' for your data set.
    - Use Query Builder to build a query, or just type a SQL query. Either way, the table you want to report against is 'IAU_BASE'. This IAU_BASE table contains the auditing data for the other products running on the WebLogic Server as well, such as JPS, OID, etc. So if you care only about BI Publisher, you want to filter using the 'IAU_COMPONENTTYPE' column, which contains the product name (e.g. 'xmlpserver' for BI Publisher). Here is my sample SQL query:

        select
            "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE",
            "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE",
            "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY",
            "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING",
            to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') IAU_DATE,
            to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY,
            to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24,
            to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR,
            "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR",
            "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE",
            "IAU_BASE"."IAU_TARGET" as "IAU_TARGET",
            "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT",
            "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE",
            "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP"
        from "KAN3_IAU"."IAU_BASE" "IAU_BASE"
        where "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver'

    Once you have saved a sample XML for this data model, you can create a report with it.

    Create Report
    Now you can use one of BI Publisher's layout options to design the report layout and visualize the auditing data. I'm a big fan of the Online Layout Editor; it's just so easy and simple to create reports, and on top of that, every report created with the Online Layout Editor gets the Interactive View with automatic data linking and filtering, without any setting or coding. If you haven't checked out the Interactive View or the Online Layout Editor, you might want to look at these previous blog posts. (Interactive Reporting with BI Publisher 11G, Interactive Master Detail Report Just A Few Clicks Away!) But of course, you can use other layout design options such as an RTF template. Here are some sample screenshots of my report design with the Online Layout Editor.

    Visualize and Gain More Insights about your Customers (Users)!
    Now you can visualize your auditing data to understand your reporting environment better and gain more insights. It has been helping me personally to answer questions like these:
    - How many reports were accessed or opened yesterday, today, or last week?
    - Who is accessing which report at what time?
    - In which time windows does most of the report access happen?
    - What are the most viewed reports?
    - Who are the active users?
    - What is the trend in report access or user access over the last month, last 6 months, last 12 months, etc.?

    I was talking with one of the best concierges in the world at a hotel the other day, and he was telling me that the best concierges know their customers inside out, so they can provide a very personal service customized to each customer's specific needs. Well, the same is true when it comes to administering and managing your reporting environment, right? The best way to serve your customers (report users, both viewers and developers) is to understand how they use it, what they use, and when they use it. Auditing is not just about compliance; it is a way to improve customer service. The BI Publisher 11G auditing feature enables just that, helping you understand your customers better.

    Happy customer service, be the best reporting concierge!

    p.s. Please share with us what other information would be helpful for you for auditing! As always, any feedback is a great value and inspiration for us!

    Read the article

  • Firebird Insert Distinct Data Using ZeosLib and Delphi

    - by Brad
    I'm using Zeos 7 and Delphi 2009, and I want to check whether a value is already in the database under a specific field before I post the data to the database.

    Example: field KEYWORD has values Cheese, Mouse, Trap; tblkeywordKEYWORD.Value = Cheese

    What is wrong with the following? And is there a better way?

        zQueryKeyword.SQL.Add('IF NOT EXISTS(Select KEYWORD from KEYWORDLIST ='''+
          tblkeywordKEYWORD.Value+''')INSERT into KEYWORDLIST(KEYWORD) VALUES ('''+
          tblkeywordKEYWORD.Value+'''))');
        zQueryKeyword.ExecSql;

    I tried using the unique constraint in IB Expert, but it gives the following error:

        Invalid insert or update value(s): object columns are constrained - no 2 table rows can have duplicate column values. attempt to store duplicate value (visible to active transactions) in unique index "UNQ1_KEYWORDLIST".

    Thanks for any help -Brad
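
    One pattern commonly used with Firebird, sketched as a parameterized Zeos query; it assumes Firebird 2.1 or later, where UPDATE OR INSERT makes the check-then-insert atomic:

        // Inserts the keyword, or silently updates the existing row instead
        // of violating the unique index (Firebird 2.1+ syntax).
        zQueryKeyword.SQL.Text :=
          'UPDATE OR INSERT INTO KEYWORDLIST (KEYWORD) ' +
          'VALUES (:kw) MATCHING (KEYWORD)';
        zQueryKeyword.ParamByName('kw').AsString := tblkeywordKEYWORD.Value;
        zQueryKeyword.ExecSQL;

    Using a parameter also sidesteps the quoting problems of building the SQL by string concatenation.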

    Read the article

  • Magic quotes in PHP

    - by VirtuosiMedia
    According to the PHP manual, in order to make code more portable, they recommend using something like the following for escaping data:

        if (!get_magic_quotes_gpc()) {
            $lastname = addslashes($_POST['lastname']);
        } else {
            $lastname = $_POST['lastname'];
        }

    I have other validation checks that I will be performing, but how secure is the above, strictly in terms of escaping data? I also saw that magic quotes will be deprecated in PHP 6. How will that affect the above code? I would prefer not to have to rely on a database-specific escaping function like mysql_real_escape_string().
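
    Since the goal is to avoid driver-specific escaping, here is a minimal sketch of the prepared-statement route with PDO, which makes both addslashes() and magic quotes irrelevant to the query itself (table and column names are made up):

        <?php
        // Undo magic quotes first so the raw value is bound,
        // not a pre-escaped one.
        $lastname = get_magic_quotes_gpc()
            ? stripslashes($_POST['lastname'])
            : $_POST['lastname'];

        $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        $stmt = $pdo->prepare('INSERT INTO people (lastname) VALUES (:lastname)');
        $stmt->execute(array(':lastname' => $lastname));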

    Read the article

  • Most user-friendly issue tracker?

    - by Kugel
    Sorry if this has been asked before; I couldn't find anything specific. I'm looking for the most user-friendly issue tracker. Currently I'm using Trac. I'm looking for these 2 features:

    - Very easy to create a new ticket
    - Email-to-ticket conversion (with attachments)

    The customers should be glad to use it (not avoid it like Trac). More often than not I get e-mails as bug reports, and putting them into Trac wastes my time. I would really love to just hit forward and have the ticket appear in the issue tracker.

    Read the article

  • Creating an equation in a Word 2003 document using a macro (or through the API)

    - by Sambatyon
    I think the title is fully descriptive. Anyway, I need to generate a Word document from my Delphi application. It needs to choose one of four different equations (with some specific parameters for each document). So far I have managed to create the whole document programmatically except the equation. Is it possible to create equations programmatically? If so, where is the API documentation from MS? If not, which solution can be used?
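
    In Word 2003 the native route is an Equation Editor 3.0 OLE object; here is a hedged Delphi late-binding sketch. The 'Equation.3' ProgID is the standard Equation Editor class, but the rest is illustrative:

        uses ComObj;

        procedure InsertEquationObject;
        var
          WordApp: OleVariant;
        begin
          WordApp := CreateOleObject('Word.Application');
          WordApp.Visible := True;
          WordApp.Documents.Add;
          // Insert an empty Equation Editor 3.0 object at the selection.
          WordApp.Selection.InlineShapes.AddOLEObject('Equation.3');
          // As far as I know there is no documented API to type the
          // equation's content directly; the object has to be activated
          // and edited via Equation Editor (or a third-party SDK).
        end;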

    Read the article

  • MinGW GCJ libiconv path problem

    - by Hussain
    I'm having a problem with the MinGW implementation of GCJ. I read that you have to install libiconv before you can use it. However, the documentation wasn't very specific, and it did not say where to extract the binaries and developer files (libiconv-bin and libiconv-lib). I have tried the following paths (with $p = c:\mingw):

        $p\libiconv-1.9.2-1-[bin|lib]\
        $p\libiconv-[bin|lib]\
        $p\mingw32\libiconv-1.9.2-1-[bin|lib]
        $p\mingw32\[bin|lib]\libiconv
        $p\mingw\[bin|lib]\libiconv
        $p\bin\libiconv-1.9.2-1-[bin|lib]
        $p\bin\libiconv-[bin|lib]

    None of these work. Any help on where I'm supposed to put the libiconv files?
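
    For what it's worth, MinGW add-on archives are normally meant to be extracted straight into the MinGW root so that their contents merge with the existing bin\, lib\, and include\ directories. A sketch of the layout that should result (file names recalled from the libiconv 1.9.2 packages, so treat them as assumptions):

        C:\MinGW\bin\libiconv-2.dll     <- from libiconv-1.9.2-1-bin
        C:\MinGW\lib\libiconv.a         <- from libiconv-1.9.2-1-lib
        C:\MinGW\include\iconv.h        <- from libiconv-1.9.2-1-lib

    In other words, there is no libiconv-named subdirectory at all; GCJ's default search paths only look in lib\ and include\ under the MinGW root.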

    Read the article

  • Combining a one-to-one relationship into one object in Fluent NHibernate

    - by Mike C.
    I have a one-to-one relationship in my database, and I'd like to just combine that into one object in Fluent NHibernate. The specific tables I am talking about are the aspnet_Users and aspnet_Membership tables from the default ASP.NET Membership implementation. I'd like to combine those into one simple User object and only get the fields I want. I would also like to make this read-only, as I want to use the built-in ASP.NET Membership API to modify the data; I simply want to take advantage of lazy-loading. Any help would be appreciated. Thanks!
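
    A sketch of one way to flatten the pair with Fluent NHibernate's Join mapping, marked read-only so the Membership API stays the writer. The column names follow the standard membership schema but should be treated as assumptions:

        using System;
        using FluentNHibernate.Mapping;

        public class User
        {
            public virtual Guid Id { get; set; }
            public virtual string UserName { get; set; }
            public virtual string Email { get; set; }
        }

        public class UserMap : ClassMap<User>
        {
            public UserMap()
            {
                Table("aspnet_Users");
                ReadOnly(); // NHibernate will never insert/update these rows

                Id(x => x.Id).Column("UserId");
                Map(x => x.UserName).Column("UserName");

                // Pull selected aspnet_Membership columns into the same object.
                Join("aspnet_Membership", join =>
                {
                    join.KeyColumn("UserId");
                    join.Map(x => x.Email).Column("Email");
                });
            }
        }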

    Read the article

  • ASP.NET MVC Response Filter + OutputCache Attribute

    - by Shane Andrade
    I'm not sure if this is an ASP.NET MVC-specific thing or ASP.NET in general, but here's what's happening. I have an action filter that removes whitespace by the use of a response filter:

        public class StripWhitespaceAttribute : ActionFilterAttribute
        {
            public StripWhitespaceAttribute() { }

            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                base.OnResultExecuted(filterContext);
                filterContext.HttpContext.Response.Filter =
                    new WhitespaceFilter(filterContext.HttpContext.Response.Filter);
            }
        }

    When used in conjunction with the OutputCache attribute, my calls to Response.WriteSubstitution for "donut hole caching" do not work. The first and second time the page loads, the callback passed to WriteSubstitution gets called; after that it is not called anymore until the output cache expires. I've noticed this not just with this particular filter but with any filter assigned to Response.Filter... am I missing something? I also forgot to mention that I've tried this without the MVC action filter attribute, by attaching to the PostReleaseRequestState event in global.asax and setting Response.Filter there... but still no luck.

    Read the article

  • MS Dynamics CRM 4.0 - onChange event error

    - by Brett
    Hi, I have an onChange event that keeps bringing up the error below whenever I preview it:

        'Object doesn't support this property or method'

    I have the onChange event associated with a picklist; when a specific option is selected, another field is unhidden. The code is below:

    onLoad:

        // If "How did you hear about us" is set to event, show the Source Event lookup
        crmForm.SourceEvent = function SourceEvent()
        {
            if (crmForm.all.gcs_howdidyouhearaboutus.DataValue == 5)
            {
                crmForm.all.gcs_sourceeventid_c.style.display = '';
                crmForm.all.gcs_sourceeventid_d.style.display = '';
            }
            else
            {
                crmForm.all.gcs_sourceeventid_c.style.display = 'none';
                crmForm.all.gcs_sourceeventid_d.style.display = 'none';
            }
        }
        crmForm.SourceEvent();

    onChange:

        crmForm.SourceEvent();

    It would be great if someone could let me know why this error is showing up. Also, this has happened on a few onChange events in the form preview, but once published onto the live system it does not error. Any ideas? Thank you, Brett

    Read the article
