Search Results

Search found 52153 results on 2087 pages for 'application role'.


  • Managing User & Role Security with Oracle SQL Developer

    - by thatjeffsmith
    With the advent of SQL Developer v3.0, users have had access to some powerful database administration features. Version 3.1 introduced more powerful features such as an interface to Data Pump and RMAN. Today I want to talk about some very simple but frequently run tasks that SQL Developer can assist with, like identifying privileges granted to users, managing role privileges, and assigning new roles and privileges to users and roles.

    Before getting started, you'll need a connection to the database with the proper privileges. The common role used to accomplish this is the 'DBA' role. Curious as to what the DBA role is actually comprised of? Let's find out!

    Open the DBA Console
    First make sure you're connected to the database you want to manage security on with a privileged administrator account. Then open the View menu and select 'DBA.'
    [Screenshot: Accessing the DBA panel]

    'Create' a Connection
    Click on the green '+' button in the DBA panel. It will ask you to choose a previously defined SQL Developer connection.
    [Screenshot: Defining a DBA connection in Oracle SQL Developer]
    Once connected you will see a tree list of DBA features you can start interacting with.

    Expand the 'Security' Tree Node
    As you click on an object in the DBA panel, the 'viewer' will open on the right-hand side, just like you are accustomed to seeing when clicking on a table or stored procedure.
    [Screenshot: Accessing the DBA role]
    If I'm a newly hired Oracle DBA, the first thing I might want to do is become very familiar with the DBA role. People will be asking you to grant them this role or a subset of its privileges. Once you see what the role can do, you will become VERY protective of it. My favorite 3-letter 4-letter word is 'ANY', and the DBA role is littered with privileges like this:
    [Screenshot: ANY TABLE privs granted to the DBA role]
    So if this doesn't freak you out, then maybe you should reconsider your career path. Or in other words, don't grant this role to ANYONE you don't completely trust to take care of your database. If I'm assigned a new database to manage, the first thing I might want to look at is just WHO has been assigned the DBA role. SQL Developer makes this easy to ascertain: just click on the 'User Grantees' panel.
    [Screenshot: Who has the keys to your car?]

    Making Changes to Roles and Users
    If you right-click on a user in the tree, you can do individual tasks like grant a sys priv or expire an account. But you can also use the 'Edit User' dialog to do a lot of work in one pass. As you click through options in these dialogs, it will build the 'ALTER USER' script in the SQL panel, which can then be executed or copied to the worksheet or to your .SQL file to be run at your discretion.

    A Few Clicks vs a Lot of Typing
    These dialogs won't make you a DBA, but if you're pressed for time and you're already in SQL Developer, they can sure help you make up for lost time in just a few clicks!
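    If you'd rather see what those panels are showing you, the same questions can be answered straight from the data dictionary in a worksheet. A minimal sketch using the standard Oracle dictionary views (the user name in the last statement is just a placeholder):

        -- What system privileges does the DBA role hold?
        SELECT privilege, admin_option
          FROM dba_sys_privs
         WHERE grantee = 'DBA'
         ORDER BY privilege;

        -- Who has been granted the DBA role?
        SELECT grantee, admin_option, default_role
          FROM dba_role_privs
         WHERE granted_role = 'DBA';

        -- And the kind of statement the 'Edit User' dialog builds for you:
        ALTER USER scott PASSWORD EXPIRE ACCOUNT LOCK;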

    Read the article

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer's Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them.

    Virtualization
    Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can "host" or run several "virtual" computers. These virtual computers can run anywhere - including at a vendor's location. Some companies refer to this as Cloud Computing, since the hardware is operated and maintained elsewhere.

    IaaS
    The more detailed definition of this type of computing is Infrastructure as a Service (IaaS), since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, which runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud.

    Stateful and Stateless Programming
    This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains "state" - and that requires a little explanation.

    "Stateful programming" means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back all in a single unit of time.

    In "stateless" computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you're doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the "send" button (the network has the state) and you go to a meeting. The server receives the message and stores it in a mail program's database (the mail server has the state now) and continues working on other mail. Finally, the other party logs on to their mail client and reads the mail (the other user has the state), responds to it, and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn't maintain the state; each component does. This is the exact concept behind coding for Windows Azure.
    The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs, as Windows Azure does, you have built-in redundancy and recovery. It's just built into the design.

    The Difference Between Infrastructure Designs and Platform Designs
    When you simply take a physical server running software and virtualize it, either privately or publicly, you haven't done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines, which have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It's also not as easy to deploy these VMs, and more importantly, you're often charged on a longer basis to remove them. Your agility in IaaS is more limited.

    Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of stateless programming. You don't worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn't what you want to manage, monitor, maintain or license. You don't want to deploy an operating system - you want to deploy an application. You want your code to run, and you don't care how it does that. Another benefit of PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there's no provisioning, it just happens. And you can stop using them quicker - and the base code for your application does not have to change to make this happen.

    Windows Azure Roles and Their Use
    If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn't involve a user interface you can add a Worker Role. They are just containers that act a certain way. I'll provide more detail on those later. Note: That's a general description, so it's not entirely accurate, but it's accurate enough for this discussion.

    So now we're back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can't. That's not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create the Private or Public Infrastructure as a Service. What the VM Role is actually designed to do is to allow you to have a great deal of control over the system where your code will run.

    Let's take an example. You've heard about Windows Azure and Platform programming. You're convinced it's the right way to code. But you have a lot of things you've written in another way at your company. Re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can code (or already have coded) the software to be "stateless"; you just need more control over the place where the code runs. That's the place where a VM Role makes sense.
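    To make "stateless" concrete, here is a minimal sketch of the pattern in C#. The in-memory queue is only a stand-in for a durable queue (such as an Azure storage queue) - an assumption made purely to keep the sketch self-contained and runnable:

        using System;
        using System.Collections.Concurrent;

        class StatelessWorkerSketch
        {
            static void Main()
            {
                // Stand-in for a durable queue; in Azure this would be external storage.
                var queue = new ConcurrentQueue<string>();
                queue.Enqueue("order:1001");
                queue.Enqueue("order:1002");

                // Each pass is independent: everything needed lives in the message itself,
                // so any copy of this worker, on any node, could pick up any message.
                while (queue.TryDequeue(out var message))
                {
                    Console.WriteLine(Process(message));
                }
            }

            static string Process(string message)
            {
                // No fields, no session, no machine affinity - pure input to output.
                return "processed " + message + " at " + DateTime.UtcNow.ToString("O");
            }
        }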
    Recap
    Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft's offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.

    Read the article

  • The Application Architecture Domain

    - by Michael Glas
    I have been spending a lot of time thinking about Application Architecture in the context of EA. More specifically, as an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?

    There are several definitions of Application Architecture. TOGAF says "The objective here [in Application Architecture] is to define the major kinds of application system necessary to process the data and support the business". FEA says the Application Architecture "Defines the applications needed to manage the data and support the business functions". I agree with these definitions. They reflect what the Application Architecture domain does. However, they need to be decomposed to be practical.

    I find it useful to define a set of views into the Application Architecture domain. These views reflect what an EA needs to consider when working with/in the Application Architecture domain. These viewpoints are, at a high level:

    Capability View: This view reflects how applications align with business capabilities. It is a superset of the following views when viewed in aggregate. By looking at the Application Architecture domain in terms of the business capabilities it supports, you get a good perspective on how those applications are directly supporting the business.

    Technology View: The technology view reflects the underlying technology that makes up the applications. Based on the number of rationalization activities I have seen (more specifically application rationalization), the phrase "complexity equals cost" drives the importance of the technology view, especially when attempting to reduce that complexity through standardization-type activities. Some of the technology components to be considered are:
    - Software: The application itself as well as the software the application relies on to function (web servers, application servers).
    - Infrastructure: The underlying hardware and network components required by the application and supporting application software.
    - Development: How the application is created and maintained. This encompasses development components that are part of the application itself (i.e. customizable functions), as well as bolt-on development through web services, APIs, etc. The maintenance process itself also falls under this view.
    - Integration: The interfaces that the application provides for integration, as well as the integrations to other applications and data sources the application requires to function.
    - Type: Reflects the kind of application (mash-up, 3-tiered, etc.). (Note: functional types [CRM, HCM, etc.] are reflected under the capability view.)

    Organization View: Organizations are made up of people, and those people use applications to do their jobs. Trying to define the application architecture domain without taking the organization that will use/fund/change it into consideration is like trying to design a car without thinking about who will drive it (i.e. you may end up building a Formula 1 car for a family of 5 that is really looking for a minivan). This view reflects the people aspect of the application. It includes:
    - Ownership: Who 'owns' the application? This will usually reflect primary funding and utilization, but not always.
    - Funding: Who funds both the acquisition/creation as well as the on-going maintenance (funding to create/change/operate)?
    - Change: Who can/does request changes to the application, and what process do they follow?
    - Utilization: Who uses the application, how often do they use it, and how do they use it?
    - Support: Which organization is responsible for the on-going support of the application?

    Information View: Whether or not you subscribe to the view that "information drives the enterprise", it is a fact that information is critical. The management, creation, and organization of that information are primary functions of enterprise applications. This view reflects how the applications are tied to information (or, at a higher level, how the Application Architecture domain relates to the Information Architecture domain). It includes:
    - Access: The application is the mechanism by which end users access information. This could be through a primary application (i.e. a CRM application), or through an information access type application (a BI application, as an example).
    - Creation: Applications create data in order to provide information to end-users (i.e. an application creates an order to be used by an end-user as part of the fulfillment process).
    - Consumption: Describes the data required by applications to function (i.e. a product id is required by a purchasing application to create an order).

    Application Service View: Organizations today are striving to be more agile. As an EA, I need to provide an architecture that supports this agility. One of the primary ways to achieve the required agility in the application architecture domain is through the use of 'services' (think SOA, web services, etc.). Whether it is through building applications from the ground up utilizing services, service-enabling an existing application, or buying applications that are already 'service enabled', compartmentalizing application functions for re-use helps enable flexibility in the use of those applications in support of the required business agility. The application service view consists of:
    - Services: Here, I refer to the generic definition of a service: "a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage".
    - Functions: The activities within an application that are not available/applicable for re-use. This view is helpful when identifying duplicate functions between applications that are not service enabled.

    Delivery Model View: It is hard to talk about EA today without hearing the terms 'cloud' or shared services. Organizations are looking at the ways their applications are delivered for several reasons: to reduce cost (both CAPEX and OPEX), to improve agility (time to market, as an example), etc. From an EA perspective, where and how an application is deployed has impacts on the overall enterprise architecture. From integration concerns to SLA requirements to security and compliance issues, the Enterprise Architect needs to factor in how applications are delivered when designing the Enterprise Architecture. This view reflects how applications are delivered to end-users, and consists of different delivery mechanisms/deployment options for applications:
    - Traditional: Reflects non-cloud delivery options. The most prevalent consists of an application running on dedicated hardware (usually specific to an environment) for a single consumer.
    - Private Cloud: The application runs on infrastructure provisioned for exclusive use by a single organization comprising multiple consumers.
    - Public Cloud: The application runs on infrastructure provisioned for open use by the general public.
    - Hybrid: The application is deployed on two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

    While by no means comprehensive, I find that applying these views to the application domain gives a good understanding of what an EA needs to consider when effecting changes to the Application Architecture domain.

    Finally, the application architecture domain is one of several architecture domains that an EA must consider when developing an overall Enterprise Architecture. The Oracle Enterprise Architecture Framework defines four primary domains: Business Architecture, Application Architecture, Information Architecture, and Technology Architecture. Each domain links to the others either directly or indirectly at some point. Oracle links them at a high level as follows: Business Capabilities and/or Business Processes (Business Architecture) link to the Applications that enable the capability/process (Application Architecture - COTS, Custom), which link to the Information Assets managed/maintained by the Applications (Information Architecture), which link to the technology infrastructure upon which all this runs (Technology Architecture - integration, security, BI/DW, DB infrastructure, deployment model). There are, however, times when the EA needs to narrow focus to a particular domain for some period of time. These views help me to do just that.

    Read the article

  • Advice on how to move from a .net developer role to a web developer role

    - by dermd
    I've been working primarily as a .net developer for the past 4 years for a financial services company. I've worked on .net 1.1, 2.0, and 3.5, and have done the 3.5 enterprise app developer cert (not that that's worth a whole lot!). Before that I worked as a java developer, with a bit of Flex thrown in, for just over a year. My educational background is an electronic and computer engineering degree, a higher diploma in systems analysis as well as one in web development (this was mainly java - JSP, Spring, etc.), and a science master's in software design and development.

    I really feel like a change and would like to move to a different field to experience something different. I've done some courses in RoR and played around with it a bit in my spare time. Similarly, I've done various web and mobile courses and built some mobile webapps along with android and ios equivalents (I haven't tried pushing them up to the app stores yet, but it may be worth tidying them up and doing that). I currently work long enough hours that I find it hard to make time for side projects and get a decent portfolio together. But when I do work on the web stuff I find it really enjoyable, so I think it's something I'd like to do full time.

    However, since my experience is pretty much all .net and financial services, I find it very hard to get my foot in the door anywhere or get past a phone screen unless they're specifically looking for someone with .net knowledge. What is the best way to move into a web development role without starting from scratch again? I do think a lot of the skills I have translate over, but I seem to just get matched with .net jobs whenever I look around. Apart from js, jquery, html5, and objective C, are there any other technologies I should be looking into?

    Read the article

  • Role provider and Role management

    - by AspOnMyNet
    When the CacheRolesInCookie property is set to true in the Web.config file, role information for each user is stored in a cookie. When role management checks to see whether a user is in a particular role, the roles cookie is checked before the role provider is called to check the list of roles at the data source. The cookie is dynamically updated to cache the most recently validated role names.

    a) As far as I understand the above text, even though role management checks the roles cookie first, the role provider still checks the list of roles at the data source?

    b) The above text talks about role management, which is invoked before the role provider is called. What class acts as the role management?

    Thanks
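    For reference, the setting quoted above lives on the roleManager element in Web.config. A minimal sketch using the standard ASP.NET attribute names (the cookie name shown is the framework default):

        <configuration>
          <system.web>
            <!-- The Roles API checks this cookie before falling back to the provider. -->
            <roleManager enabled="true"
                         cacheRolesInCookie="true"
                         cookieName=".ASPXROLES"
                         cookieTimeout="30"
                         defaultProvider="AspNetSqlRoleProvider" />
          </system.web>
        </configuration>

    On question b), the "role management" plumbing is in System.Web.Security: the RoleManagerModule attaches a RolePrincipal to the request, and calls such as Roles.IsUserInRole consult that principal's cached roles before the configured RoleProvider is hit.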

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js application on a worker role. Below are some benefits of using a worker role.
    - WAWS leverages IIS and IISNode to host Node.js applications, which run in x86 WOW mode. This reduces performance compared with x64 in some cases.
    - A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8,000 concurrent request limitation.
    - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
    - WACS provides service configuration features which can be changed while the role is running.
    - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
    - Since with a WACS worker role we start node ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.

    Run Node.js in a Worker Role
    Node.js can be started with nothing more than its executable. This means that in Windows Azure, we can have a worker role containing "node.exe" and the Node.js source files, and start it in the Run method of the worker role entry class.

    Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe".

    Then we need to create the entry code of Node.js. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, since we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project, we cannot create a JavaScript file from the "Add new item" context menu. We have to create a text file and then rename it with the JavaScript extension. After we add these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer"; otherwise they will not be included in the package when deployed.

    Let's paste some very simple Node.js code into "index.js", as below. As you can see, I created a web server listening on port 12345.

        var http = require("http");
        var port = 12345;

        http.createServer(function (req, res) {
            res.writeHead(200, { "Content-Type": "text/plain" });
            res.end("Hello World\n");
        }).listen(port);

        console.log("Server running at port %d", port);

    Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method: find the node.exe path and the entry JavaScript file name, then create a new process to run them. Our worker role will wait for the process to exit.
    If everything is OK, once our web server is open the process will stay there listening for incoming requests and should not terminate. The code in the worker role looks like this.

        public override void Run()
        {
            // This is a sample worker implementation. Replace with your logic.
            Trace.WriteLine("NodejsHost entry point called", "Information");

            // Retrieve the node.exe path and the entry node.js source code file name.
            var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
            var js = "index.js";

            // Prepare the process start information for node.exe.
            var info = new ProcessStartInfo(node, js)
            {
                CreateNoWindow = false,
                ErrorDialog = true,
                WindowStyle = ProcessWindowStyle.Normal,
                UseShellExecute = false,
                WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
            };
            Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");

            // Start node.exe with the entry code and wait for it to exit.
            var process = Process.Start(info);
            process.WaitForExit();
        }

    Then we can run it locally. In the compute emulator UI the worker role starts and executes Node.js, and the Node.js window appears. Open the browser to verify the website hosted by our worker role.

    Next let's deploy it to azure, which requires some additional steps. First, we need to create an input endpoint. By default there is no endpoint defined in a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we add a TCP endpoint to the worker role because Node.js always listens on TCP rather than HTTP. Then we change "index.js" to let our web server listen on 80.

        var http = require("http");
        var port = 80;

        http.createServer(function (req, res) {
            res.writeHead(200, { "Content-Type": "text/plain" });
            res.end("Hello World\n");
        }).listen(port);

        console.log("Server running at port %d", port);

    Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role. Note that we may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registers 80 and maps the 80 endpoint to 81, but our Node.js process cannot detect this, so when it tries to listen on 80 it fails, since 80 is already in use.

    Use NPM Modules
    When we are using WAWS to host Node.js, we can simply install the modules we need and then publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we download and install this module through the NPM command. After the install finishes, the files are just on disk, not included in the worker role project; if we deployed the worker role right now, the module would not be packaged and uploaded to azure. Hence we need to add them to the project. In the solution explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they are uploaded to azure along with "node.exe" and "index.js". This is a painful step, since there might be many files in a module.
    So I created a small tool which updates a C# project file, marking all of its items "Copy always". The code is very simple.

        static void Main(string[] args)
        {
            if (args.Length < 1)
            {
                Console.WriteLine("Usage: copyallalways [project file]");
                return;
            }

            var proj = args[0];
            File.Copy(proj, string.Format("{0}.bak", proj));

            var xml = new XmlDocument();
            xml.Load(proj);
            var nsManager = new XmlNamespaceManager(xml.NameTable);
            nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

            // Add the output setting to copy always.
            var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
            UpdateNodes(contentNodes, xml, nsManager);
            var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
            UpdateNodes(noneNodes, xml, nsManager);
            xml.Save(proj);

            // Remove the namespace attributes.
            var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
            xml.LoadXml(content);
            xml.Save(proj);
        }

        static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
        {
            foreach (XmlNode node in nodes)
            {
                var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
                if (copyToOutputDirectoryNode == null)
                {
                    var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
                    n.InnerText = "Always";
                    node.AppendChild(n);
                }
                else
                {
                    if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
                    {
                        copyToOutputDirectoryNode.InnerText = "Always";
                    }
                }
            }
        }

    Please be careful when using this tool. I created it only for demo purposes, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, and it will set all items to "Copy always". Then reload the worker role project.

    Now let's change "index.js" to use express.

        var express = require("express");
        var app = express();

        var port = 80;

        app.configure(function () {
        });

        app.get("/", function (req, res) {
            res.send("Hello Node.js!");
        });

        app.get("/User/:id", function (req, res) {
            var id = req.params.id;
            res.json({
                "id": id,
                "name": "user " + id,
                "company": "IGT"
            });
        });

        app.listen(port);

    Finally, let's publish it and have a look in the browser.

    Use Windows Azure SQL Database
    We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well when hosting on a worker role. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver"; this is better than hosting Node.js on WAWS, which only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project, and run my tool to set them to "Copy always". Finally, update "index.js" to use WASD.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot insert record.");
                        }
                        else {
                            res.send(200, "Inserted successfully.");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Publish to azure, and now we can see our Node.js working with WASD through the x64 version of "node-sqlserver".

    Summary
    In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js, as well as the entry code, and it's possible to do some preparatory work before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I described here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that, done this way, we cannot retrieve the cloud service configuration information.
    For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. This should be changed so that Node.js can retrieve the endpoint itself - but I can tell you it won't work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements
    This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:
    - Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that validate whether the appropriate patches, privileges, setups, and profile options have been configured. This feature reduces the setup and configuration time needed to be up and operational.

    Lifecycle Automation Enhancements
    Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that validate whether the appropriate patches, privileges, setups, and profile options have been configured, reducing the setup and configuration time needed to be up and operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: Automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings and E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings.
    - Test Checkout Command for Versions: Allows you to test/verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction.
    - OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: The package Readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: The ability to use more criteria within the advanced package search (that is, Public, Last Updated By, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: More detailed information on the results of a package build process, with better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: Automatic check for superseded patches, allowing users to easily add superseded patches into the patch run. More comprehensive and correct patch runs remove many manual and laborious tasks, freeing up Apps DBAs for higher-value tasks.
    - Automatic Primary Node Identification: Users can now specify the "primary node" (that is, the node that hosts the shared APPL_TOP) during the patch run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, providing native support for an industry-standard format.
    - Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers.

    User Experience Management Enhancements
    Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support.
    - KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (two decimal places by default). It is now also possible to view KPI numerator and denominator information and to have it available for export.
    - Content Message Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents.
    - Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database (an alternative database can be configured for its storage). Access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.
    - Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items; for dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour). There is also a performance improvement when querying the all-sessions group.

    Technical Prerequisites, Download and Installation Instructions
    The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:
    * Oracle Solaris on SPARC (64-bit) (9, 10)
    * HP-UX Itanium (11.23, 11.31)
    * HP-UX PA-RISC (64-bit) (11.23, 11.31)
    * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
    As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can:
    - View the menu items that the users can access through the Oracle Identity Manager Administration Web interface.
    - Assign users to roles.
    - Assign a role to a parent role.
    - Designate status to the users so that they can specify defined responses for process tasks.
    - Modify permissions on data objects.
    - Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoking members from the current role, and so on.
    - Designate provisioning policies for a role. These policies determine if a resource object is to be provisioned to or requested for a member of the role.
    - Assign or remove membership rules to or from the role. These rules determine which users can be assigned/removed as direct members to/from the role.

    In this post, I will share some examples of role management with the Oracle Identity Management API. For role operations you can use the Thor.API.Operations.tcGroupOperationsIntf interface:

        tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);

    Assign a user to a role:

        public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
        }

    Revoke a user from a role:

        public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
        }

    Get all members of a role:

        public List<User> getRoleMembers(String roleName) throws Exception {
            List<User> userList = new ArrayList<User>();
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));
            for (int i = 0; i < members.getRowCount(); i++) {
                members.goToRow(i);
                long userKey = members.getLongValue("Users.Key");
                User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));
                userList.add(member);
            }
            return userList;
        }

    About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware Team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, addicted to learning new technologies and developing new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data-secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that, while mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This could facilitate maintenance for enterprises with a large number of Business Units. Note: The example below applies as well if the securing entity is Inventory Organization.

    Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany.
    Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations

    With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices.
    Figure 2. The List Of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user, as can be seen in Figure 1.

    In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That condition will later be used to create a data policy for our newly created Role. The BU View is a database resource and is accessed from APM, as seen in the search below.
    Figure 3. Viewing a Database Resource in APM

    The next step is to create a new condition, in which we define a SQL predicate that includes 2 BUs (the ids below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected.
    Figure 4. Custom Role that inherits the Purchase Order Overview Duty

    We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to "Find Global Policies". We query the Role we want to secure and navigate to view its global policies.
    Figure 5. The Job Role we plan on securing

    We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier.
    Figure 6. Creating a New Data Policy

    In the General Information tab, we have to specify the DB Resource that the Security Policy applies to: in our case this is the BU View.
    Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by

    In the Rules tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU view with this data policy, by entering the condition name in the Condition field.
    Figure 8. Data Policy Rule

    The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role.
    Figure 9. Data Policy Actions

    We can now see a new Data Policy associated with our Role.
    Figure 10. Role is now secured by a Data Policy

    We now assign that new Role to the User. Of course this does not have to be done in OIM and can be done using a Provisioning Rule in HCM.
    Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role.

    Once that user accesses the Invoices Workarea, this is what they see: in the image below, the LOV of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany.
    Figure 12. The List Of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as can be seen in Figure 11.
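    Since the screenshot containing the actual predicate (Figure 4 above) is not reproduced here, note that the condition conceptually reduces to a one-line SQL predicate over the BU view, something like the sketch below. The column name and the two organization ids are hypothetical placeholders, not values taken from the original post:

        ORGANIZATION_ID IN (204, 458)  -- hypothetical ids for Vision Operations, Vision Germany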

    Read the article

  • EMERGENCY - Major Problems After Perl Module Installed via WHM

    - by Russell C.
    I attempted to install the perl module Net::Twitter::Role::API::Lists using WHM and after doing so my whole site came down. It seems that something updated during the install isn't functioning correctly, and since our website is written in Perl none of our site scripts will run. In almost 8 years of working with Perl I've never had any issue arise after installing a perl module, so I have no idea how to even start troubleshooting. The error I see when trying to compile any of our Perl scripts is below. I'd appreciate any advice on what might be wrong and steps on how I can go about resolving it. Thanks in advance for your help!

    Every compile fails with a long run of Moose errors of this form:

        Attribute (+type_constraint) of class MooseX::AttributeHelpers::Counter has no associated methods
        (did you mean to provide an "is" argument?)
        at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551

    The same "has no associated methods" error, each with a near-identical stack trace through Moose::Meta::Role::Application::ToClass::apply_attributes and Moose::Util::apply_all_roles, repeats for the +type_constraint and +default attributes of MooseX::AttributeHelpers::Counter, MooseX::AttributeHelpers::Number, and MooseX::AttributeHelpers::String. In every case the failure is triggered while loading Net::Amazon::S3 (pulled in via MooseX::Types::DateTimeX, MooseX::Types::DateTime::ButMaintained, Olson::Abbreviations, and MooseX::ClassAttribute) from /home/atrails/www/cgi-bin/main.pm line 1633, which is in turn required by /home/atrails/cron/meetup.pl line 20. (The remaining traces are truncated here; they differ only in the attribute and helper class names.)

    Read the article

  • Multi-Threaded Application vs. Single Threaded Application

    Why would we use a multithreaded application instead of a single-threaded one? First we must define multithreading. Multithreading is a feature of an operating system that allows programs to run subcomponents, or threads, in parallel. Typically an application only needs one thread, because it does not perform time-consuming tasks. Using multiple threads allows an application to distribute long-running tasks so that they can be executed in parallel. This gives the user the perceived appearance that the application is working faster, because while one thread is waiting on an IO process the remaining tasks can make use of the available CPU. This allows working threads to execute in tandem so that they can be completed sooner.

    Multithreading Benefits
    Improved responsiveness — Users usually report improved responsiveness compared to single-threaded applications.
    Faster applications — Multiple threads can lead to improved application performance.
    Prioritization — Threads can be assigned a priority, which allows higher-priority tasks to take precedence over lower-priority ones.

    Single Threading Benefits
    Programming and debugging — These activities are easier compared to multithreaded applications due to the reduced complexity.
    Less overhead — Threads add overhead to an application.

    When developing multithreaded applications, the following must be considered. Deadlocks occur when two threads each hold a monitor that the other one requires; in essence each task is blocking the other, and both tasks are waiting for the other monitor to be released, which forces the application to hang or deadlock. Resource allocation is used to prevent deadlocks: the system determines whether approving a resource request would leave the system in an unsafe state (one that could result in a deadlock) and only approves requests that lead to safe states. Thread synchronization is used when multiple threads share the same instance of an object; access to the object can be locked and synchronized so that each task interacts with the shared object one at a time, as in the sketch below.
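    To make that last point concrete, here is a minimal C# sketch of thread synchronization (illustrative only, not tied to any particular application): two threads update a shared counter, and a lock (a monitor) serializes access so each increment is applied one at a time. Without the lock, the final count would be unpredictable.

        using System;
        using System.Threading;

        class SynchronizedCounter
        {
            private static readonly object _gate = new object();
            private static int _count = 0;

            static void Main()
            {
                // Two worker threads update the same counter in parallel.
                Thread t1 = new Thread(Increment);
                Thread t2 = new Thread(Increment);
                t1.Start(); t2.Start();
                t1.Join(); t2.Join();
                Console.WriteLine("Final count: " + _count); // Always 200000 with the lock.
            }

            static void Increment()
            {
                for (int i = 0; i < 100000; i++)
                {
                    lock (_gate) // Monitor-based synchronization: one thread at a time.
                    {
                        _count++;
                    }
                }
            }
        }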

    Read the article

  • How to design application for scaling the application?

    - by Muhammad
    I have an application which handles hardware events from devices connected to the same computer's PCIe slots. The motherboard has a maximum of two PCIe slots, and I have utilized both. To scale the application I need either more PCIe slots in the same computer, or another computer. So consider that I am using another computer with the same application and hardware connected to its PCIe slots. My problem is that I now want to design an application on top of this that can access the hardware devices on both computers and process their events. The processed data should be sent back to the respective PC's hardware. Please refer to the attached diagram for the expansion.
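    One possible direction (a sketch only, since the question leaves the hardware API open): run a small TCP service on each PC in front of its local PCIe devices, and have a coordinating application connect to both services, pull events, process them, and push results back to the machine they came from. The port number and the line-based request/response protocol below are illustrative assumptions, and the device polling is a placeholder.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        // Hypothetical per-PC relay: exposes the local PCIe hardware over TCP so a
        // coordinator on another machine can read events and write results back.
        class HardwareRelay
        {
            static void Main()
            {
                TcpListener listener = new TcpListener(IPAddress.Any, 9000);
                listener.Start();
                Console.WriteLine("Hardware relay listening on port 9000...");
                while (true)
                {
                    using (TcpClient client = listener.AcceptTcpClient())
                    using (NetworkStream stream = client.GetStream())
                    {
                        // Read one request from the remote coordinator.
                        byte[] buffer = new byte[1024];
                        int read = stream.Read(buffer, 0, buffer.Length);
                        string request = Encoding.UTF8.GetString(buffer, 0, read);

                        // Placeholder: poll the local PCIe device driver here,
                        // then send the processed result back to the caller.
                        string response = "EVENT-DATA for " + request.Trim();
                        byte[] reply = Encoding.UTF8.GetBytes(response);
                        stream.Write(reply, 0, reply.Length);
                    }
                }
            }
        }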

    Read the article

  • Looking for reading material on application architecture with web UI

    - by toong
    I'm looking for articles (or other reading material) on the topic of fat-client applications with a web UI layer. Open-source projects that use this architecture would be very interesting too. Such an application would embed one or more browser windows (chromiumembedded, for example). You would need bidirectional communication between your web UI and your domain model/services. I think this allows quick prototyping of the UI, a clean separation between logic and UI, and potentially easier portability across platforms (compared to WinForms, for example). But that is just my view; I was looking for the view of people who have been down that road. An example of an application using a web UI layer is Light Table. Unfortunately it is not open source (at this point?).

    Read the article

  • Confused about application submission

    - by snowflake
    I finished my app but I'm totally confused about how to submit it to Launchpad and the Software Center. I'm also new to Launchpad. 1) I created a project at Launchpad and synchronized an OpenPGP key via the password tool. This was about 30 minutes ago but Launchpad still shows "No OpenPGP keys registered." Is this normal? 2) What do I have to do next with the PPA? 3) How do I submit my app? Do I have to adjust files with the PPA? The Showdown blog says that quickly submitubuntu is enough, but here in the forum I read about quickly release or quickly share. So which one is correct? 4) Do I have to add this file to Launchpad as well? 5) Do I get a confirmation that my app was correctly submitted and takes part in the contest? Since I'm doing this for the first time I want to be sure that everything is OK. I'm sorry for all my questions; I know there are many topics about this, but everywhere something different is written and I'm confused about how to proceed. Thanks in advance!

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wed, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results and metrics, roadmap, and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through the specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager, Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications.

    While we tackled a few questions during the webcast, we have captured the responses to those that we weren't able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING's implementation and strategy.

    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
    A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
    A. The new and old manager are in the workflow. The tool can check for any Separation of Duties (SOD) violations with both having similar access. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

    Q. What versions of OIM and OIA are being used at ING?
    A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

    Q. Are you using an entitlements/role catalog?
    A. Yes. We use both roles and entitlements.

    Q. What specific unexpected benefits did the Identity Warehouse provide ING?
    A. The most unanticipated was helping Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

    Q. How fine-grained are your applications and entitlements? Did OIA and OIM support that level of granularity?
    A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers the Oracle Entitlement Server. We currently do not own this software but are considering it.

    Q. Do you allow any individual access or is everything truly role-based?
    A. We are a hybrid environment with roles and individual positive and negative entitlements.

    Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
    A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

    Q. How did you handle rolling out the standard ID format to existing users?
    A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement types? For example, an app may have different levels of access and it may need to know the user's country/state to associate them with particular customers.
    A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet and interview the business. It is an iterative process to get role consensus.

    Q. Great presentation! How many rounds of certifications has ING performed so far?
    A. Around 7 quarters, plus constant certifications on transfer.

    Q. Did you have executive support from the top down?
    A. Yes. The executive support was key to our success.

    Q. For your cloud instance are you using OIA or OIM as SaaS?
    A. No. We are just provisioning and deprovisioning to various cloud providers. (ServiceNow is an example.)

    Q. How do you ensure a role owner does not get more privileges than are intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as this would affect the root role.
    A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

    Q. What is your ratio of employees to roles?
    A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good: we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

    Q. Is ING using the Oracle On Demand service?
    A. No.

    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the identity warehouse as well if you haven't reached OIM yet?
    A. No, an OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying both OIA and OIM together.

    Q. When is the Security Governor product coming out?
    A. Oracle Security Governor for Healthcare is available today.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform webcast series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today. You can also register for a live event at a city near you where Aberdeen's Derek Brink will discuss the survey results from the recently published report “Analyzing Platform vs. Point Solution Approach in Identity”. And you can do a quick (and free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here's the slide deck from our ING webcast: ING webcast platform.

    Read the article

  • local web application vs desktop application speed?

    - by Josh
    Which would be faster: a local web app GUI made with something like qooxdoo, or a desktop app? How much of a speed difference should be expected? I would prefer creating a web app, which could in the future be shared, over creating a desktop GUI that is tied to particular GUI toolkits.

    Read the article

  • iphone app with role based login?

    - by chaitanya
    Can iPhone apps have role-based login? In my application I have to display content according to the role of the user (employee, visitor). Till now I haven't seen any iPhone app with role-based login. Can I develop a role-based login? Is there any restriction from Apple's side on getting this kind of login approved in an app?

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure.

    SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. To see an introduction to SQL Azure, check out the post by Pinal here.

    In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add features that are not currently available with SQL Azure, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. The Azure worker role can perform background processes, and for processes such as synchronization and backup it is an ideal tool.

    First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

    Scaling out access over multiple databases enables us to handle workload efficiently.
    As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data.

    And some scenarios in which SQL Server to SQL Azure synchronization is beneficial:

    Backing up a SQL Azure database onto local infrastructure.
    Rather than investing in local infrastructure for increased workloads, such workloads can be handled by the cloud.
    The ability to extend data to datacenters located across the world, enabling efficient data access from remote locations.

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you add a new worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf).

    Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata that enables synchronization. This is a one-time step, and its code goes in the Setup() function, which is called once when the worker role starts. The second step is the continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function in the while loop, so the logic that synchronizes changes between the SQL Azure databases belongs in the Sync() function. Discussing the coding part step by step is out of the scope of this article. Therefore, let me suggest a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will reference some libraries before you start coding; details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here.

    Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP. It allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code; however, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you need such functionality now, you have the option of developing custom code using the Sync Framework.

    Now, this code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf). A rough sketch of this pattern appears at the end of this entry.

    Don't you think that by writing such code we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it: we are scheduling our administrative task by writing custom code. In other words, we have developed a lightweight SQL Server Agent for SQL Azure! Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! If you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that the SQL Server Agent provides, but it does open up an interesting avenue for scheduling tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role.

    Now, let us see one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload them locally, consider the data transfer cost. If you upload them to blobs residing in the same datacenter, no transfer cost applies, but the cost of the blob storage does. So, before choosing an option, evaluate your preferences keeping the cost associated with each option in mind.

    In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality?

    I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or mail me at Paras[at]student-partners[dot]com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
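    Since the actual snippets live in the linked PDFs, here is a rough, hypothetical C# sketch of the shape described above: Setup() runs once when the role starts, and Sync() runs in the loop, gated by a schedule check for the 10:00 pm case. The Sync Framework provisioning and change-propagation calls are deliberately left as placeholders; only the worker role skeleton is shown, and the class and method names are illustrative.

        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class SyncWorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                Setup(); // One-time provisioning: tracking tables, triggers, metadata.

                DateTime lastSync = DateTime.MinValue;
                while (true)
                {
                    DateTime now = DateTime.UtcNow; // Adjust for local time zone as needed.
                    if (now.Hour == 22 && lastSync.Date != now.Date)
                    {
                        Sync(); // Propagate changes between the SQL Azure databases.
                        lastSync = now;
                    }
                    Thread.Sleep(TimeSpan.FromMinutes(10)); // Avoid a busy loop.
                }
            }

            private void Setup() { /* Sync Framework provisioning goes here. */ }
            private void Sync()  { /* Sync Framework change propagation goes here. */ }
        }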

    Read the article

  • Free Windows Application Blocker/Monitor

    - by Click Ok
    I want a free application monitor that, when it detects certain keywords in a window title (for example), closes the application (or prevents it from opening/installing). A nice extra would be if the program also logged application activity and the internet sites accessed by any browser. Thank you very much! PS: I'm using Windows 7 Ultimate.
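    Not a tool recommendation, but for anyone curious about the mechanism such a blocker typically uses, below is a minimal, hypothetical C# sketch: it enumerates top-level windows via the Win32 API, matches their titles against a keyword, and kills the owning process. The keyword and the kill-on-match policy are illustrative assumptions, not a finished product.

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Text;

        class TitleBlocker
        {
            delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern bool EnumWindows(EnumWindowsProc lpEnumFunc, IntPtr lParam);

            [DllImport("user32.dll", CharSet = CharSet.Unicode)]
            static extern int GetWindowText(IntPtr hWnd, StringBuilder lpString, int nMaxCount);

            [DllImport("user32.dll")]
            static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

            static void Main()
            {
                string keyword = "poker"; // Illustrative keyword to block.
                EnumWindows((hWnd, lParam) =>
                {
                    var title = new StringBuilder(256);
                    GetWindowText(hWnd, title, title.Capacity);
                    if (title.ToString().IndexOf(keyword, StringComparison.OrdinalIgnoreCase) >= 0)
                    {
                        uint pid;
                        GetWindowThreadProcessId(hWnd, out pid);
                        try { Process.GetProcessById((int)pid).Kill(); }
                        catch { /* Process already exited or access denied. */ }
                    }
                    return true; // Continue enumerating windows.
                }, IntPtr.Zero);
            }
        }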

    Read the article

  • How to comply with this guideline for submitting an application to the Software Center?

    - by George Edison
    I was reading through the Ubuntu Developer Programme Agreement for submitting applications to the Software Center and stumbled across the following clause: 3.1 You must first test Apps you submit to confirm they are compatible with all currently supported versions of Ubuntu (as listed on Canonical's website at the date of submission by you) and your Apps must comply with the Publishing Policy. Does this mean I must install both the 32- and 64-bit versions of Ubuntu 8.04, 10.04, 10.10, 11.04, and 11.10? If so, that's 10 installations of Ubuntu; is that really feasible (even with virtual machines)? Alternatively, does anyone have suggestions for testing the application without actually installing each version? Some sort of chroot tool, perhaps?

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links:

    Offline Web Applications
    Firefox Offline Web Applications
    Safari Offline Web Applications

    Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9).

    Why Build an HTML5 Offline Web Application?

    The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane).

    Admittedly, it is becoming increasingly difficult to find locations where you can't get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, image) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache, and these files are cached until the manifest is changed.

    Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated, and you can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically.

    Creating the Manifest File

    The HTML5 Offline Cache uses a manifest file to determine the files that get cached.
    Here's what the manifest file looks like for the JavaScript Reference application:

        CACHE MANIFEST
        # v30
        Default.aspx

        # Standard Script Libraries
        Scripts/jquery-1.4.4.min.js
        Scripts/jquery-ui-1.8.7.custom.min.js
        Scripts/jquery.tmpl.min.js
        Scripts/json2.js

        # App Scripts
        App_Scripts/combine.js
        App_Scripts/combine.debug.js

        # Content (CSS & images)
        Content/default.css
        Content/logo.png
        Content/ui-lightness/jquery-ui-1.8.7.custom.css
        Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png
        Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png
        Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png
        Content/ui-lightness/images/ui-icons_222222_256x240.png
        Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png
        Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png
        Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png
        Content/ui-lightness/images/ui-icons_ffffff_256x240.png
        Content/ui-lightness/images/ui-icons_ef8c08_256x240.png
        Content/browsers/c8.png
        Content/browsers/es3.png
        Content/browsers/es5.png
        Content/browsers/ff3_6.png
        Content/browsers/ie8.png
        Content/browsers/ie9.png
        Content/browsers/sf5.png

        NETWORK:
        Services/EntryService.svc
        http://superexpert.com/resources/JavaScriptReference/

    A Cache Manifest file always starts with the line of text CACHE MANIFEST. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, the jQuery library, the jQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section of the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF Service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference.

    There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file) then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file then you need to modify the comment to be # v31 in order for the new file to be downloaded.

    When Are Updated Files Downloaded?

    When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background.
    This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won't appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache, and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience.

    Offline Web Applications versus Local Storage

    Be careful to not confuse the HTML5 Offline Web Application feature and the HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache.

    Creating a Manifest File in an ASP.NET Application

    A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project:

    Manifest.txt – This text file contains the actual manifest file.
    Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest.

    Here's the code for the generic handler:

        using System.Web;

        namespace JavaScriptReference {
            public class Manifest : IHttpHandler {
                public void ProcessRequest(HttpContext context) {
                    context.Response.ContentType = "text/cache-manifest";
                    context.Response.WriteFile(context.Server.MapPath("Manifest.txt"));
                }
                public bool IsReusable {
                    get { return false; }
                }
            }
        }

    The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this:

        <html manifest="Manifest.ashx">

    Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested.

    Seeing the Offline Web Application in Action

    The experience of using an HTML5 Offline Web Application is different with different browsers. When you first open the JavaScript Reference application with Firefox, you get the following warning: Notice that you are provided with the choice of whether you want to use the application offline or not. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice; Chrome and Safari will create an offline cache automatically. If you click the Allow button then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar. You can view the actual items being cached by clicking the List Cache Entries link. The Offline Web Application experience is different in the case of Google Chrome.
    You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache. Notice that you can view the status of the Application Cache. In the screen shot above, the status is UNCACHED, which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard:

    UNCACHED – The Application Cache has not been initialized.
    IDLE – The Application Cache is not currently being updated.
    CHECKING – The Application Cache is being fetched and checked for updates.
    DOWNLOADING – The files in the Application Cache are being updated.
    UPDATEREADY – There is a new version of the Application.
    OBSOLETE – The contents of the Application Cache are obsolete.

    Summary

    In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user's computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere, regardless of whether or not they are connected to the Internet.
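    As a side note on the cache-busting discussion above: since the whole manifest must change before browsers re-download anything, one hedged variation of the Manifest.ashx pattern is to emit the version comment dynamically. The sketch below is an assumption, not part of the original application; in particular, it assumes the file list has been split out into a hypothetical ManifestBody.txt (without the CACHE MANIFEST line), so that rebuilding the project bumps the assembly version and invalidates the offline cache automatically.

        using System.Web;

        namespace JavaScriptReference {
            // Hypothetical variation: writes the CACHE MANIFEST header plus a
            // version comment derived from the assembly version, then appends
            // the file list. A new build changes the comment, which forces
            // every browser to re-download the cached files.
            public class VersionedManifest : IHttpHandler {
                public void ProcessRequest(HttpContext context) {
                    context.Response.ContentType = "text/cache-manifest";
                    string version = typeof(VersionedManifest).Assembly.GetName().Version.ToString();
                    context.Response.Write("CACHE MANIFEST\n");
                    context.Response.Write("# build " + version + "\n");
                    context.Response.WriteFile(context.Server.MapPath("ManifestBody.txt"));
                }
                public bool IsReusable { get { return false; } }
            }
        }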

    Read the article

  • Windows Azure Learning Plan - Compute

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the "compute" function of Windows Azure, which includes Configuration Files, the Web Role, the Worker Role, and the VM Role. There is a general programming guide for Windows Azure that you can find here to help with the overall process.

    Configuration Files

    Configuration Files define the environment for a Windows Azure application, similar to an ASP.NET application. This section explains how to work with these.

    General Introduction and Overview – http://blogs.itmentors.com/bill/2009/11/04/configuration-files-and-windows-azure/
    Service Definition File Schema – http://msdn.microsoft.com/en-us/library/ee758711.aspx
    Service Configuration File Schema – http://msdn.microsoft.com/en-us/library/ee758710.aspx

    Windows Azure Web Role

    The Web Role runs code (such as ASP pages) that requires a User Interface.

    Web Role "Boot Camp" Video – https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470854&CountryCode=US
    Web Role Deployment Checklist – http://blogs.infragistics.com/blogs/anton_staykov/archive/2010/06/30/windows-azure-web-role-deployment-checklist.aspx
    Using a Web Role as a Worker Role for Small Applications – http://www.31a2ba2a-b718-11dc-8314-0800200c9a66.com/2010/12/how-to-combine-worker-and-web-role-in.html

    Windows Azure Worker Role

    The Worker Role is used for code that does not require a direct User Interface.

    Worker Role "Boot Camp" Video – https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470871&CountryCode=US
    Worker Roles versus Web Roles – http://msdn.microsoft.com/en-us/library/gg433012.aspx
    Deploying other applications (like Java) in a Windows Azure Worker Role – http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx

    Windows Azure VM Role

    The Windows Azure VM Role is an Operating System-level mechanism for code deployment.

    VM Role Overview and Details – http://msdn.microsoft.com/en-us/library/gg465398.aspx
    The proper use of the VM Role – http://blogs.msdn.com/b/buckwoody/archive/2010/12/28/the-proper-use-of-the-vm-role-in-windows-azure.aspx

    Read the article

1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >