Search Results

Search found 5953 results on 239 pages for 'customer stories'.

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency I am looking for. One product that I have recently come to learn about – the Red Gate ANTS Performance (CLR) and Memory Profilers – gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS Memory Profiler by compiling some hideous example code to test against.

    Notice: As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction – Part 2: In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler. Separate from the Performance Profiler is the Red Gate ANTS Memory Profiler – a simple, easy-to-use utility for checking how your application is handling memory management… a tool that I wish I had had many times in the past. This post will be focusing on the ANTS Memory Profiler and its tool set. The Memory Profiler has a large assortment of features just like the Performance Profiler, with the new-session screen looking nearly identical.

    ANTS Memory Profiler: Memory profiling is not something that I have to do very often… In the past, in the few cases where I've had to find a memory leak in an application, I have usually just had to trace the code of the operations being performed to look for oddities… Sadly, I have come across more undisposed/non-using'ed IDisposable objects (usually from ADO.NET) than I would ever like to see. Support is not fun; however, using ANTS Memory Profiler makes this task easier. For this round of testing, I am going to use the same code from my previous example, using the WPF application. This time, I will choose the 'Profile Memory' option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project, and then launches the ANTS Memory Profiler to help. It prepopulates all of the fields with the current project information, and all I have to do is select the 'Start Profiling' option. When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler. You start by getting to the point in your application that you want to profile, and then taking a 'Memory Snapshot'. This performs a full garbage collection and snapshots the managed heap. Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot; however, this is just a snapshot.
    The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, and then take another snapshot and perform a diff between them to see what has changed. I am going to go ahead and generate 5000 primes, and then take another snapshot: As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last – information such as the difference in memory usage, fragmentation, class usage, etc… If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots.

    If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS. When you first do the comparison, you start on the Summary screen. You can either use the charts at the bottom, or switch to the class list screen to get to the next step. Here is the class list screen: As you can see, it lists information about all of the instances between the snapshots, and at the bottom it gives you a way to filter by telling ANTS what your problem is. I am going to go ahead and select the Int16[] class to look at the Instance Categorizer. Using the Instance Categorizer, you can travel backwards to see where all of the instances are coming from. It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control. This image will probably be even harder to see; however, using the 'Instance Retention Graph', you can trace an object's memory inheritance up the chain to see its roots as well. This is a simple example, as this is simply a known element. Usually you would be profiling an actual problem and comparing those differences. I know in the past I spotted a problem where a new context was created per page load, and it was rooted into the application through an event. As the application began to grow, performance and reliability problems started to emerge. A tool like this would have been a great way to identify the problem quickly.

    Overview: Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.

    3 Biggest Pros:
    - Easy-to-use interface with lots of options for configuring the profiling session
    - Intuitive and helpful interface for drilling down from summary, to instance, to root graphs
    - ANTS provides an API for controlling the profiler. Not many options, but still helpful.

    2 Biggest Cons:
    - Inability to automatically snapshot the memory by interval
    - Lack of complete integration with Visual Studio via an extension panel

    Ratings:
    Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling. I especially liked how it stepped you further into the drilldown by directing you towards the best options.
    Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.
    Features (7/10) – A really great set of features all around in the application; however, I would like to see some ability to automatically trigger snapshots based on intervals or framework-level items such as events.
    Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!
    UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
    Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel. Thank you for reading up to here, or skipping ahead – I told you it would be shorter! Please, if you do try the product, drop me a message and let me know what you think! I would love to hear any opinions you may have on the product.

    Code: Feel free to download the code I used above – download via DropBox
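    On the API point above: the review doesn't show the snapshot API itself, so the names below are purely hypothetical placeholders, NOT the real Red Gate ANTS API. But the interval-based snapshotting listed as a missing con could in principle be layered on top of any programmatic snapshot call along these lines:

    using System;
    using System.Threading;

    // "IProfilerController" and "TakeSnapshot" are invented placeholder names,
    // not the actual ANTS Memory Profiler API - this is a sketch of the idea only.
    public interface IProfilerController
    {
        void TakeSnapshot();
    }

    public sealed class IntervalSnapshotter : IDisposable
    {
        private readonly Timer _timer;

        public IntervalSnapshotter(IProfilerController profiler, TimeSpan interval)
        {
            // Trigger a snapshot every 'interval' until disposed.
            _timer = new Timer(_ => profiler.TakeSnapshot(), null, interval, interval);
        }

        public void Dispose()
        {
            _timer.Dispose();
        }
    }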

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term "On-Premise" to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.

    Tools Overview: Firstly let's look at the design-time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and the BPM Studio.

    The SOA Composite Editor: As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full XML source view (read-only), it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process then be aware that it does support MDS-based customization layers, just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context then you should include switches to fork the flows in your custom BPEL process.

    Figure 1 – A BPEL process in Composite Editor

    The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.

    1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the 'Check for Updates' help menu option.
    2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
    3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
    4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option 'SOA Archive into SOA Project'.
    5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
    6. Make the customizations and save the application project.
    7. Finally redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.
    The Business Process Management (BPM) Suite: In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and in the majority of cases the tooling remains as applicable as its developer-orientated sibling. At the current time business processes must be explicitly coded to support just one of these use cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.

    Figure 2 – A BPM Process in JDeveloper BPM Suite

    BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those of BPEL. The steps to deploy a custom BPM process are also essentially much the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partnerlink from a BPEL process. For more details see the references below.

    Business Analyst Tooling: In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high cost of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.

    Figure 3 – Business Process Composer showing a CRM process flow

    The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required – simply drag, drop, and configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
    Further Reading:
    - For BPEL – Fusion Applications Extensibility Guide – Section 12
    - For BPM – Fusion Applications Extensibility Guide – Section 7
    - The product-specific documentation and implementation guides for Fusion Applications
    - Fusion Middleware Developer's Guide for SOA Suite
    - Modeling and Implementation Guide for Oracle Business Process Management
    - User's Guide for Oracle Business Process Composer
    - Oracle University courses on BPM Suite and SOA Development

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I'd like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was "BI From A-Z", and its main objective was to introduce people to the BI world not only from a technical point of view, but also to insist a lot on the methodological and "engineered" approach. The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over these last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines "just" by putting people together and giving them some rules to follow. (Of course this is an oversimplification, but I think you get what I mean.) The key point of engineering is that, after having defined the project blueprint, you have the possibility to give a huge number of people the rules to follow, the correct tools to implement those rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? That is a very big question, so my scope is for now limited to BI.

    So that's the main point of my workshop: an entry-level approach to BI (the level was 200) that allows attendees to learn the basics, to understand which tools they should use for which purpose and, above all, a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All this done not by focusing only on practice, but by explaining the theory behind it, to see how it can help *a lot* to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if the technology changes. This is a very demanding challenge nowadays with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to the feedback, since you may have noticed that there's a lot of "architectural" stuff in IT but really nothing on "engineering". So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more "engineering" is something only I feel, or something broader. I'm very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200 sessions)! Here are the detailed evaluations (each answer followed by its number of responses):

    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75
    3: 1    4: 12    5: 42

    How would you rate the Speaker's presentation skills? 4.80
    3: 1    4: 9    5: 45

    How would you rate the Speaker's knowledge of the subject? 4.95
    4: 3    5: 52

    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75
    3: 2    4: 10    5: 43

    How would you rate the amount of time allocated to cover the topic/session? 4.44
    3: 7    4: 17    5: 31

    How would you rate the quality of the presentation materials? 4.62
    4: 21    5: 34

    The comments were all very positive.
    Many of them asked for more time on the subject (or to shorten the very last topics). I will take these comments to heart and review the content accordingly. We'll organize a two-day class on this topic, where more examples will be shown and some topics will be explained more deeply. I'd just like to answer a comment that asks how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules, and they have been applied in different markets (insurance, fashion, GDO) with different people and different teams, and they allowed us to be "adaptive" toward the customer. The better defined the rules are, and the more tools there are that support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car. How come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements. Last but not least: thanks a lot to everyone for coming!

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed key enterprise requirements such as "broad and deep solutions", "running your mission critical applications" and "automated and integrated set of capabilities". Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.

    Cloud Elasticity and Enterprise Applications Requirements: Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for mid-tier "stateless" servers, such as web application servers and web servers, that just need another machine to start and run on - but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless", and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where cloud meets grid computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "cloud scale-out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters, Automatic Storage Management, etc., on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, applications just talk to a single database, and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
    For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

    Assemble, Deploy and Manage Enterprise Applications for Cloud: Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, performance and availability monitoring, etc., are also automated using Oracle Enterprise Manager. An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of the usage cost via a "showback" or a "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and set up, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds, as well as infrastructure, platform and application clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

    Summary: The intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways:

    - It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
    - Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
    - Don't make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
    - Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to while using a public cloud service.
    - Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components, and even more in maintaining it.

    Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

  • Automatic Maintenance Jobs in every PDB? New SPM Evolve Advisor Task in Oracle 12.1.0.2

    - by Mike Dietrich
    A customer checking out our slides from the OTN Tour in August 2014 asked me a finicky question the other day: "According to the documentation the Automatic SQL Tuning Advisor maintenance task gets executed only within the CDB$ROOT, but not within each PDB - but the slides are not clear here. So what is the truth?"

    Ok, that's a good question. In my understanding all tasks get executed within each PDB - that's why we recommend (based on experience) breaking up the default maintenance windows when using Oracle Multitenant. Otherwise all PDBs will have the same maintenance windows, and guess what will happen when 25 PDBs start gathering object statistics at the same time ... The documentation indeed says:

    "Automatic SQL Tuning Advisor data is stored in the root. It might have results about SQL statements executed in a PDB that were analyzed by the advisor, but these results are not included if the PDB is unplugged. A common user whose current container is the root can run SQL Tuning Advisor manually for SQL statements from any PDB. When a statement is tuned, it is tuned in any container that runs the statement."

    This sounds reasonable. But when we have a look into our PDBs, or into the CDB_AUTOTASK_CLIENT view, the result is different from what the doc says. In my environment I created just two fresh, empty PDBs (CON_ID 3 and 4):

    SQL> select client_name, status, con_id from cdb_autotask_client;

    CLIENT_NAME                           STATUS         CON_ID
    ------------------------------------- ---------- ----------
    auto optimizer stats collection       ENABLED             1
    sql tuning advisor                    ENABLED             1
    auto space advisor                    ENABLED             1
    auto optimizer stats collection       ENABLED             4
    sql tuning advisor                    ENABLED             4
    auto space advisor                    ENABLED             4
    auto optimizer stats collection       ENABLED             3
    sql tuning advisor                    ENABLED             3
    auto space advisor                    ENABLED             3

    9 rows selected.

    I haven't verified the reason why this is different from the docs, but it may be related to one change in Oracle Database 12.1.0.2: the new SPM Evolve Advisor Task (SYS_AUTO_SPM_EVOLVE_TASK) for automatic plan evolution for SQL Plan Management. This new task doesn't appear as a stand-alone job (client) in the maintenance window, but runs as a sub-entity of the Automatic SQL Tuning Advisor task. And (I'm just guessing) this may be one of the reasons why every PDB has to have its own Automatic SQL Tuning Advisor task.

    Here you'll find more information about how to enable, disable and configure the new Oracle 12.1.0.2 SPM Evolve Advisor Task: Oracle Database 12.1.0.2 SQL Tuning Guide: Managing the SPM Evolve Advisor Task

    -Mike
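    For a quick look at how that task is configured, the advisor views and the DBMS_SPM package can be used - a minimal sketch, assuming suitable privileges in the appropriate container:

    -- Inspect the parameters of the automatic SPM Evolve Advisor task
    SELECT parameter_name, parameter_value
      FROM dba_advisor_parameters
     WHERE task_name = 'SYS_AUTO_SPM_EVOLVE_TASK';

    -- Allow the task to accept verified plans automatically
    BEGIN
      DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
        task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
        parameter => 'ACCEPT_PLANS',
        value     => 'TRUE');
    END;
    /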

  • What's up with LDoms: Part 5 - A few Words about Consoles

    - by Stefan Hinker
    Back again to look at a detail of LDom configuration that is often forgotten - the virtual console server. Remember, LDoms are SPARC systems. As such, each guest will have its own OBP running. And to connect to that OBP, the administrator will need a console connection. Since it's OBP, and not some x86 BIOS, this console will be very serial in nature ;-) It's really very much like in the good old days, where we had a terminal concentrator where all those serial cables ended up. Just like with other components in LDoms, the virtualized solution looks very similar.

    Every LDom guest requires exactly one console connection. Envision this as similar to the RS-232 port on older SPARC systems. The LDom framework provides one or more console services that provide access to these connections. This would be the virtual equivalent of a network terminal server (NTS), where all those serial cables are plugged in. In the physical world, we'd have a list somewhere that would tell us which TCP port of the NTS was connected to which server. "ldm list" does just that:

    root@sun # ldm list
    NAME      STATE     FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active    -n-cv-  UART   16    7680M   0.4%  27d 8h 22m
    jupiter   bound     ------  5002   20    8G
    mars      active    -n----  5000   2     8G      0.5%  55d 14h 10m
    venus     active    -n----  5001   2     8G      0.5%  56d 40m
    pluto     inactive  ------         4     4G

    The column marked "CONS" tells us where to reach the console of each domain. In the case of the primary domain, this is actually a (more) physical connection - it's the console connection of the physical system, which is either reachable via the ILOM of that system, or directly via the serial console port on the chassis. All the other guests are reachable through the console service which we created during the initial setup of the system. Note that pluto does not have a port assigned. This is because pluto is not yet bound. (Binding can be viewed very much as the assembly of computer parts - CPU, memory, disks, network adapters and a serial console cable are all put together when binding the domain.) Unless we set the port number explicitly, LDoms Manager will do this on a first come, first served basis. For just a few domains, this is fine. For larger deployments, it might be a good idea to assign these port numbers manually using the "ldm set-vcons" command. However, there is even better magic associated with virtual consoles. You can group several domains into one console group, reachable through one TCP port of the console service. This can be useful when several groups of administrators are to be given access to different domains, or for other grouping reasons. Here's an example:

    root@sun # ldm set-vcons group=planets service=console jupiter
    root@sun # ldm set-vcons group=planets service=console pluto
    root@sun # ldm bind jupiter
    root@sun # ldm bind pluto
    root@sun # ldm list
    NAME      STATE     FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active    -n-cv-  UART   16    7680M   6.1%  27d 8h 24m
    jupiter   bound     ------  5002   200   8G
    mars      active    -n----  5000   2     8G      0.6%  55d 14h 12m
    pluto     bound     ------  5002   4     4G
    venus     active    -n----  5001   2     8G      0.5%  56d 42m
    root@sun # telnet localhost 5002
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    sun-vnts-planets: h, l, c{id}, n{name}, q:l
    DOMAIN ID  DOMAIN NAME  DOMAIN STATE
    2          jupiter      online
    3          pluto        online
    sun-vnts-planets: h, l, c{id}, n{name}, q:npluto
    Connecting to console "pluto" in group "planets" ....
    Press ~? for control options ..
    What I did here was add the two domains pluto and jupiter to a new console group called "planets" on the service "console" running in the primary domain. Simply using a group name will create such a group, if it doesn't already exist. By default, each domain has its own group, using the domain name as the group name. The group will be available on port 5002, chosen by LDoms Manager because I didn't specify it. If I connect to that console group, I will now first be prompted to choose the domain I want to connect to from a little menu.

    Finally, here's an example of how to assign port numbers explicitly:

    root@sun # ldm set-vcons port=5044 group=pluto service=console pluto
    root@sun # ldm bind pluto
    root@sun # ldm list
    NAME      STATE     FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active    -n-cv-  UART   16    7680M   3.8%  27d 8h 54m
    jupiter   active    -t----  5002   200   8G      0.5%  30m
    mars      active    -n----  5000   2     8G      0.6%  55d 14h 43m
    pluto     bound     ------  5044   4     4G
    venus     active    -n----  5001   2     8G      0.4%  56d 1h 13m

    With this, pluto would always be reachable on port 5044 in its own exclusive console group, no matter in which order other domains are bound. Now, you might be wondering why we always have to mention the console service name, "console", in all the examples here. The simple answer is because there could be more than one such console service. For all "normal" use, a single console service is absolutely sufficient. But the system is flexible enough to allow more than that single one, should you need them. In fact, you could even configure such a console service on a domain other than the primary (or control) domain, which would make that domain a real console server. I actually have a customer who does just that - they want to separate console access from the control domain functionality. But this is definitely a rather sophisticated setup. Something I don't want to go into in this post is access control. vntsd, which is the daemon providing all these console services, is fully RBAC-aware, and you can configure authorizations for individual users to connect to console groups or individual domains' consoles. If you can't wait until I get around to security, check out the man page of vntsd.

    Further reading: The Admin Guide is rather reserved on this subject. I do recommend checking out the Reference Manual. The man page for vntsd discusses all the control sequences as well as the grouping and authorizations mentioned here.

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as "Database as a Service". In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and as often as I get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don't deny vanilla automation could be useful. I like virtualization too, for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

    1. Does it make life easier for your end users?

    Database-as-a-Service can have several types of end users - junior DBAs, QA engineers, developers - each having their own skill set. The objective of DBaaS is to make their lives simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code, and not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

    2. Does it make life easier for your administrators?

    Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of versions and patch levels and thousands of individual databases? The right question to ask, therefore, is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare.
    That leads us to our next question.

    3. Does it satisfy your Security Officers and Compliance Auditors?

    Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is, flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a 'one size fits all' approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, and not by putting compliance considerations on the back burner.

    4. Does it satisfy your CIO?

    Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service-level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. At Oracle, we always emphasize freedom in choosing a platform; hence Enterprise Manager's DBaaS solution is platform-neutral. It can work on any operating system (that the agent is certified on), on Oracle's hardware as well as 3rd-party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I'm building a web application for a customer, and there is a requirement that users must be able to import data in different formats. Today we support XLSX and ODF as import formats, and some other formats are waiting. I wanted to be able to add new importers on the fly, so I don't have to deploy the web application again when I add a new importer or change an existing one. In this posting I will show you how to build generic importer support into your web application.

    Importer interface: All importers we use must have something in common so we can easily detect them. To keep things simple I will use an interface here.

    public interface IMyImporter
    {
        string[] SupportedFileExtensions { get; }
        ImportResult Import(Stream fileStream, string fileExtension);
    }

    Our interface has the following members:
    - SupportedFileExtensions – string array of the file extensions that the importer supports. This property helps us find out which import formats are available and which importer to use with a given format.
    - Import – method that does the actual importing work. Besides the file we pass in as a stream, we also give the file extension so the importer can decide how to handle the file.

    It is enough to get started. When building real importers I am sure you will switch over to an abstract base class.

    Importer class: Here is a sample importer that imports data from Excel and Word documents. The importer class with no implementation details looks like this:

    public class MyOpenXmlImporter : IMyImporter
    {
        public string[] SupportedFileExtensions
        {
            get { return new[] { "xlsx", "docx" }; }
        }

        public ImportResult Import(Stream fileStream, string extension)
        {
            // ...
        }
    }

    Finding supported import formats in the web application: Now we have importers created and it's time to add them to the web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method, which uses reflection to find all classes that implement our IMyImporter interface.

    private static string[] GetImporterFileExtensions()
    {
        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        var extensions = new Collection<string>();
        foreach (var type in types)
        {
            var instance = (IMyImporter)type.InvokeMember(null,
                           BindingFlags.CreateInstance, null, null, null);

            foreach (var extension in instance.SupportedFileExtensions)
            {
                if (extensions.Contains(extension))
                    continue;

                extensions.Add(extension);
            }
        }

        return extensions.ToArray();
    }

    This code doesn't look nice and is far from optimal, but it works for us for now. It is possible to improve the performance of the web application if we cache extensions and their corresponding types in some static dictionary. We have to fill it only once, because our application is restarted when something changes in the bin folder.

    Finding an importer by extension: When a user uploads a file, we need to detect the extension of the file and find the importer that supports the given extension. We add another method to our page or controller that uses reflection to return an importer instance, or null if the extension is not supported.
    private static IMyImporter GetImporterForExtension(string extensionToFind)
    {
        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        foreach (var type in types)
        {
            var instance = (IMyImporter)type.InvokeMember(null,
                           BindingFlags.CreateInstance, null, null, null);

            if (instance.SupportedFileExtensions.Contains(extensionToFind))
            {
                return instance;
            }
        }

        return null;
    }

    Here is an example ASP.NET MVC controller action that accepts an uploaded file, finds the importer that can handle the file, and imports data. Again, this is sample code I kept minimal to better illustrate how things work.

    public ActionResult Import(MyImporterModel model)
    {
        var file = Request.Files[0];
        var extension = Path.GetExtension(file.FileName).ToLower();
        var importer = GetImporterForExtension(extension.Substring(1));
        var result = importer.Import(file.InputStream, extension);

        if (result.Errors.Count > 0)
        {
            foreach (var error in result.Errors)
                ModelState.AddModelError("file", error);

            return Import();
        }

        return RedirectToAction("Index");
    }

    Conclusion: That's it. Using a couple of ugly methods and one simple interface, we were able to add importer support to our web application. The example code here is not perfect, but it works. It is possible to cache the mappings between file extensions and importer types in some static variable, because a change to these mappings means that something has changed in the bin folder of the web application, and the web application is restarted in that case anyway.
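    As a follow-up to the caching idea mentioned in the conclusion, here is a minimal sketch of what such a static extension-to-type dictionary could look like. The class and member names are illustrative, not from the original posting, and it assumes the usual using directives (System, System.Collections.Generic, System.Linq):

    public static class ImporterRegistry
    {
        // Built once per application lifetime; the application restarts when
        // the bin folder changes, so the cache cannot go stale.
        private static readonly Dictionary<string, Type> _map = BuildMap();

        private static Dictionary<string, Type> BuildMap()
        {
            var map = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);

            var types = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(a => a.GetTypes())
                .Where(t => t.GetInterfaces().Contains(typeof(IMyImporter))
                            && !t.IsAbstract && !t.IsInterface);

            foreach (var type in types)
            {
                var instance = (IMyImporter)Activator.CreateInstance(type);
                foreach (var extension in instance.SupportedFileExtensions)
                    map[extension] = type;   // last registration wins
            }

            return map;
        }

        public static IMyImporter GetImporterForExtension(string extension)
        {
            Type type;
            return _map.TryGetValue(extension, out type)
                ? (IMyImporter)Activator.CreateInstance(type)
                : null;
        }
    }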

  • Loop through values and display in a pdf file

    - by chupinette
    Hello all! I have written the following code. As you can see, there is a for loop to go through some values and display them in the generated PDF. The problem is that all the values are being written at the same place. I have tried to insert a new line but it does not seem to work. Can anyone suggest how I can do it? Do I need to write a nested for loop so that it writes the values at different y positions?

    $pdf = pdf_new();
    // open a file
    pdf_open_file($pdf, "C:/xampp/htdocs/final/6.pdf");
    pdf_set_info($pdf, "Author", "");
    pdf_set_info($pdf, "Title", "");
    pdf_set_info($pdf, "Creator", "");
    pdf_set_info($pdf, "Subject", "");

    // start a new page (A4)
    $x = 595;
    $y = 842;
    pdf_begin_page($pdf, $x, $y);
    pdf_set_parameter($pdf, 'FontOutline', 'Arial=c:\windows\fonts\arial.ttf');
    pdf_setcolor($pdf, "stroke", "rgb", 0, 0, 0, 1.0);

    // get and use a font object
    $font = pdf_findfont($pdf, "Arial", "host", 1);
    pdf_setfont($pdf, $font, 10);

    // print text
    pdf_show_xy($pdf, "QUOTATION", 250, $y - 60);
    pdf_show_xy($pdf, "Customer Name: " . $this->customer_details['first_name'] . " " . $this->customer_details['last_name'], 50, 770);
    pdf_show_xy($pdf, "Date: " . date("F j, Y, g:i a"), 50, 750);
    pdf_show_xy($pdf, "Number of items requested: " . $count_items_req, 50, 730);
    pdf_show_xy($pdf, "Number of items found: " . $count_items_found, 50, 710);

    // add an image under the text
    $image = PDF_load_image($pdf, "png", "C:/xampp/htdocs/final/images/footer_logo.png", "");
    PDF_fit_image($pdf, $image, 50, 785, "");
    pdf_moveto($pdf, 20, 780);
    pdf_lineto($pdf, 575, 780);
    pdf_stroke($pdf);

    // draw another line near the bottom of the page
    pdf_moveto($pdf, 20, 50);
    pdf_lineto($pdf, 575, 50);
    pdf_stroke($pdf);

    // draw the lines
    $offset = 184;
    $i = 0;
    pdf_moveto($pdf, 20, $y - 160);
    pdf_lineto($pdf, $x - 20, $y - 160);
    pdf_stroke($pdf);
    pdf_moveto($pdf, $x - 400, $y - 160);
    pdf_lineto($pdf, $x - 400, 80);
    pdf_stroke($pdf);
    pdf_moveto($pdf, $x - 200, $y - 160);
    pdf_lineto($pdf, $x - 200, 80);
    pdf_stroke($pdf);
    pdf_moveto($pdf, $x - 100, $y - 160);
    pdf_lineto($pdf, $x - 100, 80);
    pdf_stroke($pdf);
    pdf_continue_text($pdf, '');
    pdf_continue_text($pdf, '');
    pdf_show_xy($pdf, "Searched Item", 70, $y - 150);
    pdf_show_xy($pdf, "Searched Item", 70, $y - 150);
    pdf_show_xy($pdf, "Item name", 240, $y - 150);
    pdf_show_xy($pdf, "Item name", 240, $y - 150);
    pdf_show_xy($pdf, "Price", $x - 180, $y - 150);
    pdf_show_xy($pdf, "Price", $x - 180, $y - 150);
    pdf_show_xy($pdf, "Discounted Price", $x - 100, $y - 150);
    pdf_show_xy($pdf, "Discounted Price", $x - 100, $y - 150);

    for ($i = 0; $i < count($this->quotation_details); $i++) {
        pdf_show_xy($pdf, $this->quotation_details[$i]['name_searched'], 70, $y - 500);
    }

    // and write some text under it
    pdf_show_xy($pdf, "", 250, 35);

    // end page
    pdf_end_page($pdf);

    // close and save file
    pdf_close($pdf);
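    For what it's worth, the usual fix for this kind of problem is to make the y coordinate depend on the loop counter rather than nesting loops. A minimal, untested sketch against the same PDFlib-style functions used above (the 20-point row height is an assumption):

    $rowY = $y - 500;    // y position of the first row
    $rowHeight = 20;     // assumed vertical spacing between rows
    for ($i = 0; $i < count($this->quotation_details); $i++) {
        // each iteration writes one row further down the page
        pdf_show_xy($pdf, $this->quotation_details[$i]['name_searched'],
                    70, $rowY - ($i * $rowHeight));
    }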

  • Edit and render RichText

    - by OregonGhost
    We have an application (a custom network management tool for building automation) that supports printing labels that you can cut out and insert into the devices' front displays. In earlier versions of the tool (not developed in my company), the application just pushed the strings into an Excel file that the field technician could then manipulate (like formatting text). We didn't do this in the new version because it was hard (impossible) to keep the Excel file in sync, and to avoid a binding to an external application (let alone different versions of Excel). We're using PDFSharp for rendering the labels. It has a System.Drawing-like interface, but can output to a System.Drawing.Graphics (screen / printer) as well as to a PDF file, which is a requirement.

    Later, basic formatting was introduced, like font family, style, size and color, which would apply to one label (i.e. to exactly one string). Now the customer wants to be able to apply these formats to single characters in a string. I think the easiest way would be to support a subset of RichText. It's not as easy as I thought, though. Currently the editor just displays a TextBox for the label you want to edit, with the font set to the label's font. I thought I'd just replace it with a RichTextBox and update the formatting buttons to use the RichTextBox formatting properties. Fairly easy. However, I need to draw the text. I know you can get the RichTextBox to draw to an HDC or System.Drawing.Graphics - but as already said, I need it to use PDFSharp. Rendering to bitmaps is not an option, since the PDF must not be huge, and it's a lot of labels. Unfortunately I couldn't get the RichTextBox to tell me the layout of the text - I'm fine with doing the actual rendering by hand, as long as I know where to draw what.

    This is the first question: How can I get the properly laid-out metrics of the rich text out of a RichTextBox? Or is there any way to convert the rich text to a vector graphics format that can be easily drawn manually?

    I know about NRTFTree, which can be used to parse and manipulate RichText. The documentation is bad (actually I don't know, it's Spanish), but I think I can get it to work. As far as I understood, it won't provide layout either. Because of this, I think I'll have to write a custom edit control (remember, it's basically just one- or two-line labels with basic RTF formatting, not a full-fledged edit control - more like editing a textbox in PowerPoint) and write custom text layout logic that uses PDFSharp rather than System.Drawing for drawing. Is there any existing, even if partial, solution available, either for the editing or for doing the layout manually (or both)? Or is there an entirely different approach I'm just not seeing?

    Bonus points if exporting the label texts as RTF into a CSV file, and then importing in Excel, retains the formatting. For the editing part, I need it to work in Windows Forms. Other than that it's not Windows-Forms-related, I think.

  • More information wanted on error: CREATE ASSEMBLY for assembly failed because assembly failed verification

    - by turnip.cyberveggie
    I have a small application that uses SQL Server 2005 Express with CLR stored procedures. It has been successfully installed and runs on many computers running XP and Vista. To create the assembly the following SQL is executed (names changed to protect the innocent):

    CREATE ASSEMBLY myAssemblyName FROM 'c:\pathtoAssembly\myAssembly.dll'

    On one computer (a test machine that reflects other computers targeted for installation) that is running Vista and has some very aggressive security policy restrictions, I receive the following error:

    << Start Error Message
    Msg 6218, Level 16, State 2, Server domain\servername, Line 2
    CREATE ASSEMBLY for assembly 'myAssembly' failed because assembly 'myAssembly' failed verification. Check if the referenced assemblies are up-to-date and trusted (for external_access or unsafe) to execute in the database. CLR Verifier error messages if any will follow this message
    [ : myProcSupport.Axis::Proc1][mdToken=0x6000004] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc2][mdToken=0x6000005] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc3][mdToken=0x6000006] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::.ctor][mdToken=0x600000a] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc4][mdToken=0x6000001] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc5][mdToken=0x6000002] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc6][mdToken=0x6000007] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc7][mdToken=0x6000008] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc8][mdToken=0x6000009] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc8][mdToken=0x600000b] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.
    [ : myProcSupport.Axis::Proc9][mdToken=0x600000c] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation....
    << End Error Message

    The C# DLL is defined as "Safe" as it only uses data contained in the database. The DLL is not normally signed, but I provided a signed version to test and received the same results. The installation is being done by someone else, and I don't have access to the box, but they are executing scripts that I provided and that work on other computers. I have tried to find information about this error beyond what the results of the script provide, but I haven't found anything helpful. The person executing the script to create the assembly is logged in with an Admin account, is running CMD as admin, is connecting to the DB via Windows Authentication, has been added to the db_owner role, and has been added to the server role sysadmin, with the hope that it is a permissions issue. This hasn't changed anything.

    Do I need to configure SQL Server 2005 Express differently for this environment? Is this error logged anywhere other than just the output from SQLCMD? What could cause this error? Could Vista security policies cause this? I don't have access to the computer (the customer is doing the testing) so I can't examine the box myself. TIA
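    For reference, since the DLL is intended to be "Safe", the permission set can also be stated explicitly on the CREATE ASSEMBLY call; this is standard T-SQL syntax and does not change behavior when SAFE is already the default:

    CREATE ASSEMBLY myAssemblyName
    FROM 'c:\pathtoAssembly\myAssembly.dll'
    WITH PERMISSION_SET = SAFE;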

    Read the article

  • Drawing a WPF UserControl with DataBinding to an Image

    - by LorenVS
    Hey everyone. I'm trying to use a WPF user control to generate a ton of images from a dataset, where each item in the dataset produces one image. I'm hoping I can set it up so that I can use WPF databinding: for each item in the dataset, create an instance of my user control, set the dependency property that corresponds to my data item, and then draw the user control to an image. I'm having problems getting it all working (I'm not sure whether the databinding or the drawing to the image is my problem). Sorry for the massive code dump, but I've been trying to get this working for a couple of hours now, and WPF just doesn't like me (I have to learn at some point, though...). My user control looks like this:

    ```xml
    <UserControl x:Class="Bleargh.ImageTemplate"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 xmlns:c="clr-namespace:Bleargh"
                 x:Name="ImageTemplateContainer"
                 Height="300" Width="300">
      <Canvas>
        <TextBlock Canvas.Left="50" Canvas.Top="50" Width="200" Height="25"
                   FontSize="16" FontFamily="Calibri"
                   Text="{Binding Path=Booking.Customer,ElementName=ImageTemplateContainer}" />
        <TextBlock Canvas.Left="50" Canvas.Top="100" Width="200" Height="25"
                   FontSize="16" FontFamily="Calibri"
                   Text="{Binding Path=Booking.Location,ElementName=ImageTemplateContainer}" />
        <TextBlock Canvas.Left="50" Canvas.Top="150" Width="200" Height="25"
                   FontSize="16" FontFamily="Calibri"
                   Text="{Binding Path=Booking.ItemNumber,ElementName=ImageTemplateContainer}" />
        <TextBlock Canvas.Left="50" Canvas.Top="200" Width="200" Height="25"
                   FontSize="16" FontFamily="Calibri"
                   Text="{Binding Path=Booking.Description,ElementName=ImageTemplateContainer}" />
      </Canvas>
    </UserControl>
    ```

    And I've added a dependency property of type "Booking" to my user control that I'm hoping will be the source for the databound values:

    ```csharp
    public partial class ImageTemplate : UserControl
    {
        public static readonly DependencyProperty BookingProperty =
            DependencyProperty.Register("Booking", typeof(Booking), typeof(ImageTemplate));

        public Booking Booking
        {
            get { return (Booking)GetValue(BookingProperty); }
            set { SetValue(BookingProperty, value); }
        }

        public ImageTemplate()
        {
            InitializeComponent();
        }
    }
    ```

    And I'm using the following code to render the control:

    ```csharp
    List<Booking> bookings = Booking.GetSome();
    for (int i = 0; i < bookings.Count; i++)
    {
        ImageTemplate template = new ImageTemplate();
        template.Booking = bookings[i];

        RenderTargetBitmap bitmap = new RenderTargetBitmap(
            (int)template.Width, (int)template.Height,
            120.0, 120.0, PixelFormats.Pbgra32);
        bitmap.Render(template);

        BitmapEncoder encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bitmap));
        using (Stream s = File.OpenWrite(@"C:\Code\Bleargh\RawImages\" + i.ToString() + ".png"))
        {
            encoder.Save(s);
        }
    }
    ```

    EDIT: I should add that the process runs without any errors whatsoever, but I end up with a directory full of plain white images, no text or anything. And I have confirmed using the debugger that my Booking objects are being filled with the proper data.

    EDIT 2: I did something I should have done a long time ago and set a background on my canvas, but that didn't change the output image at all, so my problem is most definitely somewhere in my drawing code (although there may be something wrong with my databinding too).
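    A likely culprit, offered as a hedged sketch rather than a confirmed fix: a control created in code has never gone through a layout pass, so its ActualWidth/ActualHeight are still 0 and RenderTargetBitmap captures an empty visual. Forcing Measure/Arrange before Render usually changes that:

    ```csharp
    // Hedged sketch, using the same names as the question's code.
    ImageTemplate template = new ImageTemplate();
    template.Booking = bookings[i];

    Size size = new Size(template.Width, template.Height);
    template.Measure(size);
    template.Arrange(new Rect(size));
    template.UpdateLayout();    // let layout (and the bindings) settle

    // 96 DPI keeps one device-independent unit per pixel; at 120 DPI the
    // pixel dimensions would need scaling by 120/96 to avoid clipping.
    RenderTargetBitmap bitmap = new RenderTargetBitmap(
        (int)size.Width, (int)size.Height, 96.0, 96.0, PixelFormats.Pbgra32);
    bitmap.Render(template);
    ```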

    Read the article

  • Spring 3 simple extensionless url mappings with annotation-based mapping - impossible?

    - by caerphilly
    Hi, I'm using Spring 3 and trying to set up a simple web app using annotations to define controller mappings. This seems to be incredibly difficult without peppering all the URLs with *.form or *.do. Because part of the site needs to be password protected, these URLs are all under /secure; there is a <security-constraint> in web.xml protecting everything under that root. I want to map all the Spring controllers to /secure/app/. Example URLs would be /secure/app/landingpage and /secure/app/edit/customer/{id}, each of which I would handle with an appropriate jsp/xml/whatever. So, in web.xml I have this:

    ```xml
    <servlet>
        <servlet-name>dispatcher</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>dispatcher</servlet-name>
        <url-pattern>/secure/app/*</url-pattern>
    </servlet-mapping>
    ```

    And in dispatcher-servlet.xml I have this:

    ```xml
    <context:component-scan base-package="controller" />
    ```

    In the controller package I have a controller class:

    ```java
    package controller;

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;
    import org.springframework.web.servlet.ModelAndView;

    import javax.servlet.http.HttpServletRequest;

    @Controller
    @RequestMapping("/secure/app/main")
    public class HomePageController {

        public HomePageController() { }

        @RequestMapping(method = RequestMethod.GET)
        public ModelAndView getPage(HttpServletRequest request) {
            ModelAndView mav = new ModelAndView();
            mav.setViewName("main");
            return mav;
        }
    }
    ```

    Under /WEB-INF/jsp I have a "main.jsp" and a suitable view resolver set up to point to it. I had things working when mapping the dispatcher using *.form, but I can't get anything working using the above code. When Spring starts up it appears to map everything correctly:

    ```
    13:22:36,762 INFO main annotation.DefaultAnnotationHandlerMapping:399 - Mapped URL path [/secure/app/main] onto handler [controller.HomePageController@2a8ab08f]
    ```

    I also noticed this line, which looked suspicious:

    ```
    13:25:49,578 DEBUG main servlet.DispatcherServlet:443 - No HandlerMappings found in servlet 'dispatcher': using default
    ```

    And at run time, any attempt to view /secure/app/main just returns a 404 error from Tomcat, with this log output:

    ```
    13:25:53,382 DEBUG http-8080-1 servlet.DispatcherServlet:842 - DispatcherServlet with name 'dispatcher' determining Last-Modified value for [/secure/app/main]
    13:25:53,383 DEBUG http-8080-1 servlet.DispatcherServlet:850 - No handler found in getLastModified
    13:25:53,390 DEBUG http-8080-1 servlet.DispatcherServlet:690 - DispatcherServlet with name 'dispatcher' processing GET request for [/secure/app/main]
    13:25:53,393 WARN  http-8080-1 servlet.PageNotFound:962 - No mapping found for HTTP request with URI [/secure/app/main] in DispatcherServlet with name 'dispatcher'
    13:25:53,393 DEBUG http-8080-1 servlet.DispatcherServlet:677 - Successfully completed request
    ```

    So... Spring maps a URL and then "forgets" about that mapping a second later? What is going on? Thanks.
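    One hedged reading of those logs: DefaultAnnotationHandlerMapping registered the handler under the full path /secure/app/main, but inside a servlet mapped to /secure/app/* the dispatcher looks handlers up by the path within the servlet mapping, which is just /main, so nothing matches and Tomcat 404s. A sketch of the controller declared relative to the servlet mapping (the alternative is setting alwaysUseFullPath=true on the handler mapping and adapter):

    ```java
    @Controller
    @RequestMapping("/main")   // resolved as /secure/app/main under the /secure/app/* mapping
    public class HomePageController {

        @RequestMapping(method = RequestMethod.GET)
        public ModelAndView getPage(HttpServletRequest request) {
            ModelAndView mav = new ModelAndView();
            mav.setViewName("main");
            return mav;
        }
    }
    ```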

    Read the article

  • Drools flow architecture, Drools flow events with AND join nodes

    - by Shoukry K
    I have been evaluating a number of frameworks, including jBPM and Drools Flow, for my application's requirements. Lots of the opinions seem inclined towards Drools Flow, as it is more flexible, knowledge-oriented, easier to integrate with business rules, etc. The application is a sort of email campaign manager, where different customers can sign in, prepare (design) and launch email campaigns. The application should be able to do the following:

    1. Receive a list of email addresses, send emails to each of these addresses starting from a certain date and during a certain time interval of the day, do some custom actions, and then wait for reply emails.
    2. If a reply email is received, then depending on the response text and on the time the email was received, certain actions need to happen: web service calls need to take place, with error handling for those calls.
    3. The application will manage and run many different campaigns (different customers and different flows for each customer) at any point in time.

    The first question is: is Drools Flow the way to go about this? My main concerns are scalability, suspending and resuming flows, long waits, and flow management. As you can see from the requirements:

    - There is a scheduling part: certain flows need to run at a certain point in time, and they need to be suspended and then resumed. For example, start sending emails on Dec 1st 2010 and send them only in the time interval between 08:00 and 17:00 GMT. By Dec 2nd it might be that all subscribers have been sent emails, but if not, the process needs to resume and send a second batch; certain users will already have received emails, and they should be able to continue at different stages of the flow.
    - There are long wait states: days or even weeks. I need to persist, suspend/resume and terminate flows (manage flows).
    - External events: this is where I got stuck first. I tried to put together a simple flow (see http://img46.imageshack.us/img46/9620/workflowwithevents.png): a start node is connected to an action node, which is connected to a join. An event node is connected to a second action node, which is connected to the same join. The join is an AND join; after the join there is an action and then the end node.

    Here is the sample code I am using to launch the flow:

    ```java
    KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
    builder.add(ResourceFactory.newClassPathResource("campaign.rf",
            CampaignsDroolsPoc.class), ResourceType.DRF);

    if (builder.hasErrors()) {
        KnowledgeBuilderErrors errors = builder.getErrors();
        Iterator<KnowledgeBuilderError> iterator = errors.iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next().toString());
        }
    }

    KnowledgeBase base = KnowledgeBaseFactory.newKnowledgeBase();
    base.addKnowledgePackages(builder.getKnowledgePackages());
    final StatefulKnowledgeSession ksession = base.newStatefulKnowledgeSession();
    // KnowledgeRuntimeLoggerFactory.newConsoleLogger(ksession);
    ksession.getWorkItemManager().registerWorkItemHandler("Log",
            new SendSMSWorkItemHandler());

    ProcessInstance startProcess = ksession.startProcess("flow");
    System.out.println("Signaling event");
    startProcess.signalEvent("ev1", "ev1");
    System.out.println("Signaled");
    ksession.fireUntilHalt();
    ```

    I am noticing that the event gets triggered and the action node connected to the event gets triggered, but things get stuck at the join: the flow does not continue past the AND join, and the action following it never fires. I also went through the Drools Flow documentation and all the example code, but I didn't find anything there. In addition, any hints about how to architect the solution and implement it would be great.
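    A hedged ordering experiment worth trying before blaming the join itself: a StatefulKnowledgeSession only advances work while it is firing, so signaling the event before fireUntilHalt() may leave the event branch incomplete from the AND join's point of view. Something like:

    ```java
    // Run the engine first (fireUntilHalt blocks, hence its own thread),
    // then start the process and signal while the engine is live.
    new Thread(new Runnable() {
        public void run() {
            ksession.fireUntilHalt();
        }
    }).start();

    ProcessInstance instance = ksession.startProcess("flow");
    instance.signalEvent("ev1", "ev1");
    ```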

    Read the article

  • Active Directory Password Policy Problem

    - by Will
    To Clarify: my question is why isn't my password policy applying to people in the domain. Hey guys, having trouble with our password policy in Active Directory. Sometimes it just helps me to type out what I’m seeing It appears to not be applying properly across the board. I am new to this environment and AD in general but I think I have a general grasp of what should be going on. It’s a pretty simple AD setup without too many Group Policies being applied. It looks something like this DOMAIN Default Domain Policy (link enabled) Password Policy (link enabled and enforce) Personal OU Force Password Change (completely empty nothing in this GPO) IT OU Lockout Policy (link enabled and enforced) CS OU Lockout Policy Accouting OU Lockout Policy The password policy and default domain policy both define the same things under Computer ConfigWindows seetings sec settings Account Policies / Password Policy Enforce password History : 24 passwords remembered Maximum Password age : 180 days Min password age: 14 days Minimum Password Length: 6 characters Password must meet complexity requirements: Enabled Store Passwords using reversible encryption: Disabled Account Policies / Account Lockout Policy Account Lockout Duration 10080 Minutes Account Lockout Threshold: 5 invalid login attempts Reset Account Lockout Counter after : 30 minutes IT lockout This just sets the screen saver settings to lock computers when the user is Idle. After running Group Policy modeling it seems like the password policy and default domain policy is getting applied to everyone. Here is the results of group policy modeling on MO-BLANCKM using the mblanck account, as you can see the policies are both being applied , with nothing important being denied Group Policy Results NCLGS\mblanck on NCLGS\MO-BLANCKM Data collected on: 12/29/2010 11:29:44 AM Summary Computer Configuration Summary General Computer name NCLGS\MO-BLANCKM Domain NCLGS.local Site Default-First-Site-Name Last time Group Policy was processed 12/29/2010 10:17:58 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (15), Sysvol (15) WSUS-52010 NCLGS.local/WSUS/Clients AD (54), Sysvol (54) Password Policy NCLGS.local AD (58), Sysvol (58) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Security Group Membership when Group Policy was applied BUILTIN\Administrators Everyone S-1-5-21-507921405-1326574676-682003330-1003 BUILTIN\Users NT AUTHORITY\NETWORK NT AUTHORITY\Authenticated Users NCLGS\MO-BLANCKM$ NCLGS\Admin-ComputerAccounts-GP NCLGS\Domain Computers WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 10:17:59 AM EFS recovery Success (no data) 10/28/2010 9:10:34 AM Registry Success 10/28/2010 9:10:32 AM Security Success 10/28/2010 9:10:34 AM User Configuration Summary General User name NCLGS\mblanck Domain NCLGS.local Last time Group Policy was processed 12/29/2010 11:28:56 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (7), Sysvol (7) IT-Lockout NCLGS.local/Personal/CS AD (11), Sysvol (11) Password Policy NCLGS.local AD (5), Sysvol (5) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Force Password Change NCLGS.local/Personal Empty Security Group Membership when Group Policy was applied NCLGS\Domain Users Everyone BUILTIN\Administrators BUILTIN\Users NT AUTHORITY\INTERACTIVE NT AUTHORITY\Authenticated Users 
LOCAL NCLGS\MissingSkidEmail NCLGS\Customer_Service NCLGS\Email_Archive NCLGS\Job Ticket Users NCLGS\Office Staff NCLGS\CUSTOMER SERVI-1 NCLGS\Prestige_Jobs_Email NCLGS\Telecommuters NCLGS\Everyone - NCL WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 11:28:56 AM Registry Success 12/20/2010 12:05:51 PM Scripts Success 10/13/2010 10:38:40 AM Computer Configuration Windows Settings Security Settings Account Policies/Password Policy Policy Setting Winning GPO Enforce password history 24 passwords remembered Password Policy Maximum password age 180 days Password Policy Minimum password age 14 days Password Policy Minimum password length 6 characters Password Policy Password must meet complexity requirements Enabled Password Policy Store passwords using reversible encryption Disabled Password Policy Account Policies/Account Lockout Policy Policy Setting Winning GPO Account lockout duration 10080 minutes Password Policy Account lockout threshold 5 invalid logon attempts Password Policy Reset account lockout counter after 30 minutes Password Policy Local Policies/Security Options Network Security Policy Setting Winning GPO Network security: Force logoff when logon hours expire Enabled Default Domain Policy Public Key Policies/Autoenrollment Settings Policy Setting Winning GPO Enroll certificates automatically Enabled [Default setting] Renew expired certificates, update pending certificates, and remove revoked certificates Disabled Update certificates that use certificate templates Disabled Public Key Policies/Encrypting File System Properties Winning GPO [Default setting] Policy Setting Allow users to encrypt files using Encrypting File System (EFS) Enabled Certificates Issued To Issued By Expiration Date Intended Purposes Winning GPO SBurns SBurns 12/13/2007 5:24:30 PM File Recovery Default Domain Policy For additional information about individual settings, launch Group Policy Object Editor. Public Key Policies/Trusted Root Certification Authorities Properties Winning GPO [Default setting] Policy Setting Allow users to select new root certification authorities (CAs) to trust Enabled Client computers can trust the following certificate stores Third-Party Root Certification Authorities and Enterprise Root Certification Authorities To perform certificate-based authentication of users and computers, CAs must meet the following criteria Registered in Active Directory only Administrative Templates Windows Components/Windows Update Policy Setting Winning GPO Allow Automatic Updates immediate installation Enabled WSUS-52010 Allow non-administrators to receive update notifications Enabled WSUS-52010 Automatic Updates detection frequency Enabled WSUS-52010 Check for updates at the following interval (hours): 1 Policy Setting Winning GPO Configure Automatic Updates Enabled WSUS-52010 Configure automatic updating: 4 - Auto download and schedule the install The following settings are only required and applicable if 4 is selected. 
Scheduled install day: 0 - Every day Scheduled install time: 03:00 Policy Setting Winning GPO No auto-restart with logged on users for scheduled automatic updates installations Disabled WSUS-52010 Re-prompt for restart with scheduled installations Enabled WSUS-52010 Wait the following period before prompting again with a scheduled restart (minutes): 30 Policy Setting Winning GPO Reschedule Automatic Updates scheduled installations Enabled WSUS-52010 Wait after system startup (minutes): 1 Policy Setting Winning GPO Specify intranet Microsoft update service location Enabled WSUS-52010 Set the intranet update service for detecting updates: http://lavender Set the intranet statistics server: http://lavender (example: http://IntranetUpd01) User Configuration Administrative Templates Control Panel/Display Policy Setting Winning GPO Hide Screen Saver tab Enabled IT-Lockout Password protect the screen saver Enabled IT-Lockout Screen Saver Enabled IT-Lockout Screen Saver executable name Enabled IT-Lockout Screen Saver executable name sstext3d.scr Policy Setting Winning GPO Screen Saver timeout Enabled IT-Lockout Number of seconds to wait to enable the Screen Saver Seconds: 1800 System/Power Management Policy Setting Winning GPO Prompt for password on resume from hibernate / suspend Enabled IT-Lockout
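    For anyone triaging something similar: in a pre-2008 domain there is exactly one effective password policy, and it must come from a GPO linked at the domain root; the same settings linked to an OU only affect local SAM accounts on machines in that OU. With that in mind, a hedged spot check on an affected workstation is often faster than another modeling report:

    ```
    rem Run on an affected machine; "net accounts" shows the policy actually enforced.
    net accounts /domain
    rem Dump applied GPOs and pull out the password/lockout lines for comparison:
    gpresult /v > gp.txt
    findstr /i "password lockout" gp.txt
    ```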

    Read the article

  • Excel Macro Runtime error 428 in Excel 2003

    - by Adam
    Hi I have created a xlt excel template which works fine in Excel 2007 under compatibility mode and shows no errors on compatibility check. The template runs a number of Macros which creates pivot tables and charts. When a colleague tries to run the same xlt on excel 2003 they get a Runtime error 428 (Object does not support this property or method). The runtime error fails at this point; ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _ DefaultVersion:=xlPivotTableVersion10 Any help would be appreciated. This is the full Macro; Sub Auto_Open() ' ' ImportData Macro ' Macro to import data, Data must be in your local D: Drive and named raw.csv ' ' Sheets("raw").Select With ActiveSheet.QueryTables.Add(Connection:= _ "TEXT;d:\raw.csv", Destination:=Range _ ("$A$1")) .Name = "raw_1" .FieldNames = True .RowNumbers = False .FillAdjacentFormulas = False .PreserveFormatting = True .RefreshOnFileOpen = False .RefreshStyle = xlInsertDeleteCells .SavePassword = False .SaveData = True .AdjustColumnWidth = True .RefreshPeriod = 0 .TextFilePromptOnRefresh = False .TextFilePlatform = 850 .TextFileStartRow = 1 .TextFileParseType = xlDelimited .TextFileTextQualifier = xlTextQualifierDoubleQuote .TextFileConsecutiveDelimiter = False .TextFileTabDelimiter = False .TextFileSemicolonDelimiter = False .TextFileCommaDelimiter = True .TextFileSpaceDelimiter = False .TextFileColumnDataTypes = Array(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, _ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) .TextFileTrailingMinusNumbers = True .Refresh BackgroundQuery:=False End With ' ' AddMonthColumn Macro ' ' Sheets("raw").Select Range("AK1").Select ActiveCell.FormulaR1C1 = "Month" Range("AK2").FormulaR1C1 = "=DATE(YEAR(RC[-36]),MONTH(RC[-36]),1)" LastRow = ActiveSheet.UsedRange.Rows.Count Range("AK2").AutoFill Destination:=Range("AK2:AK" & LastRow) Columns("AK:AK").EntireColumn.AutoFit Columns("AK:AK").Select Selection.NumberFormat = "mmmm" With Selection .HorizontalAlignment = xlCenter End With Columns("AK:AK").EntireColumn.AutoFit Selection.Copy Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks _ :=False, Transpose:=False ' ' Add Report Information [Text] ' Sheets("Frontpage").Select Range("A2:N2").Select Selection.Merge ActiveCell.FormulaR1C1 = "Service Activity Report" With Selection.Font .Size = 20 End With Range("A3:N3").Select Selection.Merge ActiveCell.FormulaR1C1 = InputBox("Customer Name") With Selection .HorizontalAlignment = xlCenter .VerticalAlignment = xlCenter End With Range("A4:N4").Select Selection.Merge ActiveCell.FormulaR1C1 = InputBox("Date Range dd/mm/yyyy - dd/mm/yyyy") With Selection .HorizontalAlignment = xlCenter .VerticalAlignment = xlCenter End With ' ' IncidentsbyPriority Macro ' ' Sheets("Frontpage").Select Range("A7").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(7, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$7:$H$22") ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable2").PivotFields("Priority") .Orientation = xlRowField .Position = 1 End With 
ActiveSheet.PivotTables("PivotTable2").AddDataField ActiveSheet.PivotTables( _ "PivotTable2").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentsbyPriority" ActiveChart.ChartTitle.Text = "Incidents by Priority" Dim RngToCover As Range Dim ChtOb As ChartObject Set RngToCover = ActiveSheet.Range("D7:L16") Set ChtOb = ActiveSheet.ChartObjects("IncidentsbyPriority") ChtOb.Height = RngToCover.Height ' resize ChtOb.Width = RngToCover.Width ' resize ChtOb.Top = RngToCover.Top ' reposition ChtOb.Left = RngToCover.Left ' reposition ' ' IncidentbyMonth Macro ' ' Sheets("Frontpage").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R18C1", TableName:="PivotTable4", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(18, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$18:$H$38") ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable4").PivotFields("Month") .Orientation = xlRowField .Position = 1 End With ActiveSheet.PivotTables("PivotTable4").AddDataField ActiveSheet.PivotTables( _ "PivotTable4").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentbyMonth" ActiveChart.ChartTitle.Text = "Incidents by Month" Dim RngToCover2 As Range Dim ChtOb2 As ChartObject Set RngToCover2 = ActiveSheet.Range("D18:L30") Set ChtOb2 = ActiveSheet.ChartObjects("IncidentbyMonth") ChtOb2.Height = RngToCover2.Height ' resize ChtOb2.Width = RngToCover2.Width ' resize ChtOb2.Top = RngToCover2.Top ' reposition ChtOb2.Left = RngToCover2.Left ' reposition ' ' IncidentbyCategory Macro ' ' Sheets("Frontpage").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R38C1", TableName:="PivotTable6", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(38, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$38:$H$119") ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 2") .Orientation = xlRowField .Position = 1 End With With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 3") .Orientation = xlPageField .Position = 1 End With ActiveSheet.PivotTables("PivotTable6").AddDataField ActiveSheet.PivotTables( _ "PivotTable6").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentbyCategory" ActiveChart.ChartTitle.Text = "Incidents by Category" Dim RngToCover3 As Range Dim ChtOb3 As ChartObject Set RngToCover3 = ActiveSheet.Range("D38:L56") Set ChtOb3 = ActiveSheet.ChartObjects("IncidentbyCategory") ChtOb3.Height = RngToCover3.Height ' resize ChtOb3.Width = RngToCover3.Width ' resize ChtOb3.Top = RngToCover3.Top ' reposition ChtOb3.Left = RngToCover3.Left ' reposition ' ' IncidentsbySiteandPriority Macro ' ' Sheets("Frontpage").Select Range("A71").Select ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _ "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _ TableDestination:="Frontpage!R71C1", TableName:="PivotTable3", _ DefaultVersion:=xlPivotTableVersion10 Sheets("Frontpage").Select Cells(71, 1).Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=Range("Frontpage!$A$71:$H$90") 
ActiveChart.ChartType = xlColumnClustered With ActiveSheet.PivotTables("PivotTable3").PivotFields("Site Name") .Orientation = xlRowField .Position = 1 End With With ActiveSheet.PivotTables("PivotTable3").PivotFields("Priority") .Orientation = xlColumnField .Position = 1 End With ActiveSheet.PivotTables("PivotTable3").AddDataField ActiveSheet.PivotTables( _ "PivotTable3").PivotFields("Case ID"), "Count of Case ID", xlCount ActiveChart.Parent.Name = "IncidentbySiteandPriority" ' ActiveChart.ChartTitle.Text = "Incidents by Site and Priority" Dim RngToCover4 As Range Dim ChtOb4 As ChartObject Set RngToCover4 = ActiveSheet.Range("H71:O91") Set ChtOb4 = ActiveSheet.ChartObjects("IncidentbySiteandPriority") ChtOb4.Height = RngToCover4.Height ' resize ChtOb4.Width = RngToCover4.Width ' resize ChtOb4.Top = RngToCover4.Top ' reposition ChtOb4.Left = RngToCover4.Left ' reposition Columns("A:G").Select Range("A52").Activate Columns("A:G").EntireColumn.AutoFit End Sub
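    A hedged observation on the failing line itself: PivotCaches.Create and Shapes.AddChart are Excel 2007 additions to the object model, so under Excel 2003 those member lookups fail at run time with exactly this kind of "object does not support this property or method" error. A 2003-compatible sketch of the pivot-cache fragment (Charts.Add would similarly replace Shapes.AddChart):

    ```vba
    ' Hedged rewrite for Excel 2003; xlPivotTableVersion10 targets the 2002/2003 format.
    Dim pc As PivotCache
    Set pc = ActiveWorkbook.PivotCaches.Add( _
        SourceType:=xlDatabase, SourceData:="raw!R1C1:R65536C37")
    pc.CreatePivotTable _
        TableDestination:="Frontpage!R7C1", _
        TableName:="PivotTable2", _
        DefaultVersion:=xlPivotTableVersion10
    ```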

    Read the article

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
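    One thing visible in the php.ini dump above: log_errors is on but no error_log path is set, so fatals in the FastCGI children have nowhere to go, which matches "the fastcgi process died" arriving with no PHP output at all. A hedged triage pass, assuming the paths in the posted configs (the script path is inferred from the /~denton/ userdir URL):

    ```sh
    # 1. Give PHP a log file (add to /etc/php5/cgi/php.ini, then restart lighttpd):
    #      error_log = /var/log/php_errors.log
    sudo touch /var/log/php_errors.log && sudo chown www-data /var/log/php_errors.log
    sudo /etc/init.d/lighttpd restart

    # 2. Run the failing script through the same binary lighttpd uses, bypassing FastCGI:
    sudo -u www-data php-cgi /home/denton/public_html/customer-facing-portal/index.php

    # 3. Watch the log while reproducing the 500:
    tail -f /var/log/php_errors.log
    ```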

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case. We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

    ```
    Widgets
    |
    +--- Anvils (1:1)
    |    |
    |    +--- AnvilTestData (1:N)
    |
    +--- WidgetHistory (1:N)
         |
         +--- WidgetHistoryDetails (1:N)
    ```

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age. So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast; the system tends to hum along pretty smoothly for everything except deletes. Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

    ```sql
    DELETE FROM Widgets
    WHERE WidgetID = @WidgetID
    ```

    Pretty simple, innocuous-looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

    ```sql
    DECLARE @AnvilID int
    SELECT @AnvilID = AnvilID FROM Anvils WHERE WidgetID = @WidgetID

    DELETE FROM AnvilTestData
    WHERE AnvilID = @AnvilID

    DELETE FROM WidgetHistory
    WHERE HistoryID IN (
        SELECT HistoryID
        FROM WidgetHistory
        WHERE WidgetID = @WidgetID)

    DELETE FROM Widgets
    WHERE WidgetID = @WidgetID
    ```

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.
I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are: Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.) If not, is there another explanation for the behaviour I'm seeing? If it is a known issue, why is it an issue, and are there better workarounds I could be using?
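    For readers weighing the same trade-off: the flattened script gives up the all-or-nothing behaviour the cascades provided for free, so it is worth an explicit transaction. A hedged sketch extending the question's rewrite to cover every child table directly:

    ```sql
    BEGIN TRANSACTION;

    DELETE atd
    FROM AnvilTestData AS atd
    JOIN Anvils AS a ON a.AnvilID = atd.AnvilID
    WHERE a.WidgetID = @WidgetID;

    DELETE whd
    FROM WidgetHistoryDetails AS whd
    JOIN WidgetHistory AS wh ON wh.HistoryID = whd.HistoryID
    WHERE wh.WidgetID = @WidgetID;

    DELETE FROM WidgetHistory WHERE WidgetID = @WidgetID;
    DELETE FROM Anvils WHERE WidgetID = @WidgetID;
    DELETE FROM Widgets WHERE WidgetID = @WidgetID;

    COMMIT TRANSACTION;
    ```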

    Read the article

  • ASP.net User Controls and business entities

    - by Chris
    Hi all, I am currently developing some user controls so that I can use them in several places within a project. One control is about editing a list of addresses for a customer. Since this needs to be done in several places within the project, I want to make it a simple user control. The user control contains a repeater control. By default the repeater displays one address item to be edited; if more addresses need to be added, the user can click a button to append an additional address. The user control should work for creating new addresses as well as editing existing ones. The address business entity looks something like this:

    ```csharp
    public class Address
    {
        public string Street { get; set; }
        public City City { get; set; }

        public Address(string street, City city)
        {
            Check.NotNullOrEmpty(street);
            Check.NotNull(city);
            Street = street;
            City = city;
        }
    }
    ```

    As you can see, an address can only be instantiated if there is a street and a city. Now my idea was that the user control exposes a collection property called Addresses. The getter of this property collects the addresses from the repeater and returns them in a collection; the setter databinds the addresses to be edited to the repeater. Like this:

    ```csharp
    public partial class AddressEditControl : System.Web.UI.UserControl
    {
        public IEnumerable<Address> Addresses
        {
            get
            {
                IList<Address> addresses = new List<Address>();
                // collect items from the repeater and create addresses
                foreach (RepeaterItem item in addressRepeater.Items)
                {
                    // collect values from the repeater item
                    addresses.Add(new Address(street, city));
                }
                return addresses;
            }
            set
            {
                addressRepeater.DataSource = value;
                addressRepeater.DataBind();
            }
        }
    }
    ```

    At first I liked this approach, since it is object-oriented and makes it very easy to reuse the control. But in one place in my project I wanted to use this control so a user could enter some addresses, and I wanted to pre-fill the street input field of each repeater item, since I had that data and the user shouldn't have to enter it all himself. Now the problem is that this user control only accepts addresses in a valid state (since the Address class has only that one constructor). So I cannot do:

    ```csharp
    IList<Address> addresses = new List<Address>();
    addresses.Add(new Address("someStreet", null)); // I don't know the city yet (user has to find it out)
    addressControl.Addresses = addresses;
    ```

    The above is not possible, since I would get an error from Address because the city is null. Now my question: how would I create such a control? ;) I was thinking about using an Address DTO instead of a real address, so it can later be mapped to an address. That way I can pass in and out an address collection whose items don't need to be valid. Or did I misunderstand the way user controls work? Are there any best practices?
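    A minimal sketch of the DTO idea the question closes on, with names that are assumptions rather than anything from the original: the edit model tolerates half-entered rows, and only complete rows are mapped back to the strict Address entity.

    ```csharp
    // Hypothetical edit model; only complete rows become real Address objects.
    public class AddressEditModel
    {
        public string Street { get; set; }  // may be empty while the user edits
        public City City { get; set; }      // may be null while the user edits

        public bool IsComplete
        {
            get { return !string.IsNullOrEmpty(Street) && City != null; }
        }

        public Address ToAddress()
        {
            return new Address(Street, City);  // call only when IsComplete
        }
    }
    ```

    The control's Addresses property would then expose a collection of AddressEditModel, and callers map the complete ones to Address when saving.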

    Read the article

  • Weird routing issue (updated)

    - by smccloud
    I just updated the route tables due to a mistake on my part. I am working on getting networking working correctly on a cluster of 14 virtual servers at a customer site. 11 of them work fine for routing and 3 don't work correctly for their administrative network (172.28.56.0). All are running Windows Web Server 2008R2. Default gateway is set on the production network (172.28.58.0) and not on the administrative network (handled with persistent static routes). On a working server, route print gives me the following (MACs redacted) =========================================================================== Interface List 11...XX XX XX XX XX XX ......Intel(R) PRO/1000 MT Network Connection 13...XX XX XX XX XX XX00 0c 29 85 b2 98 ......Intel(R) PRO/1000 MT Network Connection #2 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 14...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 15...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.28.58.1 172.28.58.11 266 10.18.1.22 255.255.255.255 172.28.58.1 172.28.58.11 11 10.32.0.0 255.255.0.0 172.28.56.1 172.28.56.201 11 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 172.28.34.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.42.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.56.0 255.255.255.0 On-link 172.28.56.201 266 172.28.56.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.56.201 255.255.255.255 On-link 172.28.56.201 266 172.28.56.255 255.255.255.255 On-link 172.28.56.201 266 172.28.58.0 255.255.255.224 On-link 172.28.58.11 266 172.28.58.0 255.255.255.224 172.28.58.1 172.28.58.11 11 172.28.58.1 255.255.255.255 172.28.58.1 172.28.58.11 11 172.28.58.11 255.255.255.255 On-link 172.28.58.11 266 172.28.58.31 255.255.255.255 On-link 172.28.58.11 266 172.28.60.0 255.255.255.0 172.28.56.1 172.28.56.201 11 172.28.63.0 255.255.255.0 172.28.56.1 172.28.56.201 11 192.168.0.0 255.255.0.0 172.28.56.1 172.28.56.201 11 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 172.28.56.201 266 224.0.0.0 240.0.0.0 On-link 172.28.58.11 266 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 172.28.56.201 266 255.255.255.255 255.255.255.255 On-link 172.28.58.11 266 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 172.28.56.0 255.255.255.0 172.28.56.1 1 172.28.63.0 255.255.255.0 172.28.56.1 1 192.168.0.0 255.255.0.0 172.28.56.1 1 172.28.60.0 255.255.255.0 172.28.56.1 1 10.32.0.0 255.255.0.0 172.28.56.1 1 172.28.34.0 255.255.255.0 172.28.56.1 1 172.28.42.0 255.255.255.0 172.28.56.1 1 0.0.0.0 0.0.0.0 172.28.58.1 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 1 306 ff00::/8 On-link =========================================================================== Persistent Routes: None On one of the non-working server, route print gives me the following (MACs redacted) 
=========================================================================== Interface List 11...XX XX XX XX XX XX ......Intel(R) PRO/1000 MT Network Connection 13...XX XX XX XX XX XX ......Intel(R) PRO/1000 MT Network Connection #2 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 14...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 16...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.28.58.1 172.28.58.21 266 10.32.0.0 255.255.0.0 172.28.56.1 172.28.56.211 11 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 172.28.34.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.42.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.56.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.56.211 255.255.255.255 On-link 172.28.56.211 266 172.28.58.0 255.255.255.0 172.28.58.1 172.28.58.21 11 172.28.58.0 255.255.255.224 On-link 172.28.58.21 266 172.28.58.21 255.255.255.255 On-link 172.28.58.21 266 172.28.58.31 255.255.255.255 On-link 172.28.58.21 266 172.28.60.0 255.255.255.0 172.28.56.1 172.28.56.211 11 172.28.63.0 255.255.255.0 172.28.56.1 172.28.56.211 11 192.168.0.0 255.255.0.0 172.28.56.1 172.28.56.211 11 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 172.28.56.211 266 224.0.0.0 240.0.0.0 On-link 172.28.58.21 266 255.255.255.255 255.255.255.255 On-link 127.0.0.1 06 255.255.255.255 255.255.255.255 On-link 172.28.56.211 266 255.255.255.255 255.255.255.255 On-link 172.28.58.21 266 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 172.28.56.0 255.255.255.0 172.28.56.1 1 172.28.60.0 255.255.255.0 172.28.56.1 1 172.28.63.0 255.255.255.0 172.28.56.1 1 172.28.34.0 255.255.255.0 172.28.56.1 1 172.28.42.0 255.255.255.0 172.28.56.1 1 192.168.0.0 255.255.0.0 172.28.56.1 1 10.32.0.0 255.255.0.0 172.28.56.1 1 0.0.0.0 0.0.0.0 172.28.58.1 Default 172.28.58.0 255.255.255.0 172.28.58.1 1 =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 1 306 ff00::/8 On-link =========================================================================== Persistent Routes: None I am at a complete loss why the non-working servers have no On-link route for 172.28.56.0. Does anyone have any suggestions on what I should be looking at to figure this out? Also, I do have "physical" access to the console if needed through vSphere Client.
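    One hedged observation from comparing the two dumps: on the working host, the On-link 172.28.56.0/24 entry is the route Windows derives automatically from the adapter's own IP address and subnet mask, while the broken hosts only have the persistent gateway route. That points at the admin NIC's address/mask (or a disconnected, media-sensing state) rather than the route table itself. Quick comparisons from an elevated prompt, with the interface name as an assumption:

    ```
    netsh interface ipv4 show addresses "Admin NIC"
    netsh interface ipv4 show route
    ipconfig /all
    ```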

    Read the article

  • How to do URL authentication in struts2

    - by Enrique Malhotra
    Hi, I am using Struts 2.1.6 + Spring 2.5. I have four modules in my application:

    1. Registration Module
    2. Admin Module
    3. Quote Module
    4. Location Module

    In the registration module the customer can register himself, and only after registering is he supposed to have access to the remaining three modules. I want to implement something like this: if the action being called belongs to the registration module, it works as normal, but if it belongs to any of the other three modules, it first checks whether the user is logged in and the session has not timed out. If so, it proceeds normally; otherwise it redirects to the login page. Through research I have found that interceptors could be used for this purpose, but before proceeding I thought it better to get some feedback from experts. Please suggest how it should be done, and if possible include some code suggestions. Here is my struts.xml file (it includes four different config files, one per module):

    ```xml
    <struts>
        <include file="struts-default.xml" />
        <constant name="struts.i18n.reload" value="false" />
        <constant name="struts.objectFactory" value="spring" />
        <constant name="struts.devMode" value="false" />
        <constant name="struts.serve.static.browserCache" value="false" />
        <constant name="struts.enable.DynamicMethodInvocation" value="true" />
        <constant name="struts.multipart.maxSize" value="10000000" />
        <constant name="struts.multipart.saveDir" value="C:/Temporary_image_location" />

        <include file="/com/action/mappingFiles/registration_config.xml" />
        <include file="/com/action/mappingFiles/admin_config.xml" />
        <include file="/com/action/mappingFiles/quote.xml" />
        <include file="/com/action/mappingFiles/location_config.xml" />
    </struts>
    ```

    The sample registration_config.xml file is:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE struts PUBLIC
        "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
        "http://struts.apache.org/dtds/struts-2.0.dtd">
    <struts>
        <package name="registration" extends="struts-default" namespace="/my_company">
            <action name="LoginView" class="registration" method="showLoginView">
                <result>....</result>
                <result name="input">...</result>
            </action>
        </package>
    </struts>
    ```

    The sample admin_config.xml file is:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE struts PUBLIC
        "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
        "http://struts.apache.org/dtds/struts-2.0.dtd">
    <struts>
        <package name="admin" extends="struts-default" namespace="/my_company">
            <action name="viewAdmin" class="admin" method="showAdminView">
                <result>....</result>
                <result name="input">...</result>
            </action>
        </package>
    </struts>
    ```

    The same code is in the other two Struts 2 config files. I have used the same namespace in all four config files, with different package names (as you can see).
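    The interceptor route is indeed the usual one. A hedged sketch, assuming the registration module stores a "user" key in the session on login; the names are placeholders:

    ```java
    import java.util.Map;
    import com.opensymphony.xwork2.ActionInvocation;
    import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

    public class AuthenticationInterceptor extends AbstractInterceptor {

        @Override
        public String intercept(ActionInvocation invocation) throws Exception {
            Map<String, Object> session =
                    invocation.getInvocationContext().getSession();
            if (session == null || session.get("user") == null) {
                return "login";   // map a global "login" result to the login page
            }
            return invocation.invoke();
        }
    }
    ```

    Registering it in a shared base package that the admin, quote and location packages extend, while the registration package stays on struts-default, keeps the login actions themselves out of the check.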

    Read the article

  • sending a $_SESSION array to my class and attempting to get the value from it in a for loop (PHP)

    - by eoin
    I have a class in which I'm using a method to get information from a session array via a for loop. The two for loops (one in each method) are used, first, to count the total number of items and, second, to add up the total price, but neither seems to be returning anything. I'm using the total amount to check against the mc_gross value posted from an IPN, and if the values are equal I plan to commit the sale to a database, as it has been verified. For the method where I'm looking to get the total price, I'm getting 0.00 returned. I think I've got the syntax wrong here somehow. Here is my purchase class; the two methods I'm having trouble with are gettotalcost() and gettotalitems():

    ```php
    <?php
    class purchase
    {
        private $itemnames;     // from session array
        private $amountofitems; // from session array
        private $itemcost;      // from session array
        private $saleid;        // posted transaction id, used in the sale and sale item tables
        private $orderdate;     // current datetime, entered in the sale table
        private $lastname;      // posted from IPN
        private $firstname;     // posted from IPN
        private $emailadd;      // uses session array value

        public function __construct($saleid, $firstname, $lastname)
        {
            $this->itemnames     = $_SESSION['itemnames'];
            $this->amountofitems = $_SESSION['quantity'];
            $this->itemcost      = $_SESSION['price'];
            $this->saleid        = $saleid;
            $this->firstname     = $firstname;
            $this->lastname      = $lastname;
            $this->emailadd      = $_SESSION['username']; // original read $SESSION, missing the underscore

            // Note: var_dump() echoes and returns NULL, so the original mails always
            // had an empty body; var_export($_SESSION, true) returns a string instead.
            // $myemail is also undefined in this scope.
            mail($myemail, "session vardump", var_export($_SESSION, true), "From: [email protected]");
            mail($myemail, "session vardump", count($_SESSION['itemnames']), "From: [email protected]");
        }

        public function commit()
        {
            $databaseinst = database::getinstance();
            $databaseinst->connect();
            $numrows = $databaseinst->querydb($query); // to be completed
        }

        public function gettotalitems()
        {
            $numitems = 0;
            for ($i = 0; $i < count($this->amountofitems); $i++) {
                $numitems += (int) $this->amountofitems[$i];
            }
            return $numitems;
        }

        public function gettotalcost()
        {
            $totalcost = 0;
            for ($i = 0; $i < count($this->amountofitems); $i++) {
                $numitems   = (int) $this->amountofitems[$i];
                $costofitem = doubleval($this->itemcost[$i]);
                $totalcost += $numitems * $costofitem;
            }
            return $totalcost;
        }
    }
    ?>
    ```

    And here is where I create an instance of the class and attempt to use it:

    ```php
    include("purchase.php");

    $purchase = new purchase($_POST['txn_id'], $_POST['first_name'], $_POST['last_name']);
    $fullAmount = $purchase->gettotalcost();   // the original mixed $fullamount and $fullAmount
    $fullAmount = number_format($fullAmount, 2);
    $grossAmount = $_POST['mc_gross'];

    if ($fullAmount != $grossAmount) {
        $message = "Possible Price Jack attempt! The amount the customer paid is: " . $grossAmount
            . " which is not equal to the full amount. The price of the products combined is " . $fullAmount
            . ", the transaction number for which is " . $_POST['txn_id'] . "\n\n\n$req";
        mail("XXXXXXXXXXXX", "Price Jack attempt", $message, "From: [email protected]");
        exit();
    }
    ```

    Thanks for the help in advance! I've also added the var dump mail shown above to my constructor, and it reports that there is nothing in $_SESSION. I also added session_start(); at the top of the constructor and it still doesn't work. Help!
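    A hedged hypothesis that fits "nothing in $_SESSION": PayPal's IPN request is posted from PayPal's servers, not from the buyer's browser, so it carries no session cookie, and session_start() in the IPN handler opens a brand-new, empty session. One common workaround is to stage the cart in a database keyed by a value that round-trips through PayPal (for example the "custom" field); the table, column, and variable names below are assumptions:

    ```php
    <?php
    // Before redirecting the buyer to PayPal: stash the cart, send $cartKey as "custom".
    session_start();
    $cartKey = uniqid('cart', true);
    $stmt = $pdo->prepare("INSERT INTO pending_carts (cart_key, cart_json) VALUES (?, ?)");
    $stmt->execute(array($cartKey, json_encode(array(
        'quantity' => $_SESSION['quantity'],
        'price'    => $_SESSION['price'],
    ))));

    // In the IPN handler: recover the cart from the key PayPal echoes back.
    $stmt = $pdo->prepare("SELECT cart_json FROM pending_carts WHERE cart_key = ?");
    $stmt->execute(array($_POST['custom']));
    $cart = json_decode($stmt->fetchColumn(), true);
    ```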

    Read the article

  • Gratuitous CRLF in Subject: line - why is it there, and is it legal?

    - by MadHatter
    I'm running into a problem with a NAGIOS system sending emails to a popular email-to-SMS service. The email-to-SMS service takes emails with text in the Subject: line, and sends them on to the mobile number encoded in the To: field. So far so good. Sadly, sendmail (and postfix before it) seem to be inserting a gratuitous CRLF into the (necessarily long) Subject: line, and that's causing my SMS messages to be truncated at the CRLF if and only if the Subject: line contains one or more colons past the gratuitous CRLF. I am confident that the messages are being created correctly, but just to be sure, here's me creating a completely noddy test message to myself, with a long Subject: line: echo "foo" | mail -s "1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789" [email protected] Note there's no extra colon in this Subject: line; all I'm doing here is showing that an extra CRLF is inserted on the wire. Here's the result of sudo ngrep -x port 25: 44 61 74 65 3a 20 46 72    69 2c 20 33 31 20 4d 61    Date: Fri, 31 Ma 79 20 32 30 31 33 20 31    30 3a 34 33 3a 35 35 20    y 2013 10:43:55 2b 30 31 30 30 0d 0a 54    6f 3a 20 72 65 61 70 65    +0100..To: reape 72 40 74 65 61 70 61 72    74 79 2e 6e 65 74 0d 0a    [email protected].. 53 75 62 6a 65 63 74 3a    20 31 32 33 34 35 36 37    Subject: 1234567 20 31 30 31 32 33 34 35    36 37 20 32 30 31 32 33     101234567 20123 34 35 36 37 20 33 30 31    32 33 34 35 36 37 20 34    4567 301234567 4 30 31 32 33 34 35 36 37    20 35 30 31 32 33 34 35    01234567 5012345 36 37 0d 0a 20 36 30 31    32 33 34 35 36 37 20 37    67.. 601234567 7 30 31 32 33 34 35 36 37    20 38 30 31 32 33 34 35    01234567 8012345 36 37 20 39 30 31 32 33    34 35 36 37 38 39 0d 0a    67 90123456789.. 55 73 65 72 2d 41 67 65    6e 74 3a 20 48 65 69 72    User-Agent: Heir 6c 6f 6f 6d 20 6d 61 69    6c 78 20 31 32 2e 34 20    loom mailx 12.4 37 2f 32 39 2f 30 38 0d    0a 4d 49 4d 45 2d 56 65    7/29/08..MIME-Ve 72 73 69 6f 6e 3a 20 31    2e 30 0d 0a 43 6f 6e 74    rsion: 1.0..Cont 65 6e 74 2d 54 79 70 65    3a 20 74 65 78 74 2f 70    ent-Type: text/p 6c 61 69 6e 3b 20 63 68    61 72 73 65 74 3d 75 73    lain; charset=us About half way down (marked in bold+italic), between the 501234567 and the 601234567 in the original Subject: header, you can see a CRLF being inserted (0x0d 0x0a, on the left-hand side hex dump, .. on the right-hand side plain text). The receiving MTA seems happy to post-process this, and when I look at the on-disc stored mail at the receiving end, I see only a LF (0x0a) in the Subject: line, and the line is parsed correctly and in its entirety by, eg, alpine. Nevertheless, the CRLF is there on the wire, and between me and the (excellent) email-to-SMS support people, we've established that these are the cause of the problem. So my question is: is it lawful for an MTA to insert a gratuitous CRLF on the wire? If it is, and I can prove it, then it's the email-to-SMS house's problem, because they are being intolerant. If it isn't, or it is but I can't prove it, then it becomes my problem, so an answer with references would be most useful. Edit: I can now come clean that the email-to-SMS service in question is kapow. Once this problem was explained to them, they got it, worked with me to develop and test a fix, and have deployed the fix. My long subject lines with colons in now get relayed correctly into SMSes. 
I don't normally trumpet individual companies, especially not on SF, but I thought it worthy of note that kapow Did The Right Thing. (Disclaimer: I have no connection with kapow except as a paying customer who's happy about the way they dealt with his problem.)
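
    For reference, the behaviour in the dump is standard RFC 5322 folding: section 2.2.3 ("Long Header Fields") says header fields longer than the recommended 78-character line limit SHOULD be folded by inserting a CRLF before existing whitespace, and receivers unfold by deleting any CRLF that is immediately followed by whitespace. So the CRLF is lawful, and a consumer that truncates at it is the non-conformant party. Below is a minimal TypeScript sketch of the fold/unfold rule; the function names and the 72-column fold target are illustrative choices of this write-up, not any MTA's actual API:

        // Sketch of RFC 5322 section 2.2.3 header folding -- illustrative only;
        // sendmail and postfix implement this natively. Folding inserts CRLF
        // before existing whitespace, so each continuation line starts with WSP.
        function foldHeader(header: string, limit = 72): string {
          const words = header.split(" ");
          const lines: string[] = [];
          let current = words[0];
          for (const word of words.slice(1)) {
            if (current.length + 1 + word.length > limit) {
              lines.push(current);
              current = " " + word; // leading WSP marks a folded continuation
            } else {
              current += " " + word;
            }
          }
          lines.push(current);
          return lines.join("\r\n");
        }

        // The receiver's unfolding step: delete any CRLF immediately followed
        // by whitespace, which reassembles the logical header line intact.
        function unfoldHeader(wire: string): string {
          return wire.replace(/\r\n(?=[ \t])/g, "");
        }

        const subject =
          "Subject: 1234567 101234567 201234567 301234567 401234567 " +
          "501234567 601234567 701234567 801234567 90123456789";
        console.log(foldHeader(subject)); // folds between 501234567 and 601234567, as in the ngrep dump
        console.log(unfoldHeader(foldHeader(subject)) === subject); // true

    With a 72-column target, foldHeader reproduces the wire dump's break exactly, and unfoldHeader shows why conformant receivers see the Subject: line whole.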

    Read the article

  • Print individual tables?

    - by LiveEn
    I am getting values from a database and displaying them in a table, and I'm trying to print each result individually. I'm using the JavaScript below:

        <script type="text/javascript">
        function print_parent(element) {
            element.parentNode.className = 'print';
            window.print();
            element.parentNode.className = '';
            return false;
        }
        </script>

    Printing all the results at once works great; the problem is printing one result on its own. Can someone please tell me how I can print each individual table in each result? Below is my PHP code:

        $sql = "select * from cisdb where pids LIKE '%$pids%'"; // NB: interpolating $pids directly is injection-prone
        $result = mysql_query($sql) or die(mysql_error());

        if (mysql_num_rows($result) == 0) {
            echo '<b><center>There were no records!</center></b>' . "<br>";
        }

        while ($row = mysql_fetch_array($result)) {
            $cat = str_replace('+', ' ', $row['category']);
            print "<center>";
            // Control strip (class='noprint'): Print / Edit / Delete links
            print "<table width='472' border='1' align='center' class='noprint'>";
            print "<tr>";
            print "<td width='150'><div align='center'><a href='#' onclick='return print_parent(this)'>Print</a></div></td>";
            print "<td width='150'><div align='center'><a href='process.php?mode=ed&id={$row['id']}'>Edit</a></div></td>";
            print "<td width='150'><div align='center'><a href='process.php?mode=del&id={$row['id']}' onclick=\"return confirm('Are you sure you want to delete?')\">Delete</a></div></td>";
            print "</tr>";
            print "</table><br>";
            print "<div id='divToPrint'>";
            print "<table width='700' style='height:900px' border='1' cellpadding='1' cellspacing='1' bordercolor='#D6D6D6' class='sss' title='{$row['title']}'>
                <tr>
                    <td height='25' colspan='2' align='left' valign='top'><strong>Customer: {$row['name']}</strong></td>
                    <td width='183' align='left' valign='top'><strong>Sales ID: {$row['said']}</strong></td>
                    <td width='100' align='left' valign='top'><strong>Phone Cord. ID: {$row['pcid']}</strong></td>
                    <td align='left' valign='top'><strong>Type: {$row['classtype']}</strong></td>
                </tr>
                <tr>
                    <td height='25' colspan='2' valign='top'><strong>Contact Name: </strong></td>
                    <td colspan='2' valign='top'><strong>Email:</strong></td>
                    <td width='154' valign='top'><strong>Phone:</strong></td>
                </tr>
                <tr>
                    <td height='15' colspan='5' valign='top'><strong>Remarks:</strong></td>
                </tr>
                <tr>
                    <td height='15' colspan='2' valign='top'><strong>Date Added: </strong></td>
                    <td valign='top'><strong>Date Edited: </strong></td>
                    <td colspan='2' valign='top'><strong>Printed: </strong></td>
                </tr>
            </table></div><br>";
            print "</center>";
        }
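
    One common answer is a print stylesheet that hides everything with visibility (not display, so that hiding an ancestor does not blank the target along with it) and unhides just the chosen block before calling window.print(). Note also that the loop above emits the same id='divToPrint' on every result; duplicate ids are invalid HTML, so a class is the safer hook. The TypeScript sketch below is illustrative only: the printing/print-me class names and the DOM walk are assumptions about the markup, not LiveEn's actual CSS:

        // Assumed print stylesheet (illustrative class names):
        //   @media print {
        //     body.printing * { visibility: hidden; }
        //     body.printing .print-me, body.printing .print-me * { visibility: visible; }
        //     body.printing .print-me { position: absolute; left: 0; top: 0; }
        //   }
        function printOneTable(link: HTMLElement): boolean {
          // Each result sits in its own <center>; the printable block is the
          // <div> wrapping the big table (currently id='divToPrint').
          const wrapper = link.closest("center")?.querySelector("div[id]");
          if (!(wrapper instanceof HTMLElement)) return false;
          wrapper.classList.add("print-me");
          document.body.classList.add("printing");
          window.print(); // returns once the print dialog is dismissed
          document.body.classList.remove("printing");
          wrapper.classList.remove("print-me");
          return false; // suppress the link's default navigation
        }

    Wired up as <a href='#' onclick='return printOneTable(this)'>Print</a>, mirroring the original print_parent hook.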

    Read the article

  • Delphi 5: Ideas for simulating "Obsolete" or "Deprecated" methods?

    - by Ian Boyd
    I want to mark a method as obsolete, but Delphi 5 doesn't have such a feature. For the sake of an example, here is a made-up method with its deprecated and new preferred forms:

        procedure TStormPeaksQuest.BlowHodirsHorn; overload; //obsolete
        procedure TStormPeaksQuest.BlowHodirsHorn(UseProtection: Boolean); overload;

    Note: for this hypothetical example, we assume that using the parameterless version is just plain bad. There are problems with not "using protection" which have no good solution. Nobody likes having to use protection, but nobody wants to not use protection. So we make the caller decide whether they want to use protection or not when blowing Hodir's horn. If we default the parameterless version to continue not using protection:

        procedure TStormPeaksQuest.BlowHodirsHorn;
        begin
            BlowHodirsHorn(False); //No protection. Bad!
        end;

    then the developer is at risk of all kinds of nasty stuff. If we force the parameterless version to use protection:

        procedure TStormPeaksQuest.BlowHodirsHorn;
        begin
            BlowHodirsHorn(True); //Use protection; crash if there isn't any
        end;

    then there's a potential for problems if the developer didn't get any protection, or doesn't own any. Now I could rename the obsolete method:

        procedure TStormPeaksQuest.BlowHodirsHorn_Deprecated; overload; //obsolete
        procedure TStormPeaksQuest.BlowHodirsHorn(UseProtection: Boolean); overload;

    But that will cause a compile error, and people will bitch at me (and I really don't want to hear their whining). I want them to get a nag, rather than an actual error. I thought about adding an assertion:

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        begin
            Assert(False, 'TStormPeaksQuest.BlowHodirsHorn is deprecated. Use BlowHodirsHorn(Boolean)');
            ...
        end;

    But I cannot guarantee that the developer won't ship a version without assertions, causing a nasty crash for the customer. I thought about only throwing an assertion while the developer is debugging:

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        begin
            if DebugHook > 0 then
                Assert(False, 'TStormPeaksQuest.BlowHodirsHorn is deprecated. Use BlowHodirsHorn(Boolean)');
            ...
        end;

    But I really don't want to be causing a crash at all. I thought of showing a MessageDlg if they're in the debugger (a technique I've used in the past):

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        begin
            if DebugHook > 0 then
                MessageDlg('TStormPeaksQuest.BlowHodirsHorn is deprecated. Use BlowHodirsHorn(Boolean)', mtWarning, [mbOk], 0);
            ...
        end;

    But that is still too disruptive, and it has caused problems where the code sat stuck at a modal dialog that wasn't obviously visible. I was hoping for some sort of warning message that will sit there nagging developers until they gouge their eyes out and finally change their code. I thought perhaps adding an unused variable:

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        var
            ThisMethodIsObsolete: Boolean;
        begin
            ...
        end;

    might produce a hint only if someone referenced the obsolete method, but Delphi shows the hint even when nothing actually calls it. Can anyone think of anything else?
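
    Worth noting for readers on newer compilers: Delphi 6 and later added the deprecated hint directive, which gives exactly this kind of compile-time nag. As a cross-language illustration of the pattern being asked for (a warning that never becomes a release-build crash), here is a TypeScript sketch, not a Delphi 5 answer: the JSDoc @deprecated tag on the parameterless overload makes editors and type-aware tooling flag callers at zero runtime cost, backed by a log-once runtime nag. The class shape and the default chosen in the body are illustrative assumptions only:

        class TStormPeaksQuest {
          private static naggedOnce = false;

          /**
           * @deprecated Use blowHodirsHorn(useProtection) instead; the
           * parameterless form cannot pick a safe default.
           */
          blowHodirsHorn(): void;
          blowHodirsHorn(useProtection: boolean): void;
          blowHodirsHorn(useProtection?: boolean): void {
            if (useProtection === undefined) {
              // Runtime nag: warn once per process instead of asserting,
              // so a shipped build never crashes over a deprecation.
              if (!TStormPeaksQuest.naggedOnce) {
                TStormPeaksQuest.naggedOnce = true;
                console.warn("blowHodirsHorn() is deprecated; pass useProtection.");
              }
              useProtection = true; // illustrative default only -- see the question
            }
            // ... actual horn-blowing elided ...
          }
        }

    The key property is that the deprecation lives in metadata the compiler surfaces, not in code paths that can fail, which is precisely what Delphi 5's missing directive forces the workarounds above to approximate.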

    Read the article
