Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table, you expect a new table with no past history (statistics based on its past existence). This is not true if you have fewer than 6 updates to the temporary table, and it can lead to poor performance of queries that are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched for what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because a non-optimal plan was used due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see the example workarounds below).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching, which we do by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in this situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has few rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile, you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Non-use of persisted data

    - by Dave Ballantyne
    Working at a client site, that in itself is good to say, I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. While optimizing a stored procedure, I found a piece of code similar to:

        select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid)
        from sales.salesorderheader
        where BillToAddressID = 985

    A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table-valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as:

        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid)

    and the function code is:

        create function udfCleanGuid(@GUID uniqueidentifier)
        returns varchar(255)
        with schemabinding
        as
        begin
            declare @RetStr varchar(255)
            select @RetStr = CAST(@Guid as varchar(255))
            select @RetStr = REPLACE(@RetStr, '-', '')
            return @RetStr
        end

    Executing the SELECT statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column:

        alter table sales.salesorderheader
        add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement...

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    and our new optimized plan looks not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; the value was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event.

    Why did that happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, ran through its logic, and decided that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since they are invariably more expensive.

    Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option; there may be other code reliant on it. We are left with only one real option: add the persisted column into the index.

        drop index Sales.SalesOrderHeader.idxBill
        go
        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid, cleanedguid)

    Now if we repeat the statement:

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    we still have a Compute Scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation
    John Courtright, Structural Integrity Engineering

    There is heightened FAA attention to technical issues related to IFE and Wi-Fi systems installations: the Aging Aircraft Safety Rule, EWIS, and Damage Tolerance Analysis. The challenge is to maximize flight safety while minimizing costs, through issue papers and testing, testing, testing. Airworthiness Directives (ADs) play a role in the design of many IFE systems and all antenna systems. The goal is safety AND cost-effective maintenance intervals and inspection techniques.

    The STC Process Briefly Stated
    - Type Certifications (TC) and Supplemental Type Certifications (STC)
    - The STC process is driven by a Project Specific Certification Plan (PSCP), managed by the FAA Aircraft Certification Office (ACO), covering the type of project (electrical/mechanical systems or structural), the specific type of aircraft being modified, the schedule, and the design and installation location.

    What does the STC Plan (PSCP) cover?
    - System description - what does the system do?
    - System qualification - are the components qualified?
    - Certification requirements - what FARs are applicable?
    - Installation detail - what is being modified?
    - Prototype installation - what is new?
    - Functional Hazard Assessment (FHA) - is it safe?
    - EZAP-EWIS requirements - any aging aircraft issues?
    - Certification data - how is compliance achieved?
    - Delegation and FAA involvement - who is doing the work?
    - Proposed certification schedule - when is the installation?
    - Certification documentation - what the FAA expects to see

    Cabin Systems Certification Concerns
    In addition to meeting the requirements of DO-160, cabin system certification needs to address:
    - Power management: generally, IFE and Wi-Fi systems are classified as "non-essential equipment" from a certification viewpoint and are connected to "non-essential" power buses. It must be possible to shed IFE and Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160).
    - The FAA is more relaxed with testing Wi-Fi: it used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50.
    - Aging aircraft concerns, both electrical and structural.
    - Issue papers addressing technical concerns involving "Structural Certification Criteria for Large Antenna Installations" and antenna "Vibration/Buffeting Compliance Criteria".

    DO-160: Environmental Test Procedures
    DO-160, "Environmental Conditions and Test Procedures for Airborne Equipment", issued by RTCA, provides guidance to equipment manufacturers as to testing requirements: temperature (-40C to +55C), vibration and shock, contaminant susceptibility (fluids and dust), and electromagnetic interference. Cabin systems are generally classified as "non-essential". Swissair 111 crashed (in part) due to non-standard wiring practices.

    EWIS Design Implications
    Installation design must take EWIS requirements into account. This generally means:
    - Aircraft surveys are needed to identify proper wire routing
    - Ensure existing wiring diagrams are correct
    - Identify primary/secondary/tertiary bus locations
    - Verify proper separation of wire bundles exists
    - Required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition
    - An Enhanced Zonal Analysis Procedure (EZAP) is performed. EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) and is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.

    Certification Considerations for Wi-Fi Systems
    - Electrical: all existing DO-160 testing required; issue papers required; onboard EMI testing (any interference with aircraft systems when multiple Wi-Fi users are logged on?)
    - Vibration/buffeting compliance criteria: what is the effect of the antenna on aircraft flight characteristics?
    - Structural certification criteria: what are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis required.
    - Goal: minimize maintenance inspection intervals

    Read the article

  • The Rise of Project Intelligence and Why It Matters

    - by Melissa Centurio Lopes
    By Amy DeWolf

    Are you doing any of these in your organization? How are you leveraging historical data to forecast projects?

    There's a lot going on in government today. The economic pressures agencies feel from the uncertainty of budget cuts and sequestration affect every part of an organization, including the Project Management Office (PMO). The PMO is responsible for monitoring and administering government IT projects. As time goes on, priorities shift, technology advances, and new regulations are imposed, all of which make planning and executing projects more difficult. For example, think about your own projects. How many boxes do you need to check, and how many hoops do you need to jump through, to ensure you comply with new regulations? While new regulations and technology advancements can be a good thing, they add an additional layer of complexity to already complex projects.

    To overcome some of these pressures, particularly new regulations, many in the PMO world are adopting a new approach: Project Intelligence (PI). According to a new Oracle Primavera white paper, The Rise of Project Intelligence: When Project Management is Just Not Enough, "PI uses Business Intelligence methods to leverage historical project data to make more informed decisions and greatly enhance project execution." Currently, project managers plan and forecast the possible phases in an execution cycle. However, most project managers don't have the proper tools to do this as effectively as they would like. As the white paper notes, "The underlying deficiencies in most forecasting approaches are that 1) the PM fails in most instances to leverage historical data and 2) the PM doesn't employ current Business Intelligence tools." PI seeks to overturn this by combining the modeling tools used in Business Intelligence for projects with the understanding of Emotional Intelligence for managing people.

    Simply put, Project Intelligence is built on four main pillars:
    - Actively use historical data to forecast project cycles
    - Understand the intricacies of complex projects
    - Enhance social and emotional intelligence in projects
    - Actively use Business Intelligence tools

    Read our complimentary white paper and discover the importance of emotional intelligence and best practices for improving projects, specifically in terms of communication.

    Read the article

  • Starting over and new to Ubuntu

    - by 2funnyyone
    We have been having repeated problems with our internet service and with Windows XP SP3 (users and permissions) - I see no need for them. I started with computers long before Windows. Ever since SP3 came out in 2009 I have had nothing but problems. I have lost so many computers to viruses and trojans, we just stack them up. We are with Qwest/CenturyLink, which is using advertising servers that I think are causing the problem. All the computers are networked together, which is not how I set them up. I believe CenturyLink is networking them through assignment of a domain for our home. This has caused all the computers to crash, twice. This is getting expensive. We tried buying new hard drives, but they reinfect within hours of connecting to the internet. I also believe the modem, router, and all computers are infected. I put ComboFix on this one, and that is the only reason we are still online with this laptop. I am afraid to install new equipment because my partner and I are on SSDI and this costs a lot. I go to school at UOP and have had to run off a flash drive and reboot this laptop to recovery every other day or so this past month.

    New plan is: We are getting ready to install new equipment but are afraid to reinfect again. We need help to install the new equipment. The plan is to use our current internet service from Qwest/now CenturyLink.

    The list of new equipment, in order:
    - CenturyLink wireless modem: ZyXEL PK5000Z with 4 direct-connect Ethernet ports
    - Dell Optiplex 210L (used auction purchase), 2 GB RAM, 80 GB hard drive, Ubuntu 11.10 operating system
    - Wireless D-Link router WBR-1310 with 4 direct-connect Ethernet ports
    - Purchased Dell OEM disk for repairing or reinstalling the Windows XP Professional operating system (2 roommates as well); all infected computers are Dell desktops or laptops with XP Pro
    - Also purchasing Ubuntu 12.04 for 3 computers. We like the way it runs but are still learning it.

    Questions:
    1] How do we fdisk the infected computers without infecting the new system? We have DOS disks, but none of the machines have a floppy disk drive. We do have a new floppy disk drive and USB adapter we purchased from Amazon.
    2] We are thinking Avast internet security because of the boot scan. We want all software loaded before reconnecting. We can manually load our internet provider information. We purchased StopZilla ($100 for 5 computers), but are not sure that is what we need. We also need to know how to set up port security and which services we will need - really lost at this part - so we are safe when we go back on the internet.
    3] We want to connect the reloaded, fdisked systems to the router as a public connection with no sharing. We do not want to network all the computers.
    4] We want parental/ownership control from the Ubuntu system for the internet connection (children and friends). Do we restrict at the modem and/or the router?

    Any help would be a blessing. I do not want to go it alone on this anymore.

    Read the article

  • JavaOne 2012: Nashorn Edition

    As with my JavaOne 2012: OpenJDK Edition post a while back (now updated to reflect the schedule of the talks), I find it convenient to have my JavaOne schedule ordered by subjects of interest. Besides OpenJDK in all its flavors, another subject I find very exciting is Nashorn. I blogged about the various material on Nashorn in the past, and we interviewed Jim Laskey, the Project Lead on Project Nashorn, in the Java Spotlight podcast. So without further ado, here are the JavaOne 2012 talks and BOFs with Nashorn in their title or abstract:

    CON5390 - Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM - Monday, Oct 1, 8:30 AM - 9:30 AM
    There are many implementations of JavaScript, meant to run either on the JVM or standalone as native code. Both approaches have their respective pros and cons. The Oracle Nashorn JavaScript project is based on the former approach. This presentation goes through the performance work that has gone on in Oracle's Nashorn JavaScript project to date in order to make JavaScript-to-bytecode generation for execution on the JVM feasible. It shows that the new invokedynamic bytecode gets us part of the way there but may not quite be enough. What other tricks did the Nashorn project use? The presentation also discusses future directions for increased performance for dynamic languages on the JVM, covering proposed enhancements to both the JVM itself and to the bytecode compiler.

    CON4082 - Nashorn: JavaScript on the JVM - Monday, Oct 1, 3:00 PM - 4:00 PM
    The JavaScript programming language has been experiencing a renaissance of late, driven by the interest in HTML5. Nashorn is a JavaScript engine implemented fully in Java on the JVM. It is based on the Da Vinci Machine (JSR 292) and will be available with JDK 8. This session describes the goals of Project Nashorn, gives a top-level view of how it all works, provides the current status, and demonstrates examples of JavaScript and Java working together.

    BOF4763 - Meet the Nashorn JavaScript Team - Tuesday, Oct 2, 4:30 PM - 5:15 PM
    Come to this session to meet the Oracle JavaScript (Project Nashorn) language team.

    BOF6661 - Nashorn, Node, and Java Persistence - Tuesday, Oct 2, 5:30 PM - 6:15 PM
    With Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE. In this session, you'll learn about Node.jar and how it can be combined with Java EE components such as EclipseLink JPA for rich Java persistence. You'll also hear about all of Node.jar's mapping, caching, querying, performance, and scaling features.

    CON10657 - The Polyglot Java VM and Java Middleware - Thursday, Oct 4, 12:30 PM - 1:30 PM
    In this session, Red Hat and Oracle discuss the impact of polyglot programming from their own unique perspectives, examining non-Java languages that utilize Oracle's Java HotSpot VM. You'll hear a discussion of topics relating to Ruby, Lisp, and Clojure and the intersection of other languages where they may touch upon individual frameworks and projects, and you'll get perspectives on JavaScript via the Nashorn Project, an upcoming JavaScript engine, developed fully in Java.

    CON5251 - Putting the Metaobject Protocol to Work: Nashorn's Java Bindings - Thursday, Oct 4, 2:00 PM - 3:00 PM
    Project Nashorn is Oracle's new JavaScript runtime in Java 8. Being a JavaScript runtime running on the JVM, it provides integration with the underlying runtime by enabling JavaScript objects to manipulate Java objects, implement Java interfaces, and extend Java classes. Nashorn is invokedynamic-based, and for its Java integration, it does away with the concept of wrapper objects in favor of direct virtual machine linking to Java objects' methods provided by a metaobject protocol, providing much higher performance than what could be expected from a scripting runtime. This session looks at the details of the integration, a topic of interest to other language implementers on the JVM and a wider audience of developers who want to understand how Nashorn works.

    That's 6 sessions tooting the Nashorn this year at JavaOne, up from 2 last year.
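    If you want to kick the tires before the sessions, here is a minimal sketch (mine, not from the conference abstracts) of embedding a JavaScript engine from Java through the standard javax.script API; the engine name "nashorn" is the JDK 8 name, while earlier JDKs resolve "JavaScript" to Rhino instead.

        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class NashornHello {
            public static void main(String[] args) throws ScriptException {
                // Look up the engine by name; returns null if the JDK has no engine of that name.
                ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
                // Bind a Java value into the script scope, then evaluate JavaScript.
                engine.put("who", "JavaOne");
                Object result = engine.eval("'Hello, ' + who"); // comes back as a Java String
                System.out.println(result);
            }
        }

    Run with a JDK 8 java (or feed script files to the bundled jjs tool) and it prints Hello, JavaOne.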

    Read the article

  • Perspective Is Everything

    - by juanlarios
    Sitting in a window seat on my way back from Seattle, I looked out the window and saw a large body of water. I was reminded of childhood memories of running as hard as I could through burning hot sand with the anticipation of the splash of the ocean. Looking out the window, the water appeared like a sheet draped over the land. I couldn't help but ponder how perspective changes everything.

    Over the last several days I had a chance to attend the MVP Summit in Redmond. I had a great time with fellow MVPs and the SharePoint product group. Although I can't say much about what was discussed and what is coming in the future, I want to share some realizations I had while experiencing the MVP Summit.

    The SharePoint product is ever-improving, full of innovation, but also a reactionary embodiment of MVP, client, and market feedback. There are several features that come to mind that clients complain about, where I have felt helpless in informing them that the features are not as mature as they would like. Together, we figure out a way to make it work and deal with the limitations. It became clear that there are features that have taken on a different purpose in the marketplace from the original vision. The SharePoint product group is working hard to react to these changes in vision and make SharePoint better for real-life implementations.

    It is easy to think that SharePoint should be all things to all people. In reality there are products that are very detailed in specific composites; they do this one thing well but severely lack in other areas. It's easy sometimes to say, "What was Microsoft thinking with this feature?" The product group is doing all they can to make the moving pieces better while dealing with the challenge of having all of them work together. Sometimes the features don't fully embody the vision because of the many challenges, but trust me when I say the product group is really focused on delivery and innovation.

    As I was speaking with a fellow MVP during the summit, we spoke about the iPad 2 (ironically announced this past week during the MVP Summit) and Microsoft's possible product answer; I realized the days of reactionary products from Microsoft are over. Many users will remember Vista and the painful execution of that product, but there has been a lot of success in Windows 7. There was no rush for a reactionary answer to the Nintendo Wii; as a result, a groundbreaking and game-changing product was brought to market, the Xbox Kinect! I can't say much here, but it's safe to say: expect innovation, and execution of products and technology that will change the market instead of reacting to it!

    There are many things I learned that I would love to share that have to do with perspective, technology, etc., but this is as far as I can go in details. This might not be new to you or specifically the message that was shared during the summit. These are just my impressions of the event and the spirit of future vision. Great things ahead!

    Read the article

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC. Really, that was all about how to install Solaris on a SPARC system. This week, we'll look at how to do the same thing for an x86-based server. The overall process is exactly the same, at least for Solaris 11, with only minor updates, so we will focus on Solaris 11 for this blog. Once I verify that the same approach works for Solaris 10, I will provide another update.

    Booting Solaris 11 on x86
    Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server. You likely will need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number. The simplest way to find both of these is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot. Go to the ILOM for the server. Reset the server and start the console. When the BIOS loads, select the boot menu, usually with Ctrl-P. This will give you a menu of devices to boot from, including all of the NICs. Select the NIC you want to boot from. Its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4). Pick it and start the boot process. When it starts to boot, you will see the MAC address for the interface. Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case. This associates the additional network interface with the server.

    Creating an OS Provisioning Plan
    The simplest way to do the boot via an alternate interface on an x86 system is to do a manual boot. Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface. Update, in this case, the network boot device to be GB_4/net4, or the device corresponding to your network instance number. Configure the profile to support manual network boot by checking the box for manual boot in the OS provisioning profile.

    Booting the System
    Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console. When the BIOS loads, select boot from the boot menu as above. Select the network interface from the list as before and start the boot process. When the GRUB bootloader loads, the default boot image is the Solaris Text Installer. On the GRUB menu, select Automated Installer, and Ops Center takes over from there.

    Lessons
    The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers, whether they are connected via built-in network interfaces or via high-speed NICs on cards. This is great news for modern datacenters using converged network infrastructures. The process works for both SPARC and x86 Solaris installations. And it's easy and repeatable.

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices

    Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SRs), here are a few pointers that come from our best practice discussions.

    Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you're an EBS customer. Any information you can supply that helps us understand the situation better helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR!

    Be right. Make sure you have the correct severity level. Since you select the initial severity level, it's easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate.

    Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner.

    Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole, and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner.

    Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going in the right direction, at the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes.

    Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience.

    Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we'll be able to do this with your Service Request too.

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software/hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization
    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity
    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics
    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks
    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, Multi-Core Mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the (controversial) conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!)^2 possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data.

    There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why.

    Cheers, Tony.
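    P.S. For the curious, a quick sketch of the counting behind that interleavings figure (standard combinatorics, not part of the editorial's argument):

        % Interleavings of two threads with n instructions each: choose which
        % n of the 2n instruction slots belong to the first thread.
        \[
        \binom{2n}{n} = \frac{(2n)!}{n!\,n!}, \qquad
        \binom{2}{1} = 2, \quad \binom{10}{5} = 252, \quad \binom{20}{10} = 184{,}756.
        \]
        % By Stirling's approximation, \binom{2n}{n} \sim 4^n / \sqrt{\pi n},
        % and k threads of n instructions each give (kn)! / (n!)^k orderings.

    Ten instructions per thread already allow 184,756 distinct orderings, so "an order of magnitude harder" is, if anything, generous.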

    Read the article

  • Passing a parameter using RelayCommand defined in the ViewModel (from Josh Smith example)

    - by eesh
    I would like to pass a parameter defined in the XAML (View) of my application to the ViewModel class by using the RelayCommand. I followed Josh Smith's excellent article on MVVM and have implemented the following.

    XAML code:

        <Button Command="{Binding Path=ACommandWithAParameter}"
                CommandParameter="Orange"
                HorizontalAlignment="Left"
                Style="{DynamicResource SimpleButton}"
                VerticalAlignment="Top"
                Content="Button"/>

    ViewModel code:

        public RelayCommand _aCommandWithAParameter;

        /// <summary>
        /// Returns a command with a parameter
        /// </summary>
        public RelayCommand ACommandWithAParameter
        {
            get
            {
                if (_aCommandWithAParameter == null)
                {
                    _aCommandWithAParameter = new RelayCommand(
                        param => this.CommandWithAParameter("Apple"));
                }
                return _aCommandWithAParameter;
            }
        }

        public void CommandWithAParameter(String aParameter)
        {
            String theParameter = aParameter;
        }

    I set a breakpoint in the CommandWithAParameter method and observed that aParameter was set to "Apple", and not "Orange". This seems obvious, as the method CommandWithAParameter is being called with the literal String "Apple". However, looking up the execution stack, I can see that "Orange", the CommandParameter I set in the XAML, is the parameter value in the RelayCommand implementation of the ICommand Execute interface method. That is, the value of parameter in the method below on the execution stack is "Orange":

        public void Execute(object parameter)
        {
            _execute(parameter);
        }

    What I am trying to figure out is how to create the RelayCommand ACommandWithAParameter property such that it can call the CommandWithAParameter method with the CommandParameter "Orange" defined in the XAML. Is there a way to do this?

    Why do I want to do this? It is part of "on the fly" localization. In my particular implementation I want to create a SetLanguage RelayCommand that can be bound to multiple buttons. I would like to pass the two-character language identifier ("en", "es", "ja", etc.) as the CommandParameter and have that be defined for each "set language" button in the XAML. I want to avoid having to create a SetLanguageToXXX command for each supported language and hard-coding the two-character language identifier into each RelayCommand in the ViewModel.
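    One approach that seems to follow directly from the Execute snippet above (a sketch, assuming Josh Smith's RelayCommand, which hands the XAML CommandParameter straight through to the delegate): forward the lambda's parameter instead of a literal.

        // Sketch: RelayCommand passes the CommandParameter into the delegate,
        // so the lambda can simply forward it instead of hard-coding "Apple".
        public RelayCommand ACommandWithAParameter
        {
            get
            {
                if (_aCommandWithAParameter == null)
                {
                    _aCommandWithAParameter = new RelayCommand(
                        param => this.CommandWithAParameter((string)param)); // param is "Orange" from the XAML
                }
                return _aCommandWithAParameter;
            }
        }

    Bound to the button above, CommandWithAParameter then receives "Orange"; for the localization scenario, every "set language" button can share one SetLanguage command and differ only in its CommandParameter ("en", "es", "ja").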

    Read the article

  • Stack Overflow Problem in DotNetNuke

    - by Vivek
    Hi, I'm getting this error message when I try to access my website. Can someone please tell me what is going on? Thanks. V

    Server Error in '/' Application.
    Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

        [AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.]
           AspDotNetStorefrontExcelWrapper.ExcelToXml.SetLicense() +0
           AspDotNetStorefrontCommon.AppLogic.ApplicationStart() +150
           AspDotNetStorefrontDNNComponents.AppStart..cctor() +103

        [TypeInitializationException: The type initializer for 'AspDotNetStorefrontDNNComponents.AppStart' threw an exception.]
           AspDotNetStorefrontDNNComponents.AppStart.Execute() +0
           AspDotNetStorefront.HttpModules.InitializerModule.System.Web.IHttpModule.Init(HttpApplication context) +42
           System.Web.HttpApplication.InitModulesCommon() +65
           System.Web.HttpApplication.InitModules() +43
           System.Web.HttpApplication.InitInternal(HttpContext context, HttpApplicationState state, MethodInfo[] handlers) +729
           System.Web.HttpApplicationFactory.GetNormalApplicationInstance(HttpContext context) +298
           System.Web.HttpApplicationFactory.GetApplicationInstance(HttpContext context) +107
           System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +289

    Version Information: Microsoft .NET Framework Version:2.0.50727.3082; ASP.NET Version:2.0.50727.3082

    Read the article

  • The HTTP verb POST used to access path '/' is not allowed.

    - by Ryan
    The entire error:

    Server Error in '/' Application.
    The HTTP verb POST used to access path '/' is not allowed.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.Web.HttpException: The HTTP verb POST used to access path '/' is not allowed.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

        [HttpException (0x80004005): The HTTP verb POST used to access path '/' is not allowed.]
           System.Web.DefaultHttpHandler.BeginProcessRequest(HttpContext context, AsyncCallback callback, Object state) +2871966
           System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8679410
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

    To be honest, I'm not even sure where the error came from. I'm running Visual Studio 2008 through the Virtual Server. I just put a button:

        <asp:Button ID="btnRegister" runat="server" Text="Register" CssClass="bt_register" onclick="btnRegister_Click" />

    on a login user control; the onclick event is just a simple Response.Redirect:

        Response.Redirect("~/register.aspx");

    Debugging the project, it isn't even hitting the btnRegister_Click method anyway. I'm not sure where to even begin with debugging this error. Any information will help. I can post all the code I have, but like I said, I'm not sure where this error is even being thrown.

    Edit: It has nothing at all to do with the button click event. I got rid of the method and the onclick parameter on the aspx page. Still coming up with the same error.

    Problem found: Okay, so this is for a school project and it's a group project. Someone in my group thought it would be a good idea to wrap a form tag around this area, telling it to post. Found it doing a diff with a revision on Google Code.

    Read the article

  • The trust relationship between the primary domain and the trusted domain failed. ASP.NET 2.0

    - by Dasupalouie
    Anyone run into this issue? Any help would be appreciated :)

    Server Error in '/CTCWeb' Application.
    The trust relationship between the primary domain and the trusted domain failed.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.SystemException: The trust relationship between the primary domain and the trusted domain failed.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

        [SystemException: The trust relationship between the primary domain and the trusted domain failed.]
           System.Security.Principal.NTAccount.TranslateToSids(IdentityReferenceCollection sourceAccounts, Boolean& someFailed) +1185
           System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean& someFailed) +44
           System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) +47
           System.Security.Principal.WindowsPrincipal.IsInRole(String role) +101
           System.Web.Configuration.AuthorizationRule.IsTheUserInAnyRole(StringCollection roles, IPrincipal principal) +123
           System.Web.Configuration.AuthorizationRule.IsUserAllowed(IPrincipal user, String verb) +256
           System.Web.Configuration.AuthorizationRuleCollection.IsUserAllowed(IPrincipal user, String verb) +199
           System.Web.Security.UrlAuthorizationModule.OnEnter(Object source, EventArgs eventArgs) +8771980
           System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75

    Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3053

    Read the article

  • How to do server-side validation using jqGrid?

    - by Eoghan
    Hi, I'm using jqGrid to display a list of sites, and I want to do some server-side validation when a site is added or edited. (Form editing rather than inline. Validation needs to be server-side for various reasons I won't go into.) I thought the best way would be to check the data via an ajax request when the beforeSubmit event is triggered. However, this only seems to work when I'm editing an existing row in the grid - the function isn't called when I add a new row. Have I got my beforeSubmit in the wrong place? Thanks for your help.

        $("#sites-grid").jqGrid({
            url: '/json/sites',
            datatype: "json",
            mtype: 'GET',
            colNames: ['Code', 'Name', 'Area', 'Cluster', 'Date Live', 'Status', 'Lat', 'Lng'],
            colModel: [
                {name:'code', index:'code', width:80, align:'left', editable:true},
                {name:'name', index:'name', width:250, align:'left', editrules:{required:true}, editable:true},
                {name:'area', index:'area', width:60, align:'left', editable:true},
                {name:'cluster_id', index:'cluster_id', width:80, align:'right', editrules:{required:true, integer:true}, editable:true, edittype:"select", editoptions:{value:"<?php echo $cluster_options; ?>"}},
                {name:'estimated_live_date', index:'estimated_live_date', width:120, align:'left', editable:true, editrules:{required:true}, edittype:"select", editoptions:{value:"<?php echo $this->month_options; ?>"}},
                {name:'status', index:'status', width:80, align:'left', editable:true, edittype:"select", editoptions:{value:"Live:Live;Plan:Plan;"}},
                {name:'lat', index:'lat', width:140, align:'right', editrules:{required:true}, editable:true},
                {name:'lng', index:'lng', width:140, align:'right', editrules:{required:true}, editable:true}
            ],
            height: '300',
            pager: '#pager-sites',
            rowNum: 30,
            rowList: [10, 30, 90],
            sortname: 'cluster_id',
            sortorder: 'desc',
            viewrecords: true,
            multiselect: false,
            caption: 'Sites',
            editurl: '/json/sites'
        });

        $("#sites-grid").jqGrid('navGrid', '#pager-sites', {edit:true, add:true, del:true,
            beforeSubmit: function(postdata, formid) {
                $.ajax({
                    url: 'json/validate-site/',
                    data: postdata,
                    dataType: 'json',
                    type: 'post',
                    success: function(data) {
                        alert(data.message);
                        return [data.result, data.message];
                    }
                });
            }
        });
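    A possible explanation, offered as a sketch rather than a verified fix: navGrid takes its edit and add options as separate objects (the third and fourth arguments), so a beforeSubmit placed in the general nav-options object won't reach the add dialog. Separately, because $.ajax is asynchronous, the return statement inside success never becomes beforeSubmit's return value, so the validation result is silently dropped. Something like the following, using a synchronous request, would address both points (the URL and field names are the poster's own):

        var validateSite = function(postdata, formid) {
            // beforeSubmit must return [success, message] synchronously,
            // so make the validation request blocking.
            var result = [false, "Validation request failed"];
            $.ajax({
                url: 'json/validate-site/',
                data: postdata,
                dataType: 'json',
                type: 'post',
                async: false,
                success: function(data) {
                    result = [data.result, data.message];
                }
            });
            return result;
        };

        // Pass the handler in both the edit options and the add options.
        $("#sites-grid").jqGrid('navGrid', '#pager-sites',
            {edit: true, add: true, del: true},   // nav options
            {beforeSubmit: validateSite},         // edit options
            {beforeSubmit: validateSite});        // add options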

    Read the article

  • asp.net:Invalid temp directory in chart handler configuration [c:\TempImageFiles\].

    - by veda
    I am getting this error while running my code:

    Invalid temp directory in chart handler configuration [c:\TempImageFiles\].

    Initially I was getting a "No http handler was found for request type 'GET'" error, which I solved by referring to "no http handler". But now I am getting the above error. The details of the error are:

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.IO.DirectoryNotFoundException: Invalid temp directory in chart handler configuration [c:\TempImageFiles\].

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    The stack trace of this error:

        [DirectoryNotFoundException: Invalid temp directory in chart handler configuration [c:\TempImageFiles\].]
           System.Web.UI.DataVisualization.Charting.ChartHttpHandlerSettings.Inspect() +851
           System.Web.UI.DataVisualization.Charting.ChartHttpHandlerSettings.ParseParams(String parameters) +1759
           System.Web.UI.DataVisualization.Charting.ChartHttpHandlerSettings..ctor(String parameters) +619
           System.Web.UI.DataVisualization.Charting.ChartHttpHandler.InitializeParameters() +237
           System.Web.UI.DataVisualization.Charting.ChartHttpHandler.EnsureInitialized(Boolean hardCheck) +208
           System.Web.UI.DataVisualization.Charting.ChartHttpHandler.EnsureInstalled() +33
           System.Web.UI.DataVisualization.Charting.Chart.GetImageStorageMode() +57
           System.Web.UI.DataVisualization.Charting.Chart.Render(HtmlTextWriter writer) +257
           System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +144
           System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +583
           System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +91
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +410
           System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +118
           System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +489
           System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +84
           System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +713
           System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +144
           System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +583
           System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +91
           System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +91
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +410
           System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +118
           System.Web.UI.Control.Render(HtmlTextWriter writer) +60
           System.Web.UI.Page.Render(HtmlTextWriter writer) +66
           System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +144
           System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +583
           System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +91
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +7761

    Can anyone tell me how to solve this problem? Should I have to create the temporary directory manually, or what should I do?
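    One hedged suggestion, not part of the original question: the ASP.NET Chart control reads its temp-image location from the ChartImageHandler key in web.config appSettings, so this error usually means the configured directory does not exist or is not writable. Either create c:\TempImageFiles\ and grant the application pool identity write access, or point the handler at a directory that exists. A sketch of the setting (the dir value here is illustrative):

        <configuration>
          <appSettings>
            <!-- storage=file renders chart images to disk; dir must exist and be
                 writable by the app pool identity. storage=memory avoids the
                 directory requirement entirely. -->
            <add key="ChartImageHandler"
                 value="storage=file;timeout=20;dir=c:\TempImageFiles\;" />
          </appSettings>
        </configuration>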

    Read the article

  • Why am I getting: InvalidOperationException: Failed to map the path '/app42/App_Code/'.

    - by serialhobbyist
    I've been working on a little Silverlight utility which calls a Silverlight web service. It works from my dev machine (XP SP2). I've tried publishing it to a 2008 R2 IIS 7.5 server, and it doesn't work when trying to contact the web service. So I've tried using the WcfTestClient to connect to the web service. That gave an error. So I turned off customErrors, used IE, and I get the following. Any idea why? There's no App_Code folder in the app.

    Failed to map the path '/app42/App_Code/'.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.InvalidOperationException: Failed to map the path '/app42/App_Code/'.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

        [InvalidOperationException: Failed to map the path '/app42/App_Code/'.]
           System.Web.Configuration.ProcessHostConfigUtils.MapPathActual(String siteName, VirtualPath path) +320
           System.Web.Configuration.ProcessHostServerConfig.System.Web.Configuration.IServerConfig.MapPath(IApplicationHost appHost, VirtualPath path) +34
           System.Web.Hosting.MapPathBasedVirtualPathEnumerator..ctor(VirtualPath virtualPath, RequestedEntryType requestedEntryType) +169
           System.Web.Hosting.MapPathBasedVirtualPathCollection.System.Collections.IEnumerable.GetEnumerator() +43
           System.Web.Compilation.CodeDirectoryCompiler.ProcessDirectoryRecursive(VirtualDirectory vdir, Boolean topLevel) +147
           System.Web.Compilation.CodeDirectoryCompiler.GetCodeDirectoryAssembly(VirtualPath virtualDir, CodeDirectoryType dirType, String assemblyName, StringSet excludedSubdirectories, Boolean isDirectoryAllowed) +11196502
           System.Web.Compilation.BuildManager.CompileCodeDirectory(VirtualPath virtualDir, CodeDirectoryType dirType, String assemblyName, StringSet excludedSubdirectories) +185
           System.Web.Compilation.BuildManager.CompileCodeDirectories() +654
           System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +658

        [HttpException (0x80004005): Failed to map the path '/app42/App_Code/'.]
           System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +76
           System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +1012
           System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +1025

        [HttpException (0x80004005): Failed to map the path '/app42/App_Code/'.]
           System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +11301302
           System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +88
           System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +4338644

    Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927

    Read the article

  • How to use SMO.Scripter to generate a "full-script" of DB?

    - by ssg
    What I'm trying to do is a very simple task: I'd like to create a script that generates a database along with its tables, SPs and UDFs. This is done with a couple of clicks in the SSMS interface. However, db.Script() only scripts CREATE DATABASE. OK, so I iterate over the objects one by one and script them individually. Now what I have is an arbitrary order of CREATEs that naturally fails during execution because dependent objects aren't created first. OK, so I set the WithDependencies flag so dependent objects ARE scripted first. However, this produces redundant CREATE scripts for objects that have already been scripted, and causes around 20x growth in SQL file size and generation time, not to mention the errors hit during execution. I don't know if there is a way to mark objects as "already walked in the dependency tree"; it doesn't seem likely. I might be missing a bigger picture somewhere, but MSDN recommends Scripter to generate scripts like the one I want. I had used the Transfer class before to transfer table definitions, but it fails to create a failsafe script, and it doesn't make sense to use a Transfer object to generate a script anyway. I want to do this the way it should be done, without losing my faith in SMO.
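    One way to get each object scripted exactly once, in dependency order, is to leave WithDependencies off and flatten the dependency tree yourself with DependencyWalker. A minimal sketch, with placeholder server and database names (the SMO types and calls are the real API, but treat the details as an outline rather than a finished tool):

        using System;
        using System.Collections.Generic;
        using System.Collections.Specialized;
        using Microsoft.SqlServer.Management.Smo;

        class ScriptDatabaseOnce
        {
            static void Main()
            {
                Server server = new Server("localhost");        // placeholder server
                Database db = server.Databases["MyDatabase"];   // placeholder database

                Scripter scripter = new Scripter(server);
                scripter.Options.WithDependencies = false;      // we order the objects ourselves
                scripter.Options.DriAll = true;                 // include constraints

                // Gather the objects to script; extend with UDFs, views, etc.
                List<SqlSmoObject> objects = new List<SqlSmoObject>();
                foreach (Table t in db.Tables)
                    if (!t.IsSystemObject) objects.Add(t);
                foreach (StoredProcedure sp in db.StoredProcedures)
                    if (!sp.IsSystemObject) objects.Add(sp);

                // Discover dependencies once, then flatten the tree into an ordered
                // list: each object appears once, parents before dependents.
                DependencyTree tree = scripter.DiscoverDependencies(objects.ToArray(), true);
                DependencyCollection ordered = new DependencyWalker(server).WalkDependencies(tree);

                foreach (DependencyCollectionNode node in ordered)
                {
                    SqlSmoObject obj = server.GetSmoObject(node.Urn);
                    StringCollection script = scripter.Script(new SqlSmoObject[] { obj });
                    foreach (string line in script)
                        Console.WriteLine(line + Environment.NewLine + "GO");
                }
            }
        }

    The point of the design is that DiscoverDependencies is run once over the whole set, so the walker's ordered list replaces the per-object WithDependencies expansion and the duplicates disappear.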

    Read the article

  • properties-maven-plugin: Error loading properties-file

    - by yournamehere
    I want to extract all the properties from my pom.xml into a properties file: common properties like dependency versions, plugin versions and directories. I'm using the properties-maven-plugin, but it's not working the way I want it to. The essential part of my pom.xml:
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>properties-maven-plugin</artifactId>
          <version>1.0-alpha-1</version>
          <executions>
            <execution>
              <phase>initialize</phase>
              <goals>
                <goal>read-project-properties</goal>
              </goals>
              <configuration>
                <files>
                  <file>${basedir}/pom.properties</file>
                </files>
              </configuration>
            </execution>
          </executions>
        </plugin>
    Now when I run "mvn properties:read-project-properties" I get the following error: [INFO] One or more required plugin parameters are invalid/missing for 'properties:read-project-properties' [0] Inside the definition for plugin 'properties-maven-plugin' specify the following: <configuration> ... <files>VALUE</files> </configuration>. The pom.properties file is located in the same directory as the pom.xml. What can I do to get the properties-maven-plugin to read my properties file?
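    For what it's worth, one likely explanation: a <configuration> nested inside an <execution> only applies when that execution runs as part of the lifecycle (here, the initialize phase); a direct command-line invocation of properties:read-project-properties does not see it. Moving the configuration up to the plugin level should make both paths work. A sketch:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>properties-maven-plugin</artifactId>
          <version>1.0-alpha-1</version>
          <!-- Plugin-level configuration is also picked up by direct CLI
               invocations such as "mvn properties:read-project-properties". -->
          <configuration>
            <files>
              <file>${basedir}/pom.properties</file>
            </files>
          </configuration>
          <executions>
            <execution>
              <phase>initialize</phase>
              <goals>
                <goal>read-project-properties</goal>
              </goals>
            </execution>
          </executions>
        </plugin>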

    Read the article

  • How do I use PerformanceCounterType AverageTimer32?

    - by Patrick J Collins
    I'm trying to measure the time it takes to execute a piece of code on my production server, and I'd like to monitor this information in real time, so I decided to give Performance Analyser a whizz. I understand from MSDN that I need to create both an AverageTimer32 counter and an AverageBase counter, which I duly have. I increment the counters in my program, and I can see CallCount go up and down, but AverageTime is always zero. What am I doing wrong? Thanks! Here's a snippet of the code:
        long init_call_time = Environment.TickCount;
        // ***
        // Lots and lots of code...
        // ***
        // Count number of calls
        PerformanceCounter perf = new PerformanceCounter("Cat", "CallCount", "Instance", false);
        perf.Increment();
        perf.Close();
        // Count execution time
        PerformanceCounter perf2 = new PerformanceCounter("Cat", "CallTime", "Instance", false);
        perf2.NextValue();
        perf2.IncrementBy(Environment.TickCount - init_call_time);
        perf2.Close();
        // Average base for execution time
        PerformanceCounter perf3 = new PerformanceCounter("Cat", "CallTimeBase", "Instance", false);
        perf3.Increment();
        perf3.Close();
        perf2.NextValue();
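    For comparison, here is a sketch of the usual AverageTimer32 pattern: the timer counter is incremented by elapsed Stopwatch ticks (the high-resolution units the counter type divides by the performance-counter frequency) and the base counter by one per operation. Category, counter, and instance names are taken from the question; everything else is an assumption:

        using System;
        using System.Diagnostics;

        class AverageTimerSample
        {
            static void Main()
            {
                // Created writable once and kept alive, rather than
                // re-created and closed on every call.
                var callTime = new PerformanceCounter("Cat", "CallTime", "Instance", false);
                var callTimeBase = new PerformanceCounter("Cat", "CallTimeBase", "Instance", false);

                var sw = Stopwatch.StartNew();
                // ... the code being measured ...
                sw.Stop();

                // AverageTimer32 expects Stopwatch ticks, not
                // Environment.TickCount milliseconds.
                callTime.IncrementBy(sw.ElapsedTicks);
                callTimeBase.Increment();   // one operation completed
            }
        }

    If that guess is right, the zero average comes from feeding the counter millisecond deltas instead of ticks; the perf2.NextValue() calls in the snippet are also unnecessary, since reading the value is the monitoring tool's job.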

    Read the article

  • Imagemagick PDF to JPG conversion failing

    - by sbressler
    I'm trying to convert the first page of a PDF to a JPG. I'm pretty sure I got this to work with certain PDFs, but is it really possible that certain PDFs are made incorrectly and cannot be converted? I tried running this first:
        $ convert 10-03-26.pdf[1] test.jpg
    And I got the following:
        Error: /syntaxerror in readxref
        Operand stack:
        Execution stack:
           %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval--
        Dictionary stack:
           --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)--
        Current allocation mode is local
        ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1
        convert: Postscript delegate failed `10-03-26.pdf'.
    Running this instead:
        $ convert -verbose -colorspace rgb '10-03-26.pdf[1]' test.jpg
    I get the following:
        Error: /syntaxerror in readxref
        Operand stack:
        Execution stack:
           %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval--
        Dictionary stack:
           --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)--
        Current allocation mode is local
        ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1
        "gs" -q -dBATCH -dSAFER -dMaxBitmap=500000000 -dNOPAUSE -dAlignToPixels=0 "-sDEVICE=pnmraw" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-g792x1611" "-r72x72" -dFirstPage=2 -dLastPage=2 "-sOutputFile=/tmp/magick-XXU3T44P" "-f/tmp/magick-XXoMKL8Z" "-f/tmp/magic2eec1F"
        Start of Image
        Define Huffman Table 0x00
          0 1 5 1 1 1 1 1 1 0 0 0 0 0 0 0
        Define Huffman Table 0x01
          0 3 1 1 1 1 1 1 1 1 1 0 0 0 0 0
        Define Huffman Table 0x10
          0 2 1 3 3 2 4 3 5 5 4 4 0 0 1 125
        Define Huffman Table 0x11
          0 2 1 2 4 4 3 4 7 5 4 4 0 1 2 119
        End Of Image
        convert: Postscript delegate failed `10-03-26.pdf'.
    Why would the conversion fail? Thanks!
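    Two hedged observations: page indices in convert's bracket syntax are zero-based, so 10-03-26.pdf[1] asks for the second page (the -dFirstPage=2 in the verbose output confirms it); and ESP Ghostscript 7.07.1 is very old, while /syntaxerror in readxref points to a damaged cross-reference table that newer Ghostscript releases can usually repair on the fly. Something along these lines may work (the resolution value is a placeholder):

        $ convert '10-03-26.pdf[0]' test.jpg      # [0] selects the first page
        $ gs -q -dNOPAUSE -dBATCH -sDEVICE=jpeg -r150 \
             -dFirstPage=1 -dLastPage=1 \
             -sOutputFile=test.jpg 10-03-26.pdf   # bypass convert with a recent gs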

    Read the article

  • Roadmap to Android development

    - by Matthew
    Hello, I've done a little research and am interested in developing for Android. I've never programmed before, and have no idea how to go from zero experience to developing for a mobile device. My interest is in eventually making some sort of 2D game. Is there a lesson plan for starting from the ground up? I would think one would need to learn the Java language first, but looking at the Sun website, it's a bit daunting. Is there a book, specifically, that would wrap up this knowledge in a directed lesson plan? I'm not sure whether OpenGL ES is required for 2D games. I've done a little research on this, and it's even more daunting than Java itself; I can't even begin to figure out where to start with plain OpenGL, never mind the ES variant. My best guess is that I need more Java knowledge before continuing, but even so, is it possible to learn OpenGL concurrently with Java?

    Read the article

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using a WAMP server on Windows XP: Apache 2.2.11, MySQL 5.1.36 (InnoDB engine), PHP 5.3.0. I observe that my WAMP server crashes in the following scenarios: 1) if I use a low-end PC (low processor speed and low RAM); 2) after making changes to the httpd.conf file, e.g. changing the Allow from IP address, though here it crashes only once and then works fine; 3) random crashes.
    Crash log:
        szAppName : httpd.exe szAppVer : 2.2.11.0 szModName : php5ts.dll szModVer : 5.3.0.0 offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt
    My questions: 1) Can high CPU utilization or low RAM also cause the HTTP server to crash? 2) Can excessive file reading, say every 10 seconds? 3) Can unlimited script execution time? I have set the maximum execution time in my PHP script to 0, as the script sometimes has to run for 2-3 days; is there any way to avoid this? 4) Database access: should we take a lock before reading and writing? Can these be the reasons for the random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun
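    On question 3, a hedged suggestion: a job that runs for 2-3 days is usually better launched from the PHP CLI, which has no execution time limit by default, than inside an Apache request, where it ties up a worker for its whole lifetime. A sketch, with the script name and memory limit as placeholders:

        <?php
        // long_task.php: run with "php -f long_task.php" instead of through Apache.
        set_time_limit(0);                // removes the cap; the CLI default is already 0
        ini_set('memory_limit', '512M');  // placeholder limit for a long-running job
        // ... the 2-3 day workload ...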

    Read the article

  • Bamboo to Build Specific SVN Revision

    - by Anton Gogolev
    Hi! Imagine there's a project in Bamboo with two build plans: Staging Deployment (SD) and Production Deployment (PD). Building SD checks out the latest sources, builds them, and deploys the web site to a staging server. Currently, PD does exactly the same, namely deploying the latest version of the web site to the production server. Clearly, this is not very good: I want to deploy the exact same version of the web site that was previously deployed to the staging server, not the latest one. To illustrate: suppose we're at r101 in the SVN repo. Clicking "Build SD" will deploy a web site version, say 2.1.0.101, to the staging server. Now we commit a breaking change and end up at r102. Now I want to deploy to the production server. If I hit "Build PD", Bamboo will happily check out r102 and build it, resulting in version 2.1.0.102 being deployed to production. What I want it to do, however, is build and deploy the version that was previously built by the SD plan (that is, 2.1.0.101). Of course I could make the SD plan tag the latest successful build as tags/builds/latest, but I would rather have Bamboo itself handle that.
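    Short of Bamboo handling it natively, the tagging workaround can at least be automated as a final command of the SD plan, with the PD plan then checking out the tag instead of trunk. A sketch with placeholder repository URLs, revision, and version number:

        $ svn copy -r 101 https://svn.example.com/repo/trunk \
              https://svn.example.com/repo/tags/builds/2.1.0.101 \
              -m "Tag build 2.1.0.101 for production deployment"

    Some Bamboo versions also offer a "run customised build" option that can pin a build to a specific revision; whether that applies depends on the Bamboo version in use.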

    Read the article
