Search Results

Search found 1882 results on 76 pages for 'recurring maintenance'.

  • The Application Architecture Domain

    - by Michael Glas
    I have been spending a lot of time thinking about Application Architecture in the context of EA. More specifically, as an Enterprise Architect, what do I need to consider when looking at, defining, and designing the Application Architecture domain?

    There are several definitions of Application Architecture. TOGAF says "The objective here [in Application Architecture] is to define the major kinds of application system necessary to process the data and support the business". FEA says the Application Architecture "Defines the applications needed to manage the data and support the business functions". I agree with these definitions; they reflect what the Application Architecture domain does. However, they need to be decomposed to be practical.

    I find it useful to define a set of views into the Application Architecture domain. These views reflect what an EA needs to consider when working with or in the Application Architecture domain. These viewpoints are, at a high level:

    Capability View: This view reflects how applications align with business capabilities. It is a superset of the following views when viewed in aggregate. By looking at the Application Architecture domain in terms of the business capabilities it supports, you get a good perspective on how those applications are directly supporting the business.

    Technology View: The technology view reflects the underlying technology that makes up the applications. Based on the number of rationalization activities I have seen (more specifically application rationalization), the phrase "complexity equals cost" drives the importance of the technology view, especially when attempting to reduce that complexity through standardization-type activities. Some of the technology components to be considered are:
      - Software: The application itself, as well as the software the application relies on to function (web servers, application servers).
      - Infrastructure: The underlying hardware and network components required by the application and supporting application software.
      - Development: How the application is created and maintained. This encompasses development components that are part of the application itself (i.e. customizable functions), as well as bolt-on development through web services, APIs, etc. The maintenance process itself also falls under this view.
      - Integration: The interfaces the application provides for integration, as well as the integrations to other applications and data sources that the application requires to function.
      - Type: Reflects the kind of application (mash-up, 3-tiered, etc.). (Note: functional types [CRM, HCM, etc.] are reflected under the capability view.)

    Organization View: Organizations are made up of people, and those people use applications to do their jobs. Trying to define the application architecture domain without taking into consideration the organization that will use, fund, and change it is like trying to design a car without thinking about who will drive it (i.e. you may end up building a Formula 1 car for a family of 5 that is really looking for a minivan). This view reflects the people aspect of the application. It includes:
      - Ownership: Who "owns" the application? This will usually reflect primary funding and utilization, but not always.
      - Funding: Who funds both the acquisition/creation and the ongoing maintenance (funding to create/change/operate)?
      - Change: Who can/does request changes to the application, and what process do they follow?
      - Utilization: Who uses the application, how often do they use it, and how do they use it?
      - Support: Which organization is responsible for the ongoing support of the application?

    Information View: Whether or not you subscribe to the view that "information drives the enterprise", it is a fact that information is critical. The management, creation, and organization of that information are primary functions of enterprise applications. This view reflects how the applications are tied to information (or, at a higher level, how the Application Architecture domain relates to the Information Architecture domain). It includes:
      - Access: The application is the mechanism by which end users access information. This could be through a primary application (e.g. a CRM application) or through an information-access type of application (a BI application, for example).
      - Creation: Applications create data in order to provide information to end users (e.g. an application creates an order to be used by an end user as part of the fulfillment process).
      - Consumption: Describes the data required by applications to function (e.g. a product id is required by a purchasing application to create an order).

    Application Service View: Organizations today are striving to be more agile. As an EA, I need to provide an architecture that supports this agility. One of the primary ways to achieve the required agility in the application architecture domain is through the use of "services" (think SOA, web services, etc.). Whether it is through building applications from the ground up utilizing services, service-enabling an existing application, or buying applications that are already "service enabled", compartmentalizing application functions for re-use helps enable flexibility in the use of those applications in support of the required business agility. The application service view consists of:
      - Services: Here I refer to the generic definition of a service: "a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage".
      - Functions: The activities within an application that are not available or applicable for re-use. This view is helpful when identifying duplicate functions between applications that are not service enabled.

    Delivery Model View: It is hard to talk about EA today without hearing the terms "cloud" or "shared services". Organizations are looking at the ways their applications are delivered for several reasons: to reduce cost (both CAPEX and OPEX), to improve agility (time to market, for example), and so on. From an EA perspective, where and how an application is deployed has impacts on the overall enterprise architecture, from integration concerns to SLA requirements to security and compliance issues, so the Enterprise Architect needs to factor in how applications are delivered when designing the Enterprise Architecture. This view reflects how applications are delivered to end users, and it consists of the different delivery mechanisms/deployment options for applications:
      - Traditional: Reflects non-cloud delivery options. The most prevalent consists of an application running on dedicated hardware (usually specific to an environment) for a single consumer.
      - Private Cloud: The application runs on infrastructure provisioned for exclusive use by a single organization comprising multiple consumers.
      - Public Cloud: The application runs on infrastructure provisioned for open use by the general public.
      - Hybrid: The application is deployed on two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.

    While by no means comprehensive, I find that applying these views to the application domain gives a good understanding of what an EA needs to consider when effecting changes to the Application Architecture domain.

    Finally, the application architecture domain is one of several architecture domains that an EA must consider when developing an overall Enterprise Architecture. The Oracle Enterprise Architecture Framework defines four primary domains: Business Architecture, Application Architecture, Information Architecture, and Technology Architecture. Each domain links to the others either directly or indirectly at some point. Oracle links them at a high level as follows: business capabilities and/or business processes (Business Architecture) link to the applications that enable the capability/process (Application Architecture - COTS, custom), which link to the information assets managed/maintained by the applications (Information Architecture), which link to the technology infrastructure upon which all this runs (Technology Architecture - integration, security, BI/DW, DB infrastructure, deployment model). There are, however, times when the EA needs to narrow focus to a particular domain for some period of time. These views help me do just that.

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about. After analyzing the differences in time between a Dictionary with locking and the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations. Then I made the classic blunder of thinking that because the original Dictionary with locking was faster for write-heavy uses, it was the best choice for those types of tasks. In short, I fell into the premature-optimization anti-pattern.

    Basically, the premature-optimization anti-pattern is when a developer codes very early for a perceived (whether rightly or wrongly) performance gain, sacrificing good design and maintainability in the process. At best, the performance gains are usually negligible; at worst, they can either negatively impact performance or degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity.

    Keep in mind the distinction above. I'm not talking about valid performance decisions. There are decisions one should make when designing and writing an application that are valid performance decisions. Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performant algorithms (linear search vs. binary search). But these, in my mind, are macro optimizations. The error is not in deciding to use a better data structure or algorithm; the anti-pattern, as stated above, is when you attempt to over-optimize early on in such a way that it sacrifices maintainability.

    In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking. This would have been a mistake, as I would be trading away maintainability (ConcurrentDictionary requires no locking, which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified, and you don't risk a developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.

    I think in my case, and it may be true for others as well, a large part of it was due to the time when I was trained as a developer. I began college in the 90s, when C and C++ were king and hardware speed and memory were still relatively priceless commodities not to be squandered. In those days, using a long instead of a short could waste precious resources, and as such we were taught to try to minimize space and favor performance. This is why in many cases such early code bases were very hard to maintain. I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire at the company where I work declare that she didn't want to refactor a long method because of function call overhead. Now, back then that may have been a valid concern, but with today's modern hardware, even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway), the results of removing method calls to speed up performance are negligible for the great majority of applications. Now, obviously, there are those applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.

    But I would strongly advise against such optimization because of its cost. Many folks who perform an optimization think it's always a win-win: they're simply adding speed to the application, so what could possibly be wrong with that? What they don't realize is the cost of their choice. For every piece of straightforward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application. It will become so fragile over time that maintenance will become a nightmare. I've seen such applications in places I have worked. There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance. Unfortunately, application stability often suffers as a result, and it is very difficult for anyone other than the original designer to maintain.

    I've even seen this recently, when I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation). To me this was almost a joke. Twice as slow sounds bad, but it's almost never as bad as you think -- especially if you're gaining safety. The only time twice is really that much slower is when once was too slow to begin with. Think about it: 2 minutes is slow as a response time because 1 minute is slow. But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial! The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations). To my mind, the added safety makes the extra time worth it.

    Always favor safety and maintainability when you can. I know it can be a hard habit to break, especially if you started your career early or in a language such as C where people are very performance conscious. But in reality, these types of micro-optimizations only end up hurting you in the long run.

    Remember the two laws of optimization. I'm not sure where I first heard these, but they are so true: For beginners: do not optimize. For experts: do not optimize yet.

    This is so true. If you're a beginner, resist the urge to optimize at all costs. And if you are an expert, delay that decision. As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient. Chances are it will be network, database, or disk hits that will be your slowdown, not your code. As they say, 98% of your code's bottleneck is in 2% of your code, so premature optimization may add maintenance and safety debt that won't have any measurable impact. Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, go back and optimize further.
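
    To make the trade-off concrete, here is a small hedged C# sketch (names and workload illustrative, not a benchmark): the locked Dictionary spreads a correctness obligation across every call site, while the ConcurrentDictionary version states its intent in one line.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        class Counters
        {
            // Option 1: plain Dictionary - every access must remember the lock.
            private readonly Dictionary<string, int> _locked = new Dictionary<string, int>();
            private readonly object _sync = new object();

            public void IncrementLocked(string key)
            {
                lock (_sync)   // forget this once, anywhere, and state corrupts
                {
                    int value;
                    _locked.TryGetValue(key, out value);
                    _locked[key] = value + 1;
                }
            }

            // Option 2: ConcurrentDictionary - the intent reads directly, and it
            // stays safe to enumerate even while other threads are writing.
            private readonly ConcurrentDictionary<string, int> _concurrent =
                new ConcurrentDictionary<string, int>();

            public void IncrementConcurrent(string key)
            {
                _concurrent.AddOrUpdate(key, 1, (k, v) => v + 1);
            }
        }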

    Read the article

  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate.

    A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server. Unfortunately, we don't have a production DBA whose entire job is to monitor and maintain our SQL Servers. The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession. I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance. Like many DBAs, I wrote several of my own tools and used the "built-in tools" like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?). These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to "prove" that SQL Server performance was acceptable and, more importantly, had not degraded recently (i.e. historical comparisons). I really didn't have anything from a historical-comparison perspective, but I was able to show that current performance was acceptable and deflect attention back onto other components (which in fact turned out to be the real culprit).

    That experience dramatically illustrated the need for better monitoring tools. Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server. Among other tools, I had been using Idera's SQL Job Manager, which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view. This worked fairly well, and for the money (did I mention it was free?) it couldn't be beat. But it is based on the stored job history in MSDB, and there were other performance problems we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view. Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter, and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry's Event Manager. At the time, I just thought he worked there, but I later found out that he founded the company. When I took a quick look at the features and benefits, the one that really jumped out at me was Chaining and Queueing, which sounded like it would really help with our "interdependent jobs on different servers" issue.

    I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor. And I must say that I really like what I see so far. Here are a few highlights:

      - Great Support. I had two issues getting the trial set up and monitoring a handful of our servers. One was entirely my fault (I missed a security setting in SQL 2008) and the other was mostly my fault (a late change to some config settings that were apparently cached and did not get refreshed properly). In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out the cause and fix for each. This left me with a great impression of the company. Kudos to them!
      - Chaining and Queueing. While I have not yet activated this feature, I am very excited about the possibilities. We have jobs on three different instances of SQL Server that have to run in a certain order, each finishing before the next can successfully begin, and I believe this feature will ensure just that. It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures or, worse, successful completion of completely invalid processing.
      - Calendar View. I really, really like the Event Manager calendar view, where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity. Very well done, and based on Event Manager's own database of accumulated historical information rather than querying the source instances every time.
      - Performance Advisor Dashboard History View. This view lets me quickly select a date and time range, and it displays graphs of key SQL Server and Windows metrics. This is exactly the thing I needed to answer the "has performance changed recently" question at the beginning of this post.
      - Reporting Services Subscription Jobs with Report Name. This was a big and VERY pleasant surprise. If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job. This is really ugly and really annoying, because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you have no indication in the name or the properties of the Job itself as to which Report it was for. But with SQL Sentry Event Manager you do. The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the Report that the Subscription Job is for. And when you open it to see more details, it shows you the full Reporting Services path to that Report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the Subscription information. I did not expect this at all, but I sure do like it. HOORAY!

    That is just my first impression from using the tools for a few days. And I haven't even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations. I'll share that lesson in another blog entry. But I have to say it again: the combination of Event Manager and Performance Advisor working together has really made me a fan.

    Read the article

  • CodeIgniter and PayPal: How it works

    - by Abs
    Hello all, two random questions as I try to integrate PayPal IPN into my CodeIgniter-based web app.

    1) Are these two lines the same?

        $data['pp_info'] = $this->input->post();
        $data['pp_info'] = $_POST;

    2) A user agrees to pay a monthly recurring fee to use your service using PayPal. You know they have made the first payment because PayPal returns data to you. But how do you keep track of whether the user has paid for the following months? How do you know the user has not cancelled from their PayPal account?

    Thanks all for any help.
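
    On question 2, a hedged IPN-listener sketch (the endpoint wiring and DB updates are hypothetical): PayPal POSTs one IPN message per subscription event, so each month's payment and any cancellation arrive as separate notifications you record against the user.

        <?php
        // On question 1: $this->input->post() is CodeIgniter's wrapper over
        // $_POST (with XSS filtering applied when enabled, and behavior that
        // varies by CI version when called with no arguments), so the two
        // lines are only strictly equivalent with filtering off.
        $post = $_POST;

        // 1. Echo the message back with cmd=_notify-validate so PayPal can
        //    confirm it is genuine.
        $ch = curl_init('https://www.paypal.com/cgi-bin/webscr');
        curl_setopt_array($ch, array(
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => 'cmd=_notify-validate&' . http_build_query($post),
            CURLOPT_RETURNTRANSFER => true,
        ));
        $verdict = curl_exec($ch);
        curl_close($ch);

        // 2. Record the event against the subscriber.
        if ($verdict === 'VERIFIED') {
            switch ($post['txn_type']) {
                case 'subscr_payment':  // this month's payment cleared
                    // mark the user paid-through one more month in your DB
                    break;
                case 'subscr_cancel':   // user cancelled inside PayPal
                case 'subscr_eot':      // subscription reached its end of term
                    // flag the account so access lapses at the period end
                    break;
            }
        }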

    Read the article

  • How to create "recurData" in Google Calendar in C#.NET?

    - by Pari
    Hi, I want to create recurring Calendar events using the Google API. I am following this link: Google Calendar API. I do not understand how to create "recurData"; I can't just modify a string and pass it as a parameter. I also tried DDay.iCal version 0.80 (DDay.iCal). There is some example code given, and I tried it. I am able to create an ".ics" file, but when I pass this file's content as "recurData" I get this error:

        {"Execution of request failed: http://www.google.com/calendar/feeds/[email protected]/private/full?gsessionid=AHItK5wrSIoJVawFjGt-0g"}

    My .ics file content (made using "Example6") is:

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//DDay.iCal//NONSGML ddaysoftware.com//EN
        BEGIN:VEVENT
        CREATED:20100309T132930Z
        DESCRIPTION:The event description
        DTEND:20100310T020000
        DTSTAMP:20100309T132930Z
        DTSTART:20100309T080000
        LOCATION:Event location
        SEQUENCE:0
        SUMMARY:18 hour event summary
        UID:396c6b22-277f-4496-bbe1-d3692dc1b223
        END:VEVENT
        BEGIN:VEVENT
        CREATED:20100309T132930Z
        DTEND;VALUE=DATE:20100315
        DTSTAMP:20100309T132930Z
        DTSTART;VALUE=DATE:20100314
        SEQUENCE:0
        SUMMARY:All-day event
        UID:ac25cdaf-4e95-49ad-a770-f04f3afc1a2f
        END:VEVENT
        END:VCALENDAR
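
    A hedged C# sketch follows (it assumes the GData .NET client library; the feed URL and credentials are placeholders). The likely culprit: for this API, "recurData" is not a complete .ics file but just the bare iCal properties (DTSTART/DTEND/RRULE) with no BEGIN:VCALENDAR / BEGIN:VEVENT wrappers, so posting DDay.iCal's full VCALENDAR output fails.

        using System;
        using Google.GData.Calendar;
        using Google.GData.Extensions;

        class RecurDemo
        {
            static void Main()
            {
                CalendarService service = new CalendarService("recur-demo");
                service.setUserCredentials("user@example.com", "password");

                EventEntry entry = new EventEntry();
                entry.Title.Text = "Weekly recurring event";

                // Bare iCal properties only - no VCALENDAR/VEVENT wrappers.
                Recurrence recur = new Recurrence();
                recur.Value =
                    "DTSTART;VALUE=DATE:20100501\r\n" +
                    "DTEND;VALUE=DATE:20100502\r\n" +
                    "RRULE:FREQ=WEEKLY;BYDAY=SA;UNTIL=20100801\r\n";
                entry.Recurrence = recur;

                Uri feed = new Uri("http://www.google.com/calendar/feeds/default/private/full");
                service.Insert(feed, entry);
            }
        }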

    Read the article

  • Hashes vs Numeric IDs

    - by Karan Bhangui
    When creating a web application that displays a unique identifier for a recurring entity (videos on YouTube, or book sections on a site like mine), would it be better to use a uniform-length identifier like a hash, or the unique key of the item in the database (1, 2, 3, etc.)? Besides revealing a little (and, I think, immaterial) information about the internals of your app, why would using a hash be better than just using the unique id? In short: which is better to use as a publicly displayed unique identifier - a hash value, or a unique key from the database?

    Edit: I'm opening up this question again because Dmitriy brought up the good point of not tying the naming down to a DB-specific property. Will this sort of tie-down prevent me from optimizing/normalizing the database in the future? The platform uses PHP/Python with ISAM and MySQL.
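
    A hedged sketch of one middle road (the secret and scheme are hypothetical): derive a stable, non-enumerable public identifier from the numeric key and store it in its own indexed column, so URLs can't be walked by counting and the primary key stays free to change under later normalization.

        import hashlib
        import hmac

        SECRET = b"site-wide-secret"  # hypothetical; keep out of source control

        def public_id(db_id, length=11):
            # Stable, keyed digest of the internal id; publish only this.
            digest = hmac.new(SECRET, str(db_id).encode(), hashlib.sha256).hexdigest()
            return digest[:length]

        # e.g. store public_id(42) in an indexed column next to the primary
        # key and look rows up by it; published links survive re-keying.
        print(public_id(42))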

    Read the article

  • Python/Django: Which authorize.net library should I use?

    - by Parand
    I need Authorize.Net integration for subscription payments, likely using CIM. The requirements are simple: recurring monthly payments, with a few different price points. Customer credit card info will be stored at Authorize.Net. There are quite a few libraries and code snippets around; I'm looking for recommendations as to which work best. Satchmo seems like more than I need, and it looks complex. Django-Bursar seems like what I need, but it's listed as alpha. The adroll/authorize library also looks pretty good. The CIM XML APIs don't look too bad either - I could connect directly with them (see the sketch below). And there are quite a few other code snippets. What's the best choice right now, given my fairly simple requirements?
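
    For scale, a hedged sketch of what "connecting directly" involves - an ARB (Automated Recurring Billing) subscription request, Authorize.Net's recurring-payments XML API, POSTed to https://api.authorize.net/xml/v1/request.api (credential and card values illustrative):

        <?xml version="1.0" encoding="utf-8"?>
        <ARBCreateSubscriptionRequest xmlns="AnetApi/xml/v1/schema/AnetApiSchema.xsd">
          <merchantAuthentication>
            <name>YOUR_API_LOGIN_ID</name>
            <transactionKey>YOUR_TRANSACTION_KEY</transactionKey>
          </merchantAuthentication>
          <subscription>
            <name>Monthly plan</name>
            <paymentSchedule>
              <interval>
                <length>1</length>
                <unit>months</unit>
              </interval>
              <startDate>2010-06-01</startDate>
              <totalOccurrences>9999</totalOccurrences> <!-- 9999 = no end date -->
            </paymentSchedule>
            <amount>9.99</amount>
            <payment>
              <creditCard>
                <cardNumber>4111111111111111</cardNumber>
                <expirationDate>2015-12</expirationDate>
              </creditCard>
            </payment>
          </subscription>
        </ARBCreateSubscriptionRequest>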

    Read the article

  • Ad-hoc data processing / ETL

    - by Dane
    I've just started at a new company in outsourced communications (e.g. print and mail, email, fax). One of the requirements is to process client data and get it ready for mailing. For recurring jobs this is easy using an ETL tool linked in with some addressing software, but for ad-hoc work that's a bit of overkill. I've used in-house developed tools before (clunky but usable), but I don't want to have to re-develop that here. Any recommendations? Some desired features:

      - Basic DBMS functionality (preferably with a proper DBMS backend for SQL support)
      - Field concatenation (e.g. combine Firstname + Surname)
      - "Pushing columns" (e.g. with address fields 1-8, push them left so that if one is blank, the next one gets pushed up; see the sketch below)
      - Australia Post mail sorting and DPID allocation (or the ability to link into external tools relatively easily)
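
    A quick sketch of the "pushing columns" rule mentioned above (field names hypothetical): drop blank address lines and shift the remainder left so the address fields never contain gaps.

        def push_columns(row, fields):
            """Shift non-blank values left across fields, blanking the tail."""
            values = [row[f] for f in fields if str(row[f]).strip()]
            for i, f in enumerate(fields):
                row[f] = values[i] if i < len(values) else ""
            return row

        record = {"address_1": "Unit 4", "address_2": "", "address_3": "12 High St",
                  "address_4": "Sydney NSW", "address_5": "", "address_6": "",
                  "address_7": "", "address_8": ""}
        fields = ["address_%d" % i for i in range(1, 9)]
        push_columns(record, fields)
        # address_1='Unit 4', address_2='12 High St', address_3='Sydney NSW',
        # address_4..address_8 are now blank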

    Read the article

  • NVP request - CreateRecurringPaymentsProfile

    - by jiwanje.mp
    I'm facing a problem with the trial period and the first month's payment. My requirement is that users can sign up with a trial period that gives new users a 30-day free trial. This means they will not be charged the monthly price until after the first 30 days, when the regular amount will be charged; the next billing date should therefore be one month after the profile start date. But the next billing date and the profile start date show as the same when I query with GetRecurringPaymentProfile. Please help me: how can I send the recurring billing payment request for this functionality? Thanks in advance, jiwan
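
    A hedged sketch of the trial-related NVP fields for a 30-day free trial (values illustrative): the free month is modeled as one trial cycle at amount 0, so the first regular charge - and therefore the next billing date - lands one period after the profile start.

        METHOD=CreateRecurringPaymentsProfile
        PROFILESTARTDATE=2010-06-01T00:00:00Z   # the signup date
        TRIALBILLINGPERIOD=Month
        TRIALBILLINGFREQUENCY=1
        TRIALTOTALBILLINGCYCLES=1
        TRIALAMT=0                              # free 30-day trial, nothing charged
        BILLINGPERIOD=Month
        BILLINGFREQUENCY=1
        AMT=9.99                                # regular monthly price thereafter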

    Read the article

  • Why am I getting this WSDL SOAP error with authorize.net?

    - by Chad Johnson
    I have my script email me when there is a problem creating a recurring transaction with authorize.net. I received the following at 5:23 AM Pacific time:

        SOAP-ERROR: Parsing WSDL: Couldn't load from 'https://api.authorize.net/soap/v1/service.asmx?wsdl' : failed to load external entity "https://api.authorize.net/soap/v1/service.asmx?wsdl"

    And of course, when I did exactly the same thing that the user did, it worked fine for me. Does this mean authorize.net's API is down? Their knowledge base simply sucks and provides no information whatsoever about this problem. I've contacted the company, but I'm not holding my breath for a response. Google reveals nothing. Looking through their code, nothing stands out. Maybe an authentication error? Has anyone seen an error like this before? What causes this?
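
    A hedged sketch of one pragmatic guard (PHP's SoapClient; the retry count is illustrative): "failed to load external entity" means the WSDL fetch itself failed - a DNS/network/SSL hiccup or a brief outage at their end, not an authentication or payment problem - so a short retry before alerting filters out the transient cases.

        <?php
        $wsdl = 'https://api.authorize.net/soap/v1/service.asmx?wsdl';
        $client = null;
        for ($attempt = 1; $attempt <= 3; $attempt++) {
            try {
                $client = new SoapClient($wsdl, array('exceptions' => true));
                break;                 // WSDL loaded; proceed with the call
            } catch (SoapFault $e) {
                if ($attempt === 3) {
                    throw $e;          // still failing - escalate/email as before
                }
                sleep(2);              // transient failure - wait and retry
            }
        }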

    Read the article

  • Drupal: Content in blocks from node_reference fields?

    - by Marco
    After only a few weeks of working with Drupal I've come up against a recurring problem to which I don't really have an optimal solution, so I'm hoping that someone here can give some best-practice pointers. What I have is a region inside my node.tpl.php which is populated with blocks that display content from two different CCK fields of the type node_reference. This works fine when displaying a single node. The problem appears when I need to use a view. For example, let's say I have a news listing and a single-news-item view. When I display the single news item, I can use the news node's node_reference field to reference whatever material I would like to have in my sidebar, but on the news listing view I would like to reference nodes separately. What would be the best practice to solve this? I have a few ideas, but none seems like the logical choice. How would you do it?

    Read the article

  • Selecting multiple columns and rows for formatting - Excel

    - by Joyce
    I have a report in which I used the Subtotals command. Aesthetically, I just want to make the subtotal rows (columns A to P) filled with color, bold, and surrounded by a border. There are hundreds of subtotals generated in my report, and they do not appear at a recurring row position. So basically, in order for it to look good, I format each row manually. Is there a faster way? Thanks!
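
    A faster route is a short macro. Below is a hedged VBA sketch; it assumes the Subtotals command left its "... Total" / "Grand Total" labels in column A, which is where Excel normally puts them for data grouped on the first column.

        Sub FormatSubtotalRows()
            Dim ws As Worksheet
            Dim r As Long
            Set ws = ActiveSheet
            ' Restyle every row whose first cell carries a subtotal label.
            For r = 1 To ws.UsedRange.Rows.Count
                If ws.Cells(r, 1).Value Like "*Total*" Then
                    With ws.Range(ws.Cells(r, 1), ws.Cells(r, 16)) ' columns A to P
                        .Interior.Color = RGB(219, 229, 241)       ' light fill
                        .Font.Bold = True
                        .BorderAround Weight:=xlThin               ' surrounding border
                    End With
                End If
            Next r
        End Sub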

    Read the article

  • Redundant code in exception handling

    - by Nicola Leoni
    Hi, I have a recurring problem: I can't find an elegant way to avoid duplicating resource-cleanup code. After the resource allocation:

        try {
            f();
        } catch (...) {
            // resource cleaning code
            throw;
        }
        // resource cleaning code (duplicated)
        return rc;

    I know I can write a temporary class that does the cleanup in its destructor, but I don't really like that, because it breaks the code flow and I would need to give the class references to all the stack variables to clean up (a function has the same problem). I can't figure out why no elegant solution to this recurring problem exists.
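
    For what it's worth, the "temporary class" objection softens if the class is written once, generically, and captures the cleanup as a callable. A minimal scope-guard sketch, assuming C++11 lambdas (the resource and the failing f() are stand-ins):

        #include <cstdio>
        #include <cstdlib>
        #include <functional>
        #include <stdexcept>

        // The destructor runs the cleanup exactly once on every exit path -
        // normal return or exception - so the cleanup is written in one place.
        class ScopeGuard {
        public:
            explicit ScopeGuard(std::function<void()> cleanup) : cleanup_(cleanup) {}
            ~ScopeGuard() { cleanup_(); }
            ScopeGuard(const ScopeGuard&) = delete;
            ScopeGuard& operator=(const ScopeGuard&) = delete;
        private:
            std::function<void()> cleanup_;
        };

        int f() { throw std::runtime_error("f failed"); }  // hypothetical callee

        int g() {
            void* resource = std::malloc(64);              // stand-in allocation
            ScopeGuard guard([resource] { std::free(resource); });
            int rc = f();   // may throw; the guard still frees the resource
            return rc;      // normal path; the guard frees here too
        }

        int main() {
            try { g(); } catch (const std::exception& e) { std::puts(e.what()); }
        }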

    Read the article

  • What tech stack/platform to use for a project?

    - by danny z
    Hey guys, this is a bit of a weird meta-programming question, but I've realized that my new project doesn't need a full MVC framework, and being a Rails guy, I'm not sure what to use now. To give you the gist of the necessary functionality: this website will display static pages, but users will be able to log in and edit their current plans. All purchasing and credit card editing is being handled by a recurring-payments provider; I just need a page where users can edit their current plan. All of that will be done through (dynamic) XML API calls, so no database is necessary. Should I stick with my typical Rails/nginx stack, or is there something I could use that would lighten the load, since I don't need the Rails heft? I'm familiar with Python and PHP but would prefer not to go that route. Is Sinatra a good choice here? tl;dr: What's a good way to quickly serve mostly static pages, preferably in Ruby, with some pages requiring dynamic XML rendering?
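
    For the tl;dr crowd, a minimal Sinatra sketch of that shape (routes, view names, and the provider call are hypothetical): files dropped in ./public are served as static files with no Ruby in the path, and only the plan page does real work.

        require 'sinatra'

        get '/' do
          erb :index                    # near-static page rendered from a template
        end

        get '/plan' do
          content_type 'application/xml'
          # call the recurring-payments provider's XML API here and return
          # the rendered result:
          "<plan><name>basic</name><price>10.00</price></plan>"
        end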

    Read the article

  • Solving a SQL Server Deadlock situation

    - by mjh41
    I am trying to find a solution that will resolve a recurring deadlock situation in SQL Server. I have done some analysis on the deadlock graph generated by the profiler trace and have come up with this information:

    The first process (spid 58) is running this query:

        UPDATE cds.dbo.task_core
        SET nstate = 1
        WHERE nmboxid = 89
          AND ndrawerid = 1
          AND nobjectid IN (SELECT nobjectid
                            FROM (SELECT nobjectid, count(nobjectid) AS counting
                                  FROM cds.dbo.task_core
                                  GROUP BY nobjectid) task_groups
                            WHERE task_groups.counting > 1)

    The second process (spid 86) is running this query:

        INSERT INTO task_core (…) VALUES (…)

    spid 58 is waiting for a shared (S) page lock on CDS.dbo.task_core (spid 86 holds a conflicting intent exclusive (IX) lock). spid 86 is waiting for an intent exclusive (IX) page lock on CDS.dbo.task_core (spid 58 holds a conflicting update (U) lock).
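
    One hedged mitigation sketch, not a guaranteed fix (validating against the real workload, or enabling snapshot isolation, are the other usual angles): give the UPDATE's read path update locks up front so its shared page locks never have to coexist with the INSERT's IX lock, and index nobjectid so the duplicate check does not scan the whole table.

        -- Index assumed not to exist yet; name is illustrative.
        CREATE INDEX IX_task_core_nobjectid ON cds.dbo.task_core (nobjectid);

        UPDATE tc
        SET    nstate = 1
        FROM   cds.dbo.task_core AS tc WITH (UPDLOCK, ROWLOCK)
        WHERE  tc.nmboxid = 89
          AND  tc.ndrawerid = 1
          AND  tc.nobjectid IN (SELECT nobjectid
                                FROM   cds.dbo.task_core
                                GROUP  BY nobjectid
                                HAVING COUNT(nobjectid) > 1);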

    Read the article

  • Paypal subscription trial extra charge?

    - by DucDigital
    I tried to implement PayPal Pro for my site, which lets the user enter their info and charges $1 for the trial and $10 for the recurring fee. But when I check my merchant account, the $1 and the $10 show up as separate orders within one day (it charged the $10, which I don't want).

        PROFILEID=I%2d0xxxxxx1HCKEF
        &PROFILESTATUS=PendingProfile
        &TRANSACTIONID=0NP43842KS810000T
        &TIMESTAMP=2010%2d05%2d16T18%3a56%3a55Z
        &CORRELATIONID=89adac79d0d6
        &ACK=Success
        &VERSION=57%2e0
        &BUILD=1298200
        &METHOD=CreateRecurringPaymentsProfile
        &VERSION=57.0
        &PWD=1274sss7
        &USER=sand_12sdsad7629_biz_api1.dital.com
        &SIGNATURE=IacdATZe5XHmKJs1n2w3uWMRDWyaOGDb
        &PAYMENTACTION=Sale
        &AMT=10
        &CREDITCARDTYPE=Visa
        &ACCT=4804270925925835
        &EXPDATE=052015
        &CVV2=243
        &FIRSTNAME=
        &LASTNAME=
        &STREET=223232323
        &CITY=3232
        &STATE=IA
        &ZIP=5452
        &COUNTRYCODE=US
        &CURRENCYCODE=USD
        &BILLINGPERIOD=Month
        &BILLINGFREQUENCY=1
        &PROFILESTARTDATE=2010-05-6+02%3A56%3A57
        &INITAMT=10
        &FAILEDINITAMTACTION=ContinueOnFailure
        &DESC=Recurring+%2410
        &AUTOBILLAMT=AddToNextBilling
        &PROFILEREFERENCE=Anonymous
        &TRIALBILLINGPERIOD=Day
        &TRIALBILLINGFREQUENCY=5
        &TRIALAMT=1
        &TRIALTOTALBILLINGCYCLES=1
        &SALUTE=Mr.
        &EMAIL=dsads%40dsads.com

    Was there anything wrong with this query string?
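
    A hedged reading of the request above: INITAMT=10 asks for a one-time up-front payment in addition to the $1 trial cycle, which would explain the two separate charges in the merchant account. A sketch of the trial-only variant (other fields unchanged):

        TRIALAMT=1                    # $1 charged for the 5-day trial cycle
        TRIALBILLINGPERIOD=Day
        TRIALBILLINGFREQUENCY=5
        TRIALTOTALBILLINGCYCLES=1
        AMT=10                        # $10 recurring once the trial ends
        # INITAMT omitted - include it only when an extra immediate charge is intended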

    Read the article

  • "No database selected" even when db clearly selected

    - by Someone
    One of my web pages gets a recurring error, "No database selected", even though the DB is selected. Right now it's a 50-50 chance whether the page will load just fine or whether I receive this error; after one or two reloads, the page works again. I am including the exact same connection file on my other pages, and I don't have this problem there. What could be the cause of this? I'm using Ensim Pro for web hosting. TIA.
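
    A hedged sketch of one defensive fix (credentials hypothetical): on shared hosts, intermittent "No database selected" is often mysql_pconnect() handing back a pooled connection on which another script last selected a different database, so use a plain connection and select the database explicitly, with a check, on every request.

        <?php
        // Plain connection instead of mysql_pconnect(), plus an explicit,
        // checked database selection on every request.
        $link = mysql_connect('localhost', 'db_user', 'db_pass');
        if (!$link || !mysql_select_db('my_database', $link)) {
            die('DB error: ' . mysql_error());
        }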

    Read the article

  • What is the Simplest Possible Payment Gateway to Implement? (using Django)

    - by b14ck
    I'm developing a web application that will require users either to make one-time deposits of money into their account, or to sign up for recurring billing each month for a certain amount of money. I've been looking at various payment gateways, but most (if not all) of them seem complex and difficult to get working. I also see no really active Django projects offering simple views for making payments. Ideally, I'd like to use something like Amazon FPS, so that I can see online transaction logs, refund money, etc., but I'm open to other things. I just want the EASIEST possible payment gateway to integrate with my site. I'm not looking for anything fancy; whatever does the job and requires < 10 hours to get working from start to finish would be perfect. I'll give answer points to whoever can point out a good one. Thanks!
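
    For reference, a hedged sketch of roughly what the "easy" end looks like with the django-paypal package, if you go the PayPal Standard + IPN route (package assumed installed and configured; all field values illustrative). One redirect form out, one IPN listener back, and PayPal hosts all the card handling.

        from django.shortcuts import render
        from paypal.standard.forms import PayPalPaymentsForm

        def deposit(request):
            paypal_dict = {
                "business": "merchant-email-placeholder",  # your PayPal account email
                "amount": "25.00",                         # one-time deposit amount
                "item_name": "Account deposit",
                "invoice": "deposit-12345",                # hypothetical unique id
                "notify_url": "https://example.com/paypal/ipn/",
                "return_url": "https://example.com/deposit/done/",
                "cancel_return": "https://example.com/deposit/cancelled/",
            }
            form = PayPalPaymentsForm(initial=paypal_dict)
            # the template renders the redirect button with {{ form.render }}
            return render(request, "deposit.html", {"form": form})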

    Read the article

  • Creating a simple WCF service, publishing it to my web hotel, and getting it to work

    - by H4mm3rHead
    Hi, this seems to be a recurring problem for me. I want to get started doing WCF services. I create a new WCF Service Library, compile it, and publish it using FTP to my provider's web hotel. But it's not working; I somehow can't get access. I don't want some fancy security model - I just want to get a hole through to my simple web service. It seems that publishing it to my web hotel (in a subdomain) is what breaks the web service - it works perfectly when started locally. How should I proceed? Anyone?
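
    Two things commonly bite in this setup, sketched below with hypothetical service and contract names: a WCF Service Library has no host of its own (locally, Visual Studio's test host runs it), so on the web hotel it needs a .svc file pointing at the library type; and basicHttpBinding with security left at its default of "None" is the binding least likely to be blocked on shared hosting.

        <!-- MyService.svc (in the web root):
             <%@ ServiceHost Service="MyLib.MyService" %> -->

        <!-- web.config fragment; names are placeholders -->
        <system.serviceModel>
          <services>
            <service name="MyLib.MyService">
              <endpoint address="" binding="basicHttpBinding" contract="MyLib.IMyService" />
            </service>
          </services>
        </system.serviceModel>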

    Read the article

  • IE downloads CAB and shows install dialog upon every page refresh

    I have a signed CAB on an ASPX page, and I am seeing the following inconsistent behavior. Any insights would be highly appreciated. On some machines, the CAB is downloaded and installed on every page refresh. On a few of those machines, the IE "install CAB" dialog pops up on every page refresh, while on the others it pops up only once. Additional info:
      - The CAB contains a .NET DLL.
      - The CAB is fairly large (around 30 MB), so the recurring download behavior is a pain.
      - Target browsers are IE6 and IE7, and the behavior is common to both!
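
    One hedged angle to check, sketched below with an illustrative CLSID and version: IE decides whether to re-fetch a CAB by comparing the #Version in CODEBASE against the version the installed component registers, so a component that does not report a matching version makes IE treat it as perpetually out of date and re-download on every visit.

        <!-- CLSID and version numbers are placeholders; the installed
             component must register the same version as the #Version tag,
             or IE will keep fetching the 30 MB CAB. -->
        <object classid="clsid:00000000-0000-0000-0000-000000000000"
                codebase="MyControl.cab#Version=1,0,0,5">
        </object>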

    Read the article

  • Can I use PayPal to charge a Credit Card automatically?

    - by Mark
    If I have a Visa card number saved in my database, is there a way I can charge that Visa automatically through the PayPal API without the user having to enter anything? We want to keep this site as easy and hassle-free to use as possible. It would be a variable amount, based on how they use the site. (Don't worry, proper disclaimers will be in place, and the user will be notified) What about these "recurring payments"? That way I don't have to store the CC info on my website, but do they allow variable amounts that I could periodically send to PayPal?
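
    For the variable-amount case, a hedged sketch of the relevant NVP call - PayPal's reference transactions, which must be enabled on the account (values illustrative). A billing agreement created during the user's first checkout yields an ID you can charge later for arbitrary amounts, with no card data stored on your side.

        METHOD=DoReferenceTransaction
        REFERENCEID=B-1XX11111XX1111111   # billing agreement ID from the initial checkout
        PAYMENTACTION=Sale
        AMT=12.34                         # variable amount computed from the user's usage
        CURRENCYCODE=USD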

    Read the article

  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
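
    A minimal sketch of the resulting pattern (Django 1.x era; the job body is a stand-in). A hand-spawned thread gets its own DB connection, and none of the request middleware that normally commits and closes ever runs for it, so the task has to do both itself.

        import threading

        from django import db
        from django.db import transaction

        @transaction.commit_manually
        def task():
            try:
                pass                    # the recurring job's real work goes here
                transaction.commit()    # finish the transaction explicitly...
            except Exception:
                transaction.rollback()  # ...or roll it back on failure
                raise
            finally:
                db.connection.close()   # and close this thread's own connection

        threading.Thread(target=task).start()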

    Read the article

  • Conditional SELECT MySQL

    - by user188693
    I don't know if this is possible, but I'd like to select records based on the field value of recur_type, where 'm' is the day of the week. If it's a weekly recurring event, I need to make sure this is a day it recurs on; otherwise, I want to return all days. However, I'm getting an empty result set:

        SELECT *
        FROM wp_fun_bec_events
        WHERE start_date <= '2009-10-12'
          AND (end_date >= '2009-10-12'
               OR (recur_end > '0' AND recur_end >= '2009-10-12'))
          AND ('m' IN (CASE WHEN 'recur_type' = 'weekly'
                            THEN recur_days
                            ELSE 's/m/t/w/r/f/a' END))
        ORDER BY start_date, start_time

    Any ideas?
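
    A hedged sketch of the likely fix: in the query above, 'recur_type' is quoted as a string literal, so the CASE compares the constant text 'recur_type' with 'weekly' (never true) instead of the column's value. Referencing the column directly - and assuming recur_days holds a day list like 's/m/t/w/r/f/a' - the same intent reads:

        SELECT *
        FROM wp_fun_bec_events
        WHERE start_date <= '2009-10-12'
          AND (end_date >= '2009-10-12'
               OR (recur_end > '0' AND recur_end >= '2009-10-12'))
          AND (recur_type <> 'weekly'        -- non-weekly events match every day
               OR recur_days LIKE '%m%')     -- weekly events must recur on 'm'
        ORDER BY start_date, start_time;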

    Read the article
