Search Results

Search found 4866 results on 195 pages for 'crm failure'.


  • Run command on init and restart on errors

    - by chersanya
    I access the internet on my PC through an SSH proxy, so every time I need to execute ssh -L PORT:SERVER:PORT LOGIN@SERVER and then type a password. After each network failure or reconnect, this command has to be executed again. I've grown tired of this and am looking for a way to do it automatically: first, run the command after boot (that doesn't seem to be a problem: put the command in some init file and that's all), and then rerun it (and, if possible, retype the password) on each network failure. Is this possible, and how? OS: Linux (Debian)
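    One common approach (not from the original question, so treat it as a sketch): switch to key-based authentication so no password prompt is needed, then let autossh restart the tunnel whenever the link drops. PORT, SERVER and LOGIN are placeholders:

        # one-time setup: create a key pair and install it on the server
        ssh-keygen -t rsa
        ssh-copy-id LOGIN@SERVER

        # keep the tunnel alive; autossh reconnects when the connection dies
        autossh -M 0 -N \
            -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
            -L PORT:SERVER:PORT LOGIN@SERVER

    Without autossh, a plain shell loop gives much the same effect:

        while true; do
            ssh -N -o ServerAliveInterval=30 -L PORT:SERVER:PORT LOGIN@SERVER
            sleep 5   # brief pause before reconnecting after a failure
        done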


  • How to perform a nested mount when using chroot?

    - by user55542
    Note that this question is prompted by the circumstances detailed by me (as Xl1NntniNH7F) in http://www.linuxquestions.org/questions/linux-desktop-74/boot-failure-upon-updating-e2fsprogs-in-ubuntu-10-10-a-947328/. If you could address the underlying cause of the boot failure, I would very much appreciate it. I'm trying to replicate the environment of my Ubuntu installation (where the home folder is on a separate partition) in order to run make uninstall, and I'm using a live CD. How do I mount a directory on one partition (sda2, mounted in Ubuntu as the home folder) into a directory on another mounted partition (sda3)? I did chroot /mnt/sda2, but I don't know how to mount sda3 to /home, and my various attempts didn't work. As I am unfamiliar with chroot, my approach could be wrong, so it would be great if you could suggest what I should do, given my circumstances.
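    The usual pattern (a sketch, not from the original question) is to assemble the nested mounts from outside, before chrooting, since the chrooted shell no longer sees /mnt. Matching the question's layout, with sda2 as the chroot root and sda3 to appear at its /home (the post is ambiguous about which device plays which role, so these assignments are assumptions):

        sudo mount /dev/sda3 /mnt/sda2/home    # nest sda3 inside the sda2 tree
        sudo chroot /mnt/sda2 /bin/bash

    If sda3 is already mounted somewhere else, a bind mount achieves the same nesting:

        sudo mount --bind /mnt/sda3 /mnt/sda2/home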


  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I have written about transaction logs and recovery on my blog, and simplifying the basics is a challenge. In the real world we always see checks and queues for a process (say railway reservations, banks, customer support, etc.); there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system. Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously. There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic Concurrency
    Default behavior: acquire locks to block access to data that another process is using. Assumes that enough data modification operations are in the system that any given read operation is likely to be affected by a data modification made by another user (i.e., assumes conflicts will occur). Avoids conflicts by acquiring a lock on data being read, so no other process can modify that data. Also acquires locks on data being modified, so no other process can access that data for either reading or modifying. Readers block writers; writers block readers and writers.

    Optimistic Concurrency
    Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs. Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data. Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows. Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction Processing
    A transaction is the basic unit of work in SQL Server. A transaction consists of SQL commands that read and update the database, but the updates are not considered final until a COMMIT command is issued (at least for an explicit transaction: the start is marked with BEGIN TRAN and the end with COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties.

    ACID Properties
    Transaction processing must guarantee the consistency and recoverability of SQL Server databases, ensuring that all transactions are performed as a single unit of work regardless of hardware or system failure. A - Atomicity, C - Consistency, I - Isolation, D - Durability.

    Atomicity: each transaction is treated as all or nothing; it either commits or aborts.
    Consistency: ensures that a transaction won't allow the system to arrive at an incorrect logical state; the data must always be logically correct. Consistency is honored even in the event of a system failure.
    Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies
    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors, known as dependency or consistency problems:

    Lost Updates: occur when two processes read the same data, both manipulate the data, changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.
    Dirty Reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    Non-repeatable Reads: a read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads.
    Phantoms: occur when membership in a set changes, i.e., when two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels
    SQL Server supports 5 isolation levels that control the behavior of read operations.

    Read Uncommitted
    All the behaviors above except lost updates are possible. Implemented by allowing read operations to take no locks, so they won't be blocked by conflicting locks acquired by other processes; the process can read data that another process has modified but not yet committed. When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan has reached. Performing an allocation order scan under read uncommitted can also cause you to miss a row completely, when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed
    There are two varieties of read committed isolation: optimistic and pessimistic (the default). Read committed ensures that a read never reads data that another application hasn't committed. If another transaction is updating data and holds exclusive locks on it, your transaction will have to wait for those locks to be released. Your transaction must put share locks on data that it visits, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but it prevents them from updating. Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait: SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous version of the data as it was before the update operation began.

    Repeatable Read
    This is a pessimistic isolation level. It ensures that if a transaction revisits data, or a query is reissued, the data doesn't change: issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. It does, however, allow phantom rows to appear. Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot
    Snapshot isolation (SI) is an optimistic isolation level. It allows processes to read older versions of committed data if the current version is locked. The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be. It is possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable
    This is the strongest of the pessimistic isolation levels. It adds to repeatable read by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear. Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read: all shared locks in a transaction must be held until the transaction completes. In addition, the serializable isolation level requires that you lock not only data that has been read, but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock. The level gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now, if you are with me, let us continue learning about SQL Server locking basics.
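    Before moving on, the contrast between the pessimistic default and row versioning is easy to see with two sessions. A minimal sketch (the table, column and database names are hypothetical, and snapshot isolation must first be enabled on the database):

        -- Session 1: change a row but do not commit yet
        BEGIN TRAN;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

        -- Session 2: READ UNCOMMITTED returns the dirty, uncommitted value
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

        -- Session 2: SNAPSHOT returns the last committed value instead
        -- (requires: ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;)
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;
        COMMIT;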
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency


  • ArchBeat Link-o-Rama for November 28, 2012

    - by Bob Rhubart
    Oracle BPM and Oracle Application Development Framework (ADF) | Dan Atwood
    Oracle ACE Dan Atwood shares an excerpt from "Oracle BPM and ADF (Part 1)," part of Avio Consulting's new self-paced online Oracle BPM Developer Workshop training.

    BPEL and Fire-and-Forget Web Services | Lonneke Dikmans
    Oracle ACE Director Lonneke Dikmans shares two use cases to illustrate the use of fire-and-forget web services.

    Backup and Recovery of an Exalogic vServer via rsync | Donald
    "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thought about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer."

    Making Architecture Matter | Harald Wesenberg and Einar Landre
    "As Architects, we want our architecture to matter. We want projects to implement our grand designs, one little step at a time, with each piece fitting perfectly into the big puzzle that is software architecture," say authors Harald Wesenberg and Einar Landre. "But reality is a bit trickier."

    Thought for the Day
    "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." — Leslie Lamport
    Source: SoftwareQuotes.com


  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012.

    One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer ("run applications in guest domains in almost all cases"), but now there are more things to consider. Enhancements to Oracle VM Server for SPARC, and the introduction of systems like the current SPARC servers (including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32), provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity and memory sizes are much larger now, and far more demanding applications are being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases" (meaning "use virtual I/O rather than physical I/O") unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with its own role:

    Control domain - the management control point for the server; it runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per physical domain (PDom) on an M5-32 or M6-32 system (M5 and M6 systems can be physically domained, with logical domains within the physical ones).

    I/O domain - a domain that has been assigned physical I/O devices. The devices may be: one or more PCIe root complexes, in which case the domain is also called a root complex domain and has native access to all the devices on the assigned PCIe buses (the devices can be any device type supported by Solaris on the hardware platform); an SR-IOV (Single Root I/O Virtualization) function, where SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs) which can be individually assigned directly to domains (SR-IOV devices currently can be Ethernet or InfiniBand devices); or direct I/O ownership of one or more PCI devices residing in a PCIe bus slot, giving the domain direct access to those individual devices. An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.

    Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. A service domain is usually an I/O domain as well, so that it has devices to virtualize and serve out.

    Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

    Device considerations

    Consider the following when choosing between virtual devices and physical devices. Virtual devices provide the best flexibility: they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit. Virtual devices are also compatible with live migration: domains that exclusively have virtual devices can be live migrated between servers supporting domains. On the other hand, physical devices provide the best performance, in fact native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster. Physical I/O devices do not add load to service domains: all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided with sufficient CPU and memory capacity. Physical I/O devices can also be other than network and disk: we virtualize network, disk, and serial console, but physical devices can be any of the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.

    In some cases the lines are now blurred. Virtual devices perform better than before: starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, or the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just as with virtual devices.

    Typical deployment

    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain, in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the sole I/O and service domain, plus some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure: guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or of an I/O path or device does not result in an application outage. This also permits "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by a single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains. An example configuration follows below.
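    For concreteness, a minimal guest-domain setup might look like the following sketch, run from the control domain. The domain name, device paths and sizes are illustrative only, not taken from the post:

        # virtual disk service and virtual switch in the control/service domain
        ldm add-vds primary-vds0 primary
        ldm add-vsw net-dev=net0 primary-vsw0 primary

        # create a guest domain with virtual CPU, memory, disk, and network
        ldm add-domain ldg1
        ldm add-vcpu 8 ldg1
        ldm set-memory 8G ldg1
        ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0
        ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
        ldm add-vnet vnet1 primary-vsw0 ldg1

        ldm bind ldg1
        ldm start ldg1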
    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously: in those engineered systems, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.

    However, there are some limitations and considerations. At this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.

    To recap:

    - Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain.
    - I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses (for example, 16 buses on the T5-8), making it possible to configure multiple I/O domains, each owning its own bus.
    - Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect the domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.
    - The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it is not a general-purpose environment, but an engineered system with specifically configured applications, optimized for performance.

    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

    Summary

    Higher-capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.


  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has posted some ugly code to refactor on his blog. First and foremost, it is nearly impossible to reason about other people's code without knowing the driving forces behind it. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is, but I do not know every brittle detail, or whether I am allowed to reorder things here and there to simplify them. So I decided to make the code much simpler by identifying the different responsibilities of the method and encapsulating them in different classes.

    The code we need to refactor seems to deal with a handler that runs after a message has been sent to a message queue. The handler completes the current transaction, if there is any, and handles any errors that happen there. If errors occur during the completion of the transaction, the transaction is at least disposed. We can also enter the handler already in a faulty state, in which case we try to deliver the completion event anyway, signal a failure event, and try to resend the message to the queue if it was not inside a transaction. All of this is decorated with many try/catch blocks, duplicated code, and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: this code is a mess, and could have been written by me if I was under pressure. Here is the code we want to refactor:

        private void HandleMessageCompletion(
            Message message,
            TransactionScope tx,
            OpenedQueue messageQueue,
            Exception exception,
            Action<CurrentMessageInformation, Exception> messageCompleted,
            Action<CurrentMessageInformation> beforeTransactionCommit)
        {
            var txDisposed = false;
            if (exception == null)
            {
                try
                {
                    if (tx != null)
                    {
                        if (beforeTransactionCommit != null)
                            beforeTransactionCommit(currentMessageInformation);
                        tx.Complete();
                        tx.Dispose();
                        txDisposed = true;
                    }
                    try
                    {
                        if (messageCompleted != null)
                            messageCompleted(currentMessageInformation, exception);
                    }
                    catch (Exception e)
                    {
                        Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
                    }
                    return;
                }
                catch (Exception e)
                {
                    Trace.TraceWarning("Failed to complete transaction, moving to error mode" + e);
                    exception = e;
                }
            }
            try
            {
                if (txDisposed == false && tx != null)
                {
                    Trace.TraceWarning("Disposing transaction in error mode");
                    tx.Dispose();
                }
            }
            catch (Exception e)
            {
                Trace.TraceWarning("Failed to dispose of transaction in error mode." + e);
            }
            if (message == null)
                return;
            try
            {
                if (messageCompleted != null)
                    messageCompleted(currentMessageInformation, exception);
            }
            catch (Exception e)
            {
                Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
            }
            try
            {
                var copy = MessageProcessingFailure;
                if (copy != null)
                    copy(currentMessageInformation, exception);
            }
            catch (Exception moduleException)
            {
                Trace.TraceError("Module failed to process message failure: " + exception.Message + moduleException);
            }
            if (messageQueue.IsTransactional == false) // put the item back in the queue
            {
                messageQueue.Send(message);
            }
        }

    You can see quite some processing and handling going on there. Yes, this looks like real-world code that was put together to make things work, by someone who does not trust his callbacks. I guess these are event handlers, which are optional, and the delegates were extracted from an event so they could be called back later when necessary. Let's see what the author of this code intended:

        private void HandleMessageCompletion(
            TransactionHandler transactionHandler,
            MessageCompletionHandler handler,
            CurrentMessageInformation messageInfo,
            ErrorCollector errors)
        {
            // commit current pending transaction
            transactionHandler.CallHandlerAndCommit(messageInfo, errors);

            // We have an error for a null message: do not send the completion event
            if (messageInfo.CurrentMessage == null)
                return;

            // Send completion event in any case, regardless of errors
            handler.OnMessageCompleted(messageInfo, errors);

            // put the message back if the queue is not transactional
            transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);
        }

    I did not bother to write out the intention again, since the code should be pretty self-explanatory by now. I have used comments to explain the still-nontrivial procedure step by step, revealing the real intention behind all this complex program flow. The original complexity of the problem domain does not go away, but by applying the Single Responsibility Principle (SRP) and some functional style, we can hide the necessary complexity behind useful abstractions which make it much easier to reason about.

    Since most of the method seems to deal with errors, I thought it was a good idea to encapsulate the error state of the current message in an ErrorCollector object, which stores all exceptions in a list, along with a description of what each error was about in the exception itself. We can log it later, or not, depending on the log level. It is really just a simple list that encapsulates the current error state:

        class ErrorCollector
        {
            List<Exception> _Errors = new List<Exception>();

            public void Add(Exception ex, string description)
            {
                ex.Data["Description"] = description;
                _Errors.Add(ex);
            }

            public Exception Last
            {
                get
                {
                    return _Errors.LastOrDefault();
                }
            }

            public bool HasError
            {
                get
                {
                    return _Errors.Count > 0;
                }
            }
        }

    Since the error state is global, we have two choices: store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler), or pass it to the method calls when necessary. I chose the latter, because a second argument does not hurt and it makes it easier to reason about the overall state, while the helper objects remain stateless and immutable, which makes them much easier to understand and, as a bonus, thread safe as well. This does not mean that the stored member variables are stateless or thread safe, but at least our helper classes are.

    Most of the complexity is located in the transaction handling, which I consider a separate responsibility and delegate to the TransactionHandler. It does nothing if there is no transaction; otherwise it will call the before-commit handler, commit the transaction, and dispose the transaction if the commit throws. It also has a second responsibility: to resend the message if the transaction failed. I saw a good fit there, since it deals with transaction failures.

        class TransactionHandler
        {
            TransactionScope _Tx;
            Action<CurrentMessageInformation> _BeforeCommit;
            OpenedQueue _MessageQueue;

            public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)
            {
                _Tx = tx;
                _BeforeCommit = beforeCommit;
                _MessageQueue = messageQueue;
            }

            public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                if (_Tx != null && !errors.HasError)
                {
                    try
                    {
                        if (_BeforeCommit != null)
                        {
                            _BeforeCommit(currentMessageInfo);
                        }

                        _Tx.Complete();
                        _Tx.Dispose();
                    }
                    catch (Exception ex)
                    {
                        errors.Add(ex, "Failed to complete transaction, moving to error mode");
                        Trace.TraceWarning("Disposing transaction in error mode");
                        try
                        {
                            _Tx.Dispose();
                        }
                        catch (Exception ex2)
                        {
                            errors.Add(ex2, "Failed to dispose of transaction in error mode.");
                        }
                    }
                }
            }

            public void ResendMessageOnError(Message message, ErrorCollector errors)
            {
                if (errors.HasError && !_MessageQueue.IsTransactional)
                {
                    _MessageQueue.Send(message);
                }
            }
        }

    If we need to change the handling in the future, we will have a much easier time reasoning about our application flow than before. After we have completed our transaction and called its callback, we can call the completion handler, which is the main purpose of the HandleMessageCompletion method after all. The responsibility of the MessageCompletionHandler is to call the completion callback and, when an error has occurred, the failure callback:

        class MessageCompletionHandler
        {
            Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;
            Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;

            public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,
                                            Action<CurrentMessageInformation, Exception> messageProcessingFailure)
            {
                _MessageCompletedHandler = messageCompletedHandler;
                _MessageProcessingFailure = messageProcessingFailure;
            }

            public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageCompletedHandler != null)
                    {
                        _MessageCompletedHandler(currentMessageInfo, errors.Last);
                    }
                }
                catch (Exception ex)
                {
                    errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");
                }

                if (errors.HasError)
                {
                    SignalFailedMessage(currentMessageInfo, errors);
                }
            }

            void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageProcessingFailure != null)
                        _MessageProcessingFailure(currentMessageInfo, errors.Last);
                }
                catch (Exception moduleException)
                {
                    errors.Add(moduleException, "Module failed to process message failure");
                }
            }
        }

    If for some reason I screwed up the logic and we need to call the completion handler from our transaction handler, we can simply add the MessageCompletionHandler as a third argument to the CallHandlerAndCommit method and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once, we now have one place, the completion handler, in which to capture that state.

    During this refactoring I simply put things together that belong together, and came up with useful abstractions. If you look at the original argument list of the HandleMessageCompletion method, I have put many things together:

    - Message message -> CurrentMessageInformation messageInfo (encapsulates Message message)
    - TransactionScope tx, OpenedQueue messageQueue, Action<CurrentMessageInformation> beforeTransactionCommit -> TransactionHandler transactionHandler
    - Exception exception -> ErrorCollector errors
    - Action<CurrentMessageInformation, Exception> messageCompleted (plus the MessageProcessingFailure event) -> MessageCompletionHandler handler

    The reason is simple: put the things that have relationships together, and you will find useful abstractions almost automatically. I hope this makes sense to you. If you see a way to make it even simpler, you can show Ayende your improved version as well.


  • Unable to install anything that depends upon spamassassin; can't even install spamassassin

    - by Harbhag
    I am trying to install mailscanner using apt-get install mailscanner, and I got the following error:

        Setting up spamassassin (3.3.1-1) ...
        Starting SpamAssassin Mail Filter Daemon: child process [21344] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 2588.
        invoke-rc.d: initscript spamassassin, action "start" failed.
        dpkg: error processing spamassassin (--configure):
         subprocess installed post-installation script returned error exit status 255
        dpkg: dependency problems prevent configuration of mailscanner:
         mailscanner depends on spamassassin (>= 3.1); however:
          Package spamassassin is not configured yet.
        dpkg: error processing mailscanner (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         spamassassin
         mailscanner
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I tried to install spamassassin on its own, I got the same error again (this time from child process [21389]). I am using Ubuntu Server 10.04.
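    The interesting part of the log is that spamd itself fails to start; everything else is follow-on damage. A plausible first step (an assumption, not from the original post) is to ask SpamAssassin to lint its configuration, which usually prints the underlying error, and then let dpkg retry the configuration:

        sudo spamassassin -D --lint   # lint config and rules with debug output
        sudo dpkg --configure -a      # retry configuring spamassassin, then mailscanner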


  • Ubuntu Software Center not working after update

    - by HOS
    Today I updated my Ubuntu (I had the KDE desktop installed on it before), and after the update my Ubuntu Software Center said: "items cannot be installed or removed until the package catalog is repaired. do you want to repair it?" When I clicked Repair, after 2 seconds it said "the installation or removal of a software package failed", with these details:

        installArchives() failed: dpkg: dependency problems prevent configuration of kdm:
         kdm depends on libkworkspace4abi1 (= 4:4.8.5-0ubuntu0.1); however:
          Version of libkworkspace4abi1 on system is 4:4.8.5-0ubuntu0.2.
         kdm depends on kde-workspace-kgreet-plugins (= 4:4.8.5-0ubuntu0.1); however:
          Version of kde-workspace-kgreet-plugins on system is 4:4.8.5-0ubuntu0.2.
        dpkg: error processing kdm (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of kubuntu-desktop:
         kubuntu-desktop depends on kdm; however:
          Package kdm is not configured yet.
        dpkg: error processing kubuntu-desktop (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         kdm
         kubuntu-desktop

    The details end with an "Error in function:" section that repeats the same dpkg output. Please help me; I love Ubuntu and I want to repair it. Thanks
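    The log shows kdm pinned to version 4:4.8.5-0ubuntu0.1 of its KDE workspace dependencies while 4:4.8.5-0ubuntu0.2 is already installed, i.e. a partially applied update. A plausible recovery (an assumption, not from the post) is to finish the upgrade so all the KDE packages reach matching versions, then let dpkg finish configuring:

        sudo apt-get update
        sudo apt-get dist-upgrade    # pull kdm up to the 0ubuntu0.2 set
        sudo dpkg --configure -a     # finish configuring kdm and kubuntu-desktop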


  • dpkg behaving strangely?

    - by Tom Henderson
    When I use apt to get a package, I have been receiving the same error message. Here is an example of trying to install wicd (which is already installed):

        Reading package lists...
        Building dependency tree...
        Reading state information...
        wicd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        3 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up tex-common (2.06) ...
        debconf: unable to initialize frontend: Dialog
        debconf: (Dialog frontend requires a screen at least 13 lines tall and 31 columns wide.)
        debconf: falling back to frontend: Readline
        Running mktexlsr. This may take some time... done.
        No packages found matching texlive-base.
        dpkg: error processing tex-common (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of texlive-binaries:
         texlive-binaries depends on tex-common (>= 2.00); however:
          Package tex-common is not configured yet.
        dpkg: error processing texlive-binaries (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of dvipng:
         dvipng depends on texlive-base-bin; however:
          Package texlive-base-bin is not installed.
          Package texlive-binaries which provides texlive-base-bin is not configured yet.
        dpkg: error processing dvipng (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         tex-common
         texlive-binaries
         dvipng
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure if this is a problem with apt or with dpkg, but it certainly doesn't look good!
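    The root failure here is tex-common's post-installation script aborting on "No packages found matching texlive-base"; everything after that is follow-up. A plausible recovery sequence (an assumption, not from the post) is to supply the missing package and then let dpkg and apt finish:

        sudo apt-get update
        sudo apt-get install texlive-base   # provide what tex-common's postinst expects
        sudo dpkg --configure -a            # finish configuring the broken packages
        sudo apt-get -f install             # let apt resolve anything still pending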


  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart.

    In contrast, speaking with some programmers or reading blogs or forums, I often find a widespread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists, while programming is more about getting things done. What is normally implied here is that programming is something very practical, and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc., may be interesting topics, they are often not needed if all one wants is to get things done.

    According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done.

    So my question is: why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?


  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART Pre-failure status, several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new one (one partition with rsync, the other with parted cp) and then gently swapped the old HDD for the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

        Read Error Rate
        ---------------
        Normalized   108
        Worst         99
        Threshold      6
        Value   16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got the following:

        Read Error Rate
        ---------------
        Normalized   109
        Worst         99
        Threshold      6
        Value   24792504

    As you see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And why is the assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature? Thanks
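    For what it's worth, very large raw values for Read Error Rate and Seek Error Rate are commonly reported as normal on Seagate drives: the raw field packs counters of total operations rather than a simple error count, so it grows whenever the disk is used, and the health signal is the normalized value staying well above the threshold (here 108-109 vs. 6). A quick way to watch the attributes and run a self-test from the command line, assuming the enclosure passes SMART through (the device name is a placeholder):

        sudo smartctl -A /dev/sdb            # print all SMART attributes
        sudo smartctl -t short /dev/sdb      # start a short self-test
        sudo smartctl -l selftest /dev/sdb   # view the self-test results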


  • What Poor Project Management Might Be Costing You

    - by Sylvie MacKenzie, PMP
    For project-intensive organizations, capital investment decisions define both success and failure. Getting them wrong—the risk of delays and schedule and cost overruns is ever present—introduces the potential for huge financial losses. The resulting consequences can be significant, and directly impact both a company's profit outlook and its share price performance—which in turn is the fundamental measure of executive performance. This intrinsic link between long-term investment planning and short-term market performance is investigated in the independent report Stock Shock, written by a consultant from Clarity Economics and commissioned by the EPPM Board. A new international steering group organized by Oracle, the EPPM Board brings together senior executives from leading public- and private-sector organizations to explore the critical role played by enterprise project and portfolio management (EPPM).

    Stock Shock reviews several high-profile recent project failures and, combined with other research, reviews the lessons to be learned. It analyzes how portfolio management is an exercise in balancing risk and reward, a process that places the emphasis firmly on executives to correctly determine which potential investments will deliver the greatest value and contribute most to the bottom line. Conversely, it also details how poor evaluation decisions can quickly impact the overall value of an organization's project portfolio and compromise long-range capital planning goals.

    Failure to Deliver—In Search of ROI
    The report also cites figures from an Economist Intelligence Unit survey which found that more organizations (12 percent) expected to deliver planned ROI less than half the time than organizations (11 percent) who claim to deliver it 90 percent or more of the time. This is linked to a recent report from Booz & Co. that shows how the average tenure of a global chief executive has fallen from 8.1 years to 6.3 years. "Senior executives need to begin looking at effective project delivery not as a bonus, but as an essential facet of business success," according to Stock Shock author Phil Thornton. "Consolidated and integrated visibility into individual projects is the most practical solution to overcoming these challenges, which explains the increasing popularity of PPM technologies as an effective oversight and delivery platform."

    Stock Shock is available for download on the EPPM microsite at http://www.oracle.com/oms/eppm/us/stock-shock-report-1691569.html


  • Getting a lot of postmaster undeliverable notices for non-existent users

    - by Mike Walsh
    I've had my domain (straightpathsql.com) for a few years now, and I host my e-mail with Google Apps for Business. All of a sudden, in the past week, I have started to get a lot of postmaster delivery-failure notices from various domains, most of them involving bogus e-mail addresses at my domain ([email protected], for example). My assumption is that someone is trying to relay through some other host (not my hosts, which are secure through Google Apps for Business, I presume) and there isn't much I can do to stop it. But I just want to make sure there isn't something else I need to be looking at here. An example delivery-failure notice is below; I know nothing of the addresses in it, and they look like garbage. (Quick edit: the reason I get these messages is that I set myself up as a catch-all, so no matter what e-mail address a note is sent to at my domain, I'll get it if the account isn't set up. All of the failure messages are sent to bogus addresses on my domain.)

        The following message to <[email protected]> was undeliverable.
        The reason for the problem:
        5.1.0 - Unknown address error 553-'sorry, this recipient is in my badrecipientto list (#5.7.1)'

        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.0.0 (permanent failure)
        Remote-MTA: dns; [118.82.83.11]
        Diagnostic-Code: smtp; 5.1.0 - Unknown address error 553-'sorry, this recipient is in my badrecipientto list (#5.7.1)'
        (delivery attempts: 0)

        ---------- Forwarded message ----------
        From: Howard Blankenship <[email protected]>
        To: omiivi2922 <[email protected]>
        Cc:
        Date:
        Subject: Hi omiivi2922
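    This pattern is classic backscatter: a spammer forges random addresses at the domain as the sender, the victims' mail servers bounce to those forged addresses, and the catch-all then delivers the bounces. Two mitigations worth checking (suggestions, not from the original post): replace the catch-all with explicit mailboxes so mail to invented addresses is rejected, and publish an SPF record so receiving servers can reject the forgeries outright. Google's documented SPF record for hosted domains is:

        straightpathsql.com.  IN TXT  "v=spf1 include:_spf.google.com ~all"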


  • Be liberal in what you accept... or not?

    - by Matthieu M.
    [Disclaimer: this question is subjective, but I would prefer answers backed by facts and/or reflections.]

    I think everyone knows about the Robustness Principle, usually summed up by Postel's Law: "Be conservative in what you send; be liberal in what you accept." I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension); however, I have always thought that its application to HTML / CSS was a total failure, with each browser implementing its own silent tweak detection / behavior, making it near impossible to obtain consistent rendering across multiple browsers. I do notice, though, that the RFC for the TCP protocol deems "silent failure" acceptable unless otherwise specified, which is an interesting behavior, to say the least.

    There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developers; off the top of my head:

    - JavaScript semicolon insertion
    - C (silent) built-in conversions (which would not be so bad if they did not truncate...)

    And there are tools to help implement "smart" behavior:

    - name-matching phonetic algorithms (Double Metaphone)
    - string distance algorithms (Levenshtein distance)

    However, I find that this approach, while it may be helpful when dealing with non-technical users or to help users in the process of error recovery, has some drawbacks when applied to the design of a library/class interface:

    - it is somewhat subjective whether the algorithm guesses "right", and thus it may go against the Principle of Least Astonishment
    - it makes the implementation more difficult, and thus gives more chances to introduce bugs (violation of YAGNI?)
    - it makes the behavior more susceptible to change, as any modification of the "guess" routine may break old programs, nearly excluding refactoring possibilities... from the start!

    And this is what led me to the following question: when designing an interface (library, class, message), do you lean toward the robustness principle or not? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict.
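    To make the tradeoff concrete, here is a tiny illustrative sketch (mine, not the poster's) of the two styles applied to parsing a boolean flag; the liberal version is friendlier on first contact, but its guess list becomes a compatibility contract that is hard to change later:

        def parse_flag_strict(s: str) -> bool:
            # Conservative: accept exactly two spellings, fail loudly otherwise
            if s == "true":
                return True
            if s == "false":
                return False
            raise ValueError("not a boolean literal: %r" % s)

        def parse_flag_liberal(s: str) -> bool:
            # Liberal: guess the caller's intent; convenient, but every accepted
            # spelling must now be supported forever
            return s.strip().lower() in ("true", "t", "yes", "y", "1", "on")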


  • Architectural and Design Challenges with SOA

    With all of the hype about service-oriented architecture (SOA), primarily through the use of web services, not much has been said about the potential issues of using SOA in the design of an application. I am personally a fan of SOA, but it is not the solution for every application. Proper evaluation should be done on all requirements and use cases prior to deciding to go down the SOA road. It is important to consider how your application/service will handle the following perils as it executes.

    Example challenges of SOA:

    - Network connectivity issues
    - Handling connectivity issues
    - Longer processing/transaction times

    How many of us have had issues visiting our favorite web sites from time to time? The same issue will occur when using service-based architecture, especially if it is implemented using web services. Forcing applications to access services via a network connection introduces a lot of new failure points to the application. Potential failure points include DNS issues, network hardware issues, remote server issues, and the lack of physical network connections. When network connectivity issues do occur, how the service clients are implemented is very important. Should the client wait and poll the service until it is accessible again? If so, what is the maximum wait time or number of retry attempts? A retry sketch follows below.

    Because services are distributed across a network, response times automatically increase: processing time must now also include the time to send and receive messages from called services. This could add anywhere from milliseconds to minutes per request, based on network load and server usage of the service provider. If speed is a highly desirable quality attribute, then I would consider creating components that are hosted where the client application is located.

    References: Rader, Dave. (2002). Overcoming Web Services Challenges with Smart Design: http://soa.sys-con.com/node/39458
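    One common client-side pattern for the connectivity questions above is retry with capped, jittered exponential backoff. A minimal sketch (the service call, attempt limit and delays are placeholders, not from the article):

        import random
        import time

        def call_with_retries(call, max_attempts=5, base_delay=0.5, max_delay=30.0):
            """Invoke a remote call, retrying transient failures with backoff."""
            for attempt in range(1, max_attempts + 1):
                try:
                    return call()
                except ConnectionError:
                    if attempt == max_attempts:
                        raise  # give up and surface the failure to the caller
                    # exponential backoff with jitter caps load on a recovering service
                    delay = min(max_delay, base_delay * 2 ** (attempt - 1))
                    time.sleep(delay * random.uniform(0.5, 1.5))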

    Read the article

  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday, July 18th, 12 noon EDT (GMT -4) for a presentation called Troubleshooting SQL Server With PowerShell. It will be in English, so please make allowances for this. I'm sure that you're aware that my English is not perfect, but it is not so bad. I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides. Just code; pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation: "Troubleshooting SQL Server with PowerShell – The Next Level". It is normal for us to have to face poorly performing queries or even complete failure in our SQL Server environments. This can happen for a variety of reasons, including poor database designs, hardware failure, improperly-configured systems, and OS updates applied without testing. As database administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as a DBA. This will include a variety of activities, such as gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code for multiple servers and run the data collection in asynchronous mode.
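    To give a flavor of the background-jobs technique mentioned in the abstract, here is a small sketch (not taken from the session itself); the server names and the counter path are placeholders to swap for your own:

        # Gather a perf counter from several servers at once using background jobs
        $servers = 'SQLBOX1', 'SQLBOX2', 'SQLBOX3'   # placeholder names

        $jobs = foreach ($server in $servers) {
            Start-Job -ArgumentList $server -ScriptBlock {
                param($s)
                Get-Counter -ComputerName $s `
                            -Counter '\SQLServer:Buffer Manager\Page life expectancy' `
                            -SampleInterval 2 -MaxSamples 5
            }
        }

        # Collect the results from all servers when the jobs finish
        $jobs | Wait-Job | Receive-Job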

    Read the article

  • two-part dice pool mechanic

    - by bythenumbers
    I'm working on a dice mechanic/resolution system based off of the Ghost/Echo (hereafter shortened to G/E) tabletop RPG. Specifically, since G/E can be a little harsh with dealing out consequences and failure, I was hoping to soften the system and add a little more player control, as well as offer the chance for players to evolve their characters into something unique, right from creation. So, here's the mechanic: players roll 2d12 against the two statistics for their character (each is a number from 2-11, and may be rolled above or below depending on the nature of the action attempted; rolling your stat exactly always fails). Depending on the success of that roll, they add dice to the pool rolled for a modified G/E-style action. The acting player gets two dice anyhow, and I am debating offering a bonus die for each success, or a single bonus die for succeeding on both of the statistic-compared rolls. Once the size of the dice pool is set, the entire pool is rolled, and the players are allowed to assign rolled dice to a goal and a danger. Assigned results are judged as follows: 1-4 means the attempted goal fails, or the danger comes true; 5-8 is a partial success at the goal, or partially avoiding the danger; 9-12 means the goal is achieved, or the danger avoided. My concerns are twofold: firstly, that the two-stage action is too complicated, with two rolls to judge separately before anything can happen; secondly, that the statistics involved go too far in softening the game. I've run some basic simulations, and the approximate statistics follow:

                  2 dice   up to 3 dice   up to 4 dice
        failure    ~33%        ~25%           ~20%
        partial    ~33%        ~35%           ~35%
        success    ~33%        ~40%           ~45%

    I'd appreciate any advice that addresses my concerns or offers to refine my simulation (right now the first roll is statistically modeled as sign(1d12-1d12), where 0 is a success).
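    As a starting point for refining the simulation, here is a small Monte Carlo harness in Python for the pool stage. The assignment strategy (the best die always goes to the goal) is an assumption marked in the code, so the output will only match the table above if you plug in whatever strategy the original simulation used:

        import random
        from collections import Counter

        def band(die):
            # 1-4 failure, 5-8 partial, 9-12 success, as stated above
            if die <= 4:
                return 'failure'
            return 'partial' if die <= 8 else 'success'

        def trial(pool_size):
            pool = [random.randint(1, 12) for _ in range(pool_size)]
            goal_die = max(pool)  # ASSUMPTION: the player assigns the best die to the goal
            return band(goal_die)

        for n in (2, 3, 4):
            counts = Counter(trial(n) for _ in range(100_000))
            dist = {k: round(v / 100_000, 3) for k, v in sorted(counts.items())}
            print(n, 'dice:', dist)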

    Read the article

  • How to kill msvsmon.exe when finished remote debugging?

    - by dferraro
    Hi, we are a .NET LOB shop using MS CRM as our CRM platform. To this end, many times a day during development phases we use remote debugging, due to the two-connection limit on the server. We are able to set up remote debugging without logging onto the machine by using PsExec. This works great - but how the heck do we kill the remote debugger for that user once we are finished debugging? In fact, we're not even sure how to kill the remote debugger in general, even when opening it manually... short of remoting into the server and using Task Manager, or keeping the session open and doing File > Exit in the debugger. Any advice?
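    Since PsExec is already in the toolchain, one hedged possibility is to kill msvsmon.exe the same way it was started; the server name and accounts below are placeholders, and taskkill's /FI filter is what scopes the kill to a single user's debugger instance:

        rem Kill the remote debugger on the server for one specific user
        taskkill /S yourserver /U DOMAIN\admin /P secret /FI "USERNAME eq DOMAIN\devuser" /IM msvsmon.exe /F

        rem Or reuse PsExec, mirroring how the debugger was launched
        psexec \\yourserver -u DOMAIN\admin -p secret taskkill /IM msvsmon.exe /F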

    Read the article

  • Keeping a large volume of data in Session - Suggestions / alternatives?

    - by Fishcake
    I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance. So we've agreed that data will only be queried as and when it is needed; therefore, if a web user wants to see a list of contacts (for example), that list is fetched into a local DataTable. Then, if a new contact is created on the website, the new contact is sent to CRM and added to the local DataTable at the same time. Likewise for edits. If the user then looks at their contacts again, the data will just come from the local DataTable. At the moment local data is being kept in Session, but my concern is that too much memory will start being used up. However, traffic is expected to be pretty small, perhaps no more than 20 concurrent users, so am I worrying about nothing, or is there a better way you can suggest to handle this?
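    One alternative worth weighing (a sketch only; the key format, the 20-minute window, and the fetch helper are assumptions): the ASP.NET application cache with a sliding expiration evicts idle users' data instead of holding it for the whole session, while keeping the same read-through pattern:

        using System;
        using System.Data;
        using System.Web;
        using System.Web.Caching;

        // Inside a page or helper class: per-user contacts with a sliding window
        private DataTable GetContacts(string userId)
        {
            string key = "contacts:" + userId;       // per-user cache key (assumed format)
            var contacts = HttpRuntime.Cache[key] as DataTable;
            if (contacts == null)
            {
                contacts = FetchContactsFromCrm();   // hypothetical: the existing CRM query
                HttpRuntime.Cache.Insert(
                    key, contacts,
                    null,                            // no cache dependency
                    Cache.NoAbsoluteExpiration,
                    TimeSpan.FromMinutes(20));       // sliding: evict after 20 idle minutes
            }
            return contacts;
        }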

    Read the article

  • iPhone & Web App Sync

    - by Jack Welsh
    I am trying to build an iPhone app client for our CRM solution, so our salespeople will be able to access the information available in our CRM through an iPhone app. I am trying to find out the recommended approach for architecting the application when it comes to the database. I am not sure where the data should reside: should I store a subset of the data on the iPhone, or should I just rely on web services and pull the information I need from the database whenever I need it? Also, are there any best practices or frameworks for building such applications on the iPhone?

    Read the article

  • Can LGPL licenses be used in our proprietary systems?

    - by jon
    We want to use either the FCK/CKEditor or the TinyMCE editor in our CRM application. We charge the customer for use of the CRM system, which is bespoke to us and them. I noticed both editors have GPL and LGPL licenses; however, CKEditor has a closed license too, in addition to the LGPL. My questions are: 1) Although we don't sell our code or binaries (we provide the software as a hosted service), can we use LGPL-licensed components? 2) Why does CKEditor provide LGPL and closed licenses if the LGPL should make it easy for any proprietary software to use it? Is the closed license only for systems which are sold as binaries, where the product is, for example, physically installed as an .exe?

    Read the article

  • Execute PHP after new Order in Magento

    - by Israel Lopez
    Hello there, I'm trying to figure out where I could drop in some PHP code to notify the CRM we are using (Solve360) that a new order has been placed, so that an event can be created (via its API) to fulfill the order. The intended flow is: order product -> checkout -> complete checkout & capture CC -> (on the side) notify CRM -> done. I'm not sure where to start, but I have had to make some small tweaks to get the Quantum Gateway payment processor to work. In that module it appears that the objects for the order (email, amount, details) were available; however, it seems it would be quite 'dirty' to insert more PHP code in there. Ideas? PHP 5.2.x & Magento 1.4.x
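    A cleaner route than patching the payment module would be a Magento event observer. A sketch follows: the module name is hypothetical and the Solve360 call is left as a stub, but sales_order_place_after is a standard Magento 1.x event fired once an order is placed:

        <!-- app/code/local/Mycompany/Solve360/etc/config.xml (excerpt) -->
        <config>
            <global>
                <events>
                    <sales_order_place_after>
                        <observers>
                            <mycompany_solve360>
                                <class>Mycompany_Solve360_Model_Observer</class>
                                <method>notifyCrm</method>
                            </mycompany_solve360>
                        </observers>
                    </sales_order_place_after>
                </events>
            </global>
        </config>

        <?php
        // app/code/local/Mycompany/Solve360/Model/Observer.php
        class Mycompany_Solve360_Model_Observer
        {
            public function notifyCrm(Varien_Event_Observer $observer)
            {
                $order = $observer->getEvent()->getOrder();
                $payload = array(
                    'increment_id' => $order->getIncrementId(),
                    'email'        => $order->getCustomerEmail(),
                    'total'        => $order->getGrandTotal(),
                );
                // POST $payload to the Solve360 API here (endpoint and auth omitted)
            }
        }

    This keeps the CRM notification out of the payment module entirely, which addresses the 'dirty' concern.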

    Read the article

  • Symfony2 same form, different entities NOT related

    - by user1381537
    I'm trying to write one form for submitting against a MySQL DB, but I can't get it working. I've tried a lot of things: separate forms, adding a field with ->add('foo', new foo()), and so far sending plain SQL from a normal HTML form is my only working solution, which is obviously not the best. This is my DB structure: as you can see, I need to insert the comments textarea into ticketcomments, along with the user who wrote it, etc., and the description field into crmentity. Then, on ticketcf, these are the fields I need to submit from the form (listed here because the raw field names won't tell you anything):

        tcf.cf594 AS Type,
        tcf.cf675 AS Suscription,
        tcf.cf770 AS ID_PRODUCT,
        tcf.cf746 AS NotificationDate,
        tcf.cf747 AS ResponseDate,
        tcf.cf748 AS ResolutionDate,

    And, of course, every table needs to receive the same ticketid for the submitted form, so we can retrieve everything later with one simple query. It would be easy to do with plain SQL instead of DQL and Symfony2 forms, but that is not a good way to do it. Also, here's my "Ticket list" query, in case you need it for clarity:

        SELECT t.ticketNo AS Ticket, t.title AS Asunto, t.status AS Estado, t.updateLog AS LOG,
               t.hours AS Horas, t.solution AS Solucion, t.priority AS Prioridad,
               tcf.cf594 AS Tipo, tcf.cf675 AS Suscripcion, tcf.cf770 AS IDPROD,
               tcf.cf746 AS F_Noti, tcf.cf747 AS F_Resp, tcf.cf748 AS F_Reso,
               CONCAT(cd.firstname, cd.lastname) AS Contacto,
               crm.description AS Descripcion, crm.crmid AS id
        FROM WbsGoclientsBundle:VtigerTroubletickets t
        INNER JOIN WbsGoclientsBundle:VtigerTicketcf tcf WITH t.ticketid = tcf.ticketid
        INNER JOIN WbsGoclientsBundle:VtigerContactdetails cd WITH t.parentId = cd.contactid
        INNER JOIN WbsGoclientsBundle:VtigerCrmentity crm WITH t.ticketid = crm.crmid
        WHERE t.parentId IN (
            SELECT cd1.contactid FROM WbsGoclientsBundle:VtigerContactdetails cd1
            WHERE cd1.accountid = (
                SELECT cd2.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd2
                WHERE cd2.contactid = :contactid))
        AND t.status <> 'Closed'

    And the "Ticket details" query (which is not in DQL format yet, only SQL) is very simple: it only retrieves the comments field and createdtime from ticketcomments, appended to this query, so we have all the fields... Thank you. This is a test form, using troubletickets and ticketcomments; it's returning errors because I can't set a comments field, since troubletickets doesn't have one, but I need that field submitted to ticketcomments...
    VtigerTicketcommentsType.php:

        <?php
        namespace WbsGo\clientsBundle\Form\Type;

        use Symfony\Component\Form\AbstractType,
            Symfony\Component\Form\FormBuilderInterface;
        use Symfony\Component\OptionsResolver\OptionsResolverInterface;

        class VtigerTicketcommentsType extends AbstractType
        {
            public function buildForm(FormBuilderInterface $builder, array $options)
            {
                $builder
                    ->add('ticketid')
                    ->add('comments')
                    ->add('ownerid')
                    ->add('ownertype')
                    ->add('createdtype')
                ;
            }

            public function setDefaultOptions(OptionsResolverInterface $resolver)
            {
                $resolver->setDefaults(array(
                    'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTicketcomments'
                ));
            }

            public function getName()
            {
                return 'comments';
            }
        }

    OpenTicketType.php:

        <?php
        namespace WbsGo\clientsBundle\Form;

        use Symfony\Component\Form\AbstractType,
            Symfony\Component\Form\FormBuilderInterface;
        use WbsGo\clientsBundle\Form\Type\VtigerTicketcommentsType;
        use Symfony\Component\OptionsResolver\OptionsResolverInterface;

        class OpenTicketType extends AbstractType
        {
            public function buildForm(FormBuilderInterface $builder, array $options)
            {
                $builder
                    ->add('title')
                    ->add('priority')
                    ->add('solution')
                    ->add('comments', 'collection', array(
                        'type' => new VtigerTicketcommentsType()
                    ))
                ;
            }

            public function setDefaultOptions(OptionsResolverInterface $resolver)
            {
                $resolver->setDefaults(array(
                    'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTroubletickets'
                ));
            }

            public function getName()
            {
                return 'ticket';
            }
        }

    TicketController.php:

        <?php
        namespace WbsGo\clientsBundle\Controller;

        use Symfony\Bundle\FrameworkBundle\Controller\Controller;
        use WbsGo\clientsBundle\Entity\VtigerTroubletickets;
        use WbsGo\clientsBundle\Entity\VtigerTicketcomments;
        use WbsGo\clientsBundle\Form\OpenTicketType;
        use Symfony\Component\HttpFoundation\Request;

        class TicketController extends Controller
        {
            public function indexAction()
            {
                $em = $this->getDoctrine()->getManager();
                $tickets = $em
                    ->getRepository('WbsGoclientsBundle:VtigerTroubletickets')
                    ->findAllOpenByCustomerId($this->getUser()->getId());
                $userdata = $this->getDoctrine()->getManager()
                    ->getRepository('WbsGoclientsBundle:VtigerContactdetails')
                    ->findContact($this->getUser()->getId());

                return $this
                    ->render('WbsGoclientsBundle:Ticket:index.html.twig',
                        array('tickets' => $tickets, 'userdata' => $userdata));
            }

            public function addAction()
            {
                $assets = $this->getDoctrine()->getManager()
                    ->getRepository('WbsGoclientsBundle:VtigerAssets')
                    ->findAssetByAccountId($this->getUser()->getId());
                $assetlist = array();
                foreach ($assets as $key => $v) {
                    $assetlist[$key] = $key;
                }
                $form = $this->createForm(new OpenTicketType(), new VtigerTroubletickets());

                return $this
                    ->render('WbsGoclientsBundle:Ticket:add.html.twig',
                        array('form' => $form->createView(), 'assets' => $assets,));
            }
        }

    This is the error Symfony2 is returning:

        Neither the property "comments" nor one of the methods "getComments()", "isComments()",
        "hasComments()", "__get()" or "__call()" exist and have public access in class
        "WbsGo\clientsBundle\Entity\VtigerTroubletickets".

    EDIT 2: This code is actually rendering my forms, but I need help in order to submit each XXXType form to its corresponding table.

        public function buildForm(FormBuilderInterface $builder, array $options)
        {
            $builder
                ->add('descripcion')
                ->add('prioridad')
                ->add('solucion')
                ->add('comment', new VtigerTicketcommentsType())
                ->add('contacto')
                ->add('suscripcion')
                ->add('producto', 'entity', array(
                    'class' => 'WbsGo\clientsBundle\Entity\VtigerAssets',
                    'property' => 'assetname',
                    'empty_value' => '--SELECT--',
                    'query_builder' => function (\WbsGo\clientsBundle\Entity\VtigerAssetsRepository $repository) {
                        //return $repository->findAssetByAccountId($this->customerId);
                        return $repository->createQueryBuilder('a')
                            ->select('a')
                            ->where('a.account = (SELECT cd.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd WHERE cd.contactid = ?1)')
                            ->setParameter(1, $this->customerId);
                    }
                ))
                ->add('hardware')
                ->add('backup')
                ->add('web')
                ->add('restore')
                ->add('customerId')
            ;
        }

    I also removed ->add('ticketid') from VtigerTicketcommentsType.php, because it is covered by the relationship and is not needed: it's auto-increment and must be generated once everything is submitted.
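    Regarding the error above, a hedged sketch of the missing piece: the form binds a "comments" field, so VtigerTroubletickets needs a public accessor for it (or the field needs 'mapped' => false in Symfony 2.1+ if it should not be bound through the entity). Something along these lines, with names assumed from the code above:

        <?php
        namespace WbsGo\clientsBundle\Entity;

        use Doctrine\Common\Collections\ArrayCollection;

        class VtigerTroubletickets
        {
            // ... existing fields ...

            private $comments;

            public function __construct()
            {
                $this->comments = new ArrayCollection();
            }

            public function getComments()
            {
                return $this->comments;
            }

            public function addComment(VtigerTicketcomments $comment)
            {
                // the shared ticketid would be copied onto each comment before flush
                $this->comments[] = $comment;
                return $this;
            }
        }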

    Read the article

  • Tomcat6 can't connect to MySql (The driver has not received any packets from the server)

    - by Tobias Wiesenthal
    Hi all, I'm running Apache Tomcat 6.0.20 / MySQL 5.1.37-lubuntu / sun-java6-jdk / sun-java6-jre / sun-java6-bin on my local machine, using Ubuntu 9.10 as the OS. I've been trying to get a simple DB-query example running for two days now, but I still get this exception:

        org.apache.jasper.JasperException: javax.servlet.ServletException: javax.servlet.jsp.JspException:
        Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException:
        Cannot create PoolableConnectionFactory (Communications link failure
        The last packet sent successfully to the server was 0 milliseconds ago.
        The driver has not received any packets from the server.)"
            org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:522)
            org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:398)
            org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
            org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

        root cause: javax.servlet.ServletException: javax.servlet.jsp.JspException:
        Unable to get connection, DataSource invalid: (same message as above)
            org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:862)
            org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:791)
            org.apache.jsp.index_jsp._jspService(index_jsp.java:104)
            org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
            org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
            org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

        root cause: javax.servlet.jsp.JspException:
        Unable to get connection, DataSource invalid: (same message as above)
            org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.getConnection(QueryTagSupport.java:285)
            org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.doStartTag(QueryTagSupport.java:168)
            org.apache.jsp.index_jsp._jspx_meth_sql_005fquery_005f0(index_jsp.java:274)
            org.apache.jsp.index_jsp._jspx_meth_c_005fotherwise_005f0(index_jsp.java:216)
            org.apache.jsp.index_jsp._jspx_meth_c_005fchoose_005f0(index_jsp.java:130)
            org.apache.jsp.index_jsp._jspService(index_jsp.java:93)
            org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
            org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
            org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

    My web.xml looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
                 version="2.5">
            <resource-ref>
                <description>DB Connection</description>
                <res-ref-name>jdbc/testDB</res-ref-name>
                <res-type>javax.sql.DataSource</res-type>
                <res-auth>Container</res-auth>
            </resource-ref>
        </web-app>

    The context.xml looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context path="/my1stApp" docBase="/var/www/jsp/my1stApp" debug="5" reloadable="true" crossContext="true">
            <Resource name="jdbc/testDB" auth="Container" type="javax.sql.DataSource"
                      maxActive="5" maxIdle="5" maxWait="10000"
                      username="user" password="password"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/some"/>
        </Context>

    And the JSP file looks like this:

        <%@ page contentType="text/html" %>
        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
        <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
        <%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>
        <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %>
        <html>
        <head>
            <title>DroneLootTool</title>
        </head>
        <body bgcolor="white">
            <sql:query var="res" dataSource="jdbc/testDB">
                select name, othername from mytable
            </sql:query>
            <h2>Results</h2>
            <c:forEach var="row" items="${res.rows}">
                Name ${row.name}<br/>
                MoreName ${row.othername}<br/><br/>
            </c:forEach>
        </body>
        </html>

    What I have tried so far:
    - read lots of forum entries / tried lots of different settings (always changed back to the original settings when it didn't work)
    - set TOMCAT6_SECURITY=no in /etc/default/tomcat6, because TOMCAT6_SECURITY=yes was causing trouble too
    - the skip-networking flag is not set for the DB (BIND 127.0.0.1 is set)
    - the firewall is switched off (sudo ufw disable)
    - MySQL works (tested several times with the user used in this script)
    - telnet localhost 3306 says:

        Trying ::1...
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    TestConnection.java produced the following output:

        me@my-laptop:~/Desktop$ java -classpath '/usr/share/java/mysql.jar:./' TestConnection com.mysql.jdbc.Driver jdbc:mysql://localhost:3306/testDB myuser mypassword
        com.mysql.jdbc.CommunicationsException: Communications link failure

        Last packet sent to the server was 0 ms ago.
            at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070)
            at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2103)
            at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:718)
            at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:298)
            at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:282)
            at java.sql.DriverManager.getConnection(DriverManager.java:582)
            at java.sql.DriverManager.getConnection(DriverManager.java:185)
            at TestConnection.checkConnection(TestConnection.java:40)
            at TestConnection.main(TestConnection.java:21)
        Caused by: com.mysql.jdbc.CommunicationsException: Communications link failure

        Last packet sent to the server was 0 ms ago.
            at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070)
            at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:666)
            at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1069)
            at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2031)
            ... 7 more
        Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
            at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2431)
            at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:590)
            ... 9 more
        Connection failed.

    I don't know if there is a difference between the way the Java driver connects to the DB and the way the Perl DBI module does, but this Perl script works:

        #!/usr/bin/perl -w
        use CGI;
        use DBI;
        use strict;

        print CGI::header();
        my $dbh = DBI->connect("dbi:mysql:some:localhost", "user", "password");
        my $sSql = "SELECT * from mytable";
        my $ppl = $dbh->selectall_arrayref($sSql);
        foreach my $pl (@$ppl) {
            my @array = @$pl;
            print @array;
        }
        $dbh->disconnect;

    I enabled --log-warnings on the MySQL side, but I didn't get any new warnings. When I was searching the logs for warnings, I found these messages from when I restart Tomcat; I don't know if they help to find the problem:

        Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.startup.HostConfig checkResources
        INFO: Undeploying context [/myapp]
        Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SCHWERWIEGEND: A web application appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak.
        Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.startup.HostConfig deployDescriptor
        INFO: Deploying configuration descriptor myapp.xml

    Read the article

  • DynaTrace Beefs Up Support for Microsoft Applications

    The application performance management vendor ups support for platforms such as SharePoint Server, Dynamics CRM, and Visual Studio 2010.

    Read the article
