Search Results

Search found 1376 results on 56 pages for 'ricardo sa'.


  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers have worked well so far. Now I would like to get wear-level info, error statistics, etc. from the drives. While the SA P410 supports a pass-through of the SMART command to a single drive in the array, I was not able to get the interesting values out of the output. In this case the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present if the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID#  ATTRIBUTE_NAME           FLAG    VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED  RAW_VALUE
          3  Spin_Up_Time             0x0000  100   000   000    Old_age  Offline  In_the_past  0
          4  Start_Stop_Count         0x0000  100   000   000    Old_age  Offline  In_the_past  0
          5  Reallocated_Sector_Ct    0x0002  100   100   000    Old_age  Always   -            0
          9  Power_On_Hours           0x0002  100   100   000    Old_age  Always   -            8561
         12  Power_Cycle_Count        0x0002  100   100   000    Old_age  Always   -            55
        192  Power-Off_Retract_Count  0x0002  100   100   000    Old_age  Always   -            29
        232  Unknown_Attribute        0x0003  100   100   010    Pre-fail Always   -            0
        233  Unknown_Attribute        0x0002  088   088   000    Old_age  Always   -            0
        225  Load_Cycle_Count         0x0000  198   198   000    Old_age  Offline  -            508509
        226  Load-in_Time             0x0002  255   000   000    Old_age  Always   In_the_past  0
        227  Torq-amp_Count           0x0002  000   000   000    Old_age  Always   FAILING_NOW  0
        228  Power-off_Retract_Count  0x0002  000   000   000    Old_age  Always   FAILING_NOW  0

    smartctl on a P410-connected SSD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    (Right, it is completely empty.)

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410 SMART command pass-through?
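    Two hedged things to try when the cciss pass-through comes back empty (suggestions, not a confirmed fix): turn on smartctl's own I/O reporting to see whether the controller returns any data at all for the pass-through command, and test with a newer smartmontools build than 5.39.1, since handling of SATA drives behind cciss controllers continued to improve after that release. The options below are standard smartctl switches; whether the P410 firmware honours the pass-through is a separate question.

        # dump the raw ioctl exchange and relax the drive-identification checks
        ./smartctl -A -d cciss,0 -T verypermissive -r ioctl,2 /dev/cciss/c1d0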

    Read the article

  • How do I configure a site in IIS 7 for SSL with a wildcard certificate?

    - by michielvoo
    We have a Windows 2008 server with IIS 7 to test sites we develop for our clients. Each site has a binding on a subdomain:

        clienta.example.com
        clientb.example.com
        clientc.example.com

    (* Using example.com to protect the innocent)

    For one of these sites we now have to test whether it works over https. So I have created a wildcard certificate request with *.example.com as the common name. I have received the certificate (issued by PositiveSSL SA) and completed the request. The certificate is now installed in IIS. Now I have added an https binding to the second site with the following settings:

        type: https
        IP address: All Unassigned
        Port: 443
        Host name: clientb.example.com
        SSL certificate: *.example.com

    Browsing the site over regular http works fine. When I try to browse the site over https I get the following errors (depending on the browser used):

        Chrome: This webpage is not available. Error 102 (net::ERR_CONNECTION_REFUSED): Unknown error.
        Firefox: Unable to connect. Firefox can't establish a connection to the server at clientb.example.com. Firebug says Status: Aborted.
        Internet Explorer: Internet Explorer cannot display the webpage.

    I have checked Failed Request Tracing, and according to the log the request was completed with status 200. I have run the SSL Diagnostics Tool with the following result:

        System time: Fri, 04 Mar 2011 14:04:35 GMT
        Connecting to 192.168.2.95:443
        Connected
        Handshake: 115 bytes sent
        Handshake: 3877 bytes received
        Handshake: 326 bytes sent
        Handshake: 59 bytes received
        Handshake succeeded
        Verifying server certificate, it might take a while...
        Server certificate name: *.example.com
        Server certificate subject: OU=Domain Control Validated, OU=PositiveSSL Wildcard, CN=*.example.com
        Server certificate issuer: C=GB, S=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=PositiveSSL CA
        Server certificate validity: From 2-3-2011 1:00:00 To 2-3-2012 0:59:59
        HTTPS request:
        GET / HTTP/1.0
        User-Agent: SSLDiag
        Accept:*/*
        HTTPS: 85 bytes of encrypted data sent
        HTTPS: 533 bytes of encrypted data received
        Status: HTTP/1.1 404 Not Found
        HTTP/1.1 404 Not Found
        Content-Type: text/html; charset=us-ascii
        Server: Microsoft-HTTPAPI/2.0
        Date: Fri, 04 Mar 2011 14:04:35 GMT
        Connection: close
        Content-Length: 315
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <HTML><HEAD><TITLE>Not Found</TITLE>
        <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
        <BODY><h2>Not Found</h2>
        <hr><p>HTTP Error 404. The requested resource is not found.</p>
        </BODY></HTML>
        HTTPS: server disconnected
        Final handshake: 37 bytes sent successfully

    Q: What can I do to make this work?
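    A few command-line checks that may help narrow this down (diagnostic suggestions only - ERR_CONNECTION_REFUSED usually means nothing is listening on 443 for that address, rather than a certificate problem):

        netsh http show sslcert
        (shows whether a certificate is actually bound to 0.0.0.0:443 or to the server's IP)

        netstat -an | findstr :443
        (shows whether anything is listening on port 443 at all)

        %windir%\system32\inetsrv\appcmd list site "ClientB" /text:bindings
        (shows the exact bindings IIS has for the site; "ClientB" is a placeholder for the real site name)

    Also worth noting: in the IIS 7 manager the Host name box for an https binding is normally only editable when the certificate's friendly name starts with an asterisk. If the binding was not saved as expected, it can be added from the command line, for example:

        %windir%\system32\inetsrv\appcmd set site /site.name:"ClientB" /+bindings.[protocol='https',bindingInformation='*:443:clientb.example.com']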

    Read the article

  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes. However, I have to build the new infrastructure to match the old one. In the old one, we're using Managed Folder Mailbox Policies to limit (or allow) retention. They started with Exchange 2007 and never upgraded to Retention Policies; oh well. So, in the old environment, when you use a 2007 server to define a new Managed Content Setting, you can pick "Email" from the dropdown for MessageClass. This is a display name; the actual MessageClass values are thus: MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR If I take that and try to mangle it into a new cmdlet for Ex2010 in my new environment here's what I get New-ManagedContentSettings -Name "Delete Messages older then 90 days" -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" -whatif Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting "MessageClass": "The length of t he property is too long. The maximum length is 255 and the length of the value provided is 518." At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad .5ssl.com.psm1:28204 char:29 + $scriptCmd = { & <<<< $script:InvokeCommand ` + CategoryInfo : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManaged ContentSettings So, the config object can store all that mess, but I can't fit it in through the cmdlet to create the object. Lovely. Any ideas?
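    One possible workaround, offered as an untested sketch: since the 255-character limit is hit by the combined -MessageClass value rather than by the managed folder itself, the list can be split across two content settings under the same managed folder so that each call stays below the limit. The setting names and the halving logic below are illustrative assumptions; the parameters are the same ones used in the failing command above.

        # assumption: $allClasses holds the full list of message class strings shown above
        $half = [int]($allClasses.Count / 2)
        New-ManagedContentSettings -Name "Delete Messages older than 90 days (1 of 2)" -FolderName "Entire Mailbox" `
            -RetentionEnabled $true -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered `
            -RetentionAction DeleteAndAllowRecovery -MessageClass $allClasses[0..($half - 1)]
        New-ManagedContentSettings -Name "Delete Messages older than 90 days (2 of 2)" -FolderName "Entire Mailbox" `
            -RetentionEnabled $true -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered `
            -RetentionAction DeleteAndAllowRecovery -MessageClass $allClasses[$half..($allClasses.Count - 1)]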

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This allows to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The Open-Source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, CPU cycles as well as the disk and network throughput (in MB/s or IOP/s) that are available for an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or the exclusive access to a network device. It's also possible to restrict a process group's access and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux containers are based on, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will be explained in the second part of this article. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog |
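    As a small preview of the command-line workflow that Part 2 will cover in more detail, a typical LXC session looks roughly like the sketch below. The container name is a placeholder and the set of available templates depends on the installed lxc package, so treat this as an illustration rather than the documented Oracle Linux procedure.

        # create a container from a distribution template shipped with the lxc package
        lxc-create -n mycontainer -t oracle
        # start it in the background and list the containers known to the host
        lxc-start -n mycontainer -d
        lxc-ls
        # attach a console (Ctrl-a q detaches again)
        lxc-console -n mycontainer
        # stop and remove the container when it is no longer needed
        lxc-stop -n mycontainer
        lxc-destroy -n mycontainer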

    Read the article

  • The Winds of Change are a Blowin’

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now.  Not the fan part, but the paying customer part.  I’m still a huge fan.  I think that SourceGear does a great job with their product and support has been fantastic when needed (which is not very often).  I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through this blogging and books.  I still think their products are high quality and do a fantastic job of what they do.  But there’s the rub…what they do is no longer enough for me. As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team.  I need better project management tools, and this is where Vault Professional lags behind several other tools.  Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more.  (Note, I did contact SourceGear about my quest for more, but apparently, the rest of their customer base has not been clamoring for this and so they have not built it.  Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.) Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools.  They built a limited integration with Axosoft OnTime which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum).  I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video).  I fell in love with the capabilities of OnTime.  Unfortunately, the integration with Vault for source control management was, as I mentioned, limited.  I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up.  So then I did what was previously unthinkable for me, I considered switching not just the work tracking tool, but also the source code management tool.  This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with.  When you find a good weapon to protect your gold, stick with it. I looked at Git and Tortoise SVN, but the integration methods for those was pretty rough compared to what I was used to.  The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route.  Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford:  Team Foundation Server.  And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now.  So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again. I really went deep on checking out the tools.  I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny).  
Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction.  As an interesting twist of fate, one of my employees crossed paths with an ALM Consultant from Northwest Cadence, a local Microsoft Partner, and one of the companies that produced several of the webcasts that I had been watching.  So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant.  It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: Branching strategies and automated builds (that’s probably a whole separate blog entry).  As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company.  This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again, coincidentally, we had several that were just about to expire, so I put them to good use. The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with.  We have just purchased a new server for our TFS rollout and are planning the steps and options right now.  This is still a big project ahead of us to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time.  We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions of "How many days have you had this?", "Is there any pattern?", "Did you get drenched in rain?", "Do you have any other symptom?" and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way to find whether the server data / files are in a consistent state, using the DBCC commands.

    Back to SQL Server

    In real life, a database consistency check is one of the critical operations a DBA generally doesn't give much priority to. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out if everything is right or not? My common answer to all of them is – look at the bottom of the checkdb (or checktable) output and look for the line below.

    CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’.

    The above is a "good sign" because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below:

    CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’.
    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is a risk involved with the above option (repair_allow_data_loss), which is that we would lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is a report which can show the history of checkdb executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the "Database Consistency History" report. The information in this report is picked from the default trace. If the default trace is disabled, or there has been no checkdb run, or the information is no longer in the default trace (because it rolled over), we would get a report like the one below. As we can see, the report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of my databases and did the steps below:

    1. Run CheckDB so that errors are reported.
    2. Fix the corruption by losing the data, using the repair option.
    3. Run CheckDB again to check whether the corruption is cleared.

    After that I launched the report, and below is what we would see. If you are lazy like me and don't want to run the report manually for each database, then the query below is handy and provides the same report for all databases. This query is run behind the scenes by the report. All I have done is remove the filter for database name (at the end – highlighted).
DECLARE @curr_tracefilename VARCHAR(500); DECLARE @base_tracefilename VARCHAR(500); DECLARE @indx INT; SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1; SET @curr_tracefilename = REVERSE(@curr_tracefilename); SELECT @indx  = PATINDEX('%\%', @curr_tracefilename) ; SET @curr_tracefilename = REVERSE(@curr_tracefilename); SET @base_tracefilename = LEFT( @curr_tracefilename,LEN(@curr_tracefilename) - @indx) + '\log.trc'; SELECT  SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),36, PATINDEX('%executed%',TEXTData)-36) AS command ,       LoginName ,       StartTime ,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%found%',TEXTData) +6,PATINDEX('%errors %',TEXTData)-PATINDEX('%found%',TEXTData)-6)) AS errors ,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%repaired%',TEXTData) +9,PATINDEX('%errors.%',TEXTData)-PATINDEX('%repaired%',TEXTData)-9)) repaired ,       SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%time:%',TEXTData)+6,PATINDEX('%hours%',TEXTData)-PATINDEX('%time:%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%hours%',TEXTData) +6,PATINDEX('%minutes%',TEXTData)-PATINDEX('%hours%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%minutes%',TEXTData) +8,PATINDEX('%seconds.%',TEXTData)-PATINDEX('%minutes%',TEXTData)-8) AS time FROM::fn_trace_gettable( @base_tracefilename, DEFAULT) WHERE EventClass = 22 AND SUBSTRING(TEXTData,36,12) = 'DBCC CHECKDB' -- AND DatabaseName = @DatabaseName; Don’t get worried about the logic above. All it is doing is reading the trace files, parsing below entry and getting out information for underlined words. DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.  Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully now onwards you would run checkdb and understand the importance of it. As responsible DBAs I am sure you are already doing it, let me know how often do you actually run them on you production environment? Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
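    A small addendum: if all you need is the date of the last clean CHECKDB, rather than the full history from the default trace, the DBCC DBINFO output carries that timestamp as well. DBCC DBINFO is undocumented, so treat this as a convenience check rather than a guaranteed interface:

        DBCC DBINFO ('DatabaseName') WITH TABLERESULTS;
        -- look for the row whose Field value is dbi_dbccLastKnownGood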

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have setup your Field Activity Type Profiles, the templates within, and associated SP Condition(s) on the template. CC&B uses the service point type, its state and referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with a flexibility to define new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, SP's state and the referenced user-defined customer event. All was working well until the time when they realized that in spite of the Severance Process getting cancelled (when a payment was made); the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to actual question as to what is the impact in having a user-defined customer event.   System defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are few programs which have routines to first validate the completion of disconnection field activities, which were raised as a result of customer event CNP - Cut for Non-payment in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from other Severance Event, specifically CNP - Cut for Non-Payment. 
Post cancel algorithm provided by the product - SEV POST CAN does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Notice the underlined text. This algorithm implicitly checks for Field Activities having completed status, which were generated from Severance Events as a result of CNP - Cut for Non-payment customer event.   Now if we look back to the customer's issue, we can relate that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity. And hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for a customer event CNPW - Cut for Non-payment of Water Services instead.   To conclude, if you introduce new customer events that extend or simulate base customer events, the ones that are included in the base product, ensure that there is no other impact either direct or indirect to other business functions that the application has to offer.  

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have setup your Field Activity Type Profiles, the templates within, and associated SP Condition(s) on the template. CC&B uses the service point type, its state and referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with a flexibility to define new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, SP's state and the referenced user-defined customer event. All was working well until the time when they realized that in spite of the Severance Process getting cancelled (when a payment was made); the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to actual question as to what are limitations in having user-defined customer event.   System defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are few programs which have routines to first validate the completion of disconnection field activities, which were raised as a result of customer event CNP - Cut for Non-payment in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from other Severance Event, specifically CNP - Cut for Non-Payment. 
Post cancel algorithm provided by the product - SEV POST CAN does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Notice the underlined text. This algorithm implicitly checks for Field Activities having completed status, which were generated from Severance Events as a result of CNP - Cut for Non-payment customer event.   Now if we look back to the customer's issue, we can relate that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity. And hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for a customer event CNPW - Cut for Non-payment of Water Services instead.   To conclude, if you introduce new customer events, you should be aware that you don't extend or simulate base customer events, the ones that are included in the base product, as they are further used to provide/validate additional business functions.  

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server. Moving from old to new hardware or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is an Microsoft Exchange (email) server upgrade in progress. It it being handled by a different team so I’m not wholly sure on the details but we are in a situation where there are currently 2 Exchange email servers – the old one and the new one. Users mail boxes are being transferred in a planned process but as we approach the old server being turned off we have to also make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places that the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help in my opinion. It’s a nightmare of back and forth settings changes and it stinks. I wasn’t looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL Instances for all of the accounts I have set up. So I did what any Englishmen with a shed would do, I decided to take it apart and see if I can fix it another way. I took a guess that we are going to be working in MSDB and Books OnLine was remarkably helpful and amongst a lot of information told me about a couple of procedures that can be used to interrogate DBMail settings. USE [msdb] -- It's where all the good stuff is kept GO EXEC dbo.sysmail_help_profile_sp; EXEC dbo.sysmail_help_account_sp; Both of these procedures take optional parameters with the same name – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account. Otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see (click for a bigger image), the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now it appears that the procedure we are looking at gets it’s data from the sysmail_account and sysmail_server tables, you can get the results the stored procedure provides if you run the code below. SELECT [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [last_mod_datetime] , [last_mod_user] FROM dbo.sysmail_account AS sa; SELECT [account_id] , [servertype] , [servername] , [port] , [username] , [credential_id] , [use_default_credentials] , [enable_ssl] , [flags] , [last_mod_datetime] , [last_mod_user] , [timeout] FROM dbo.sysmail_server AS sms Now, we have no real idea how these tables are linked and whether making an update direct to one or other of them is going to do what we want or whether it will entirely cripple our ability to send email from SQL Server so we wont touch those tables with any UPDATE TSQL. So, back to Books OnLine then and we find sysmail_update_account_sp. It’s exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined. 
Not wanting to totally obliterate the existing values by not passing values in all of the parameters I set to writing some code to gather the existing data from the tables and re-write the SMTP server name and then execute the resulting TSQL. IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL DROP TABLE #sysmailprofiles GO CREATE TABLE #sysmailprofiles ( account_id INT , [name] VARCHAR(50) , [description] VARCHAR(500) , email_address VARCHAR(500) , display_name VARCHAR(500) , replyto_address VARCHAR(500) , servertype VARCHAR(10) , servername VARCHAR(100) , port INT , username VARCHAR(100) , use_default_credentials VARCHAR(1) , ENABLE_ssl VARCHAR(1) ) INSERT [#sysmailprofiles] ( [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [servertype] , [servername] , [port] , [username] , [use_default_credentials] , [ENABLE_ssl] ) EXEC [dbo].[sysmail_help_account_sp] DECLARE @TSQL NVARCHAR(1000) SELECT TOP 1 @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = ' + CAST([s].[account_id] AS VARCHAR(20)) + ', @account_name = ''' + [s].[name] + '''' + ', @email_address = N''' + [s].[email_address] + '''' + ', @display_name = N''' + [s].[display_name] + '''' + ', @replyto_address = N''' + s.replyto_address + '''' + ', @description = N''' + [s].[description] + '''' + ', @mailserver_name = ''NEWSMTP.contoso.com''' + +', @mailserver_type = ' + [s].[servertype] + ', @port = ' + CAST([s].[port] AS VARCHAR(20)) + ', @username = ' + COALESCE([s].[username], '''''') + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1)) + ', @enable_ssl =' + [s].[ENABLE_ssl] FROM [#sysmailprofiles] AS s WHERE [s].[servername] = 'SMTP.Contoso.com' SELECT @tsql EXEC [sys].[sp_executesql] @tsql This worked well for me and testing the email function EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later in writing this blog that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this and only the values for parameters specified get changed. If a parameter is not specified in the execution of the procedure then the values remain unchanged. This renders most of the above script unnecessary as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update. EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com' This wasn’t going to be the main reason for this post, it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL but instead we are here and (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code you are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example that had fewer than all parameters specified to give a lead to this method existing.

    Read the article

  • useFastClick in JQuery Mobile

    - by Yousef_Jadallah
    For those who want to convert an application from jQuery Mobile Alpha to JQM Beta 1, you need to bind click events to the new vclick event. The click event works in regular browsers, but vclick is needed for iOS and Android; useFastClick is (touch + mouse click). Moreover, if you use this event a lot in your project, you can turn useFastClick off in the mobileinit event:

        $(document).bind("mobileinit", function () {
            $.mobile.useFastClick = false;
        });

    The vclick event is needed to support touch events, to make page changes happen faster, and to perform the URL hiding. So you need to change something like this:

        $('btnShow').live("click", function (evt) {

    To:

        $('btnShow').live("vclick", function (evt) {

    For more information: http://jquerymobile.com/test/docs/api/globalconfig.html

    Here you can find a full example for this case:

        <!DOCTYPE >
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0b1/jquery.mobile-1.0b1.min.css" />
            <script src="http://code.jquery.com/jquery-1.6.1.min.js"></script>
            <script src="http://code.jquery.com/mobile/1.0b1/jquery.mobile-1.0b1.min.js"></script>
            <script type="text/javascript">
                // Here you need to use vclick instead of the click event
                $('ul[id="MylistView"] a').live("vclick", function (evt) {
                    alert('list click');
                });
            </script>
            <title></title>
        </head>
        <body>
            <div id="FirstPage" data-role="page" data-theme="b">
                <div data-role="header">
                    <h1>Page Title</h1>
                </div>
                <div data-role="content">
                    <ul id="MylistView" data-role="listview" data-theme="g">
                        <li><a href="#SecondPage">Acura</a></li>
                        <li><a href="#SecondPage">Audi</a></li>
                        <li><a href="#SecondPage">BMW</a></li>
                    </ul>
                </div>
                <div data-role="footer">
                    <h4>Page Footer</h4>
                </div>
            </div>
            <div id="SecondPage" data-role="page" data-theme="b">
                <div data-role="header">
                    <h1>Page Title</h1>
                </div>
                Second Page
                <div data-role="footer">
                    <h4>Page Footer</h4>
                </div>
            </div>
        </body>
        </html>

    Hope that helps.

    Read the article

  • Alternative way of developing for ASP.NET to WebForms - Any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now, but I often get annoyed with all the overhead (like ViewState and all the JavaScript it generates) and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms. Essentially this is using ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it seems that it runs well and you still have the ability to use code-behind pages and many ASP.NET controls such as repeaters.

    Of course, without the form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls. But how do you access these HTML controls from the code-behind? Retrieving values on post back is easy: you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls, and then you can access the control in your code-behind like this:

        ((HtmlInputText)txtName).Value = "blah";

    Here's an example that shows what you can do with a textbox and drop down list:

    Default.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form action="" method="post">
                <label for="txtName">Name:</label>
                <input id="txtName" name="txtName" runat="server" /><br />
                <label for="ddlState">State:</label>
                <select id="ddlState" name="ddlState" runat="server">
                    <option value=""></option>
                </select><br />
                <input type="submit" value="Submit" />
            </form>
        </body>
        </html>

    Default.aspx.cs

        using System;
        using System.Web.UI.HtmlControls;
        using System.Web.UI.WebControls;

        namespace NoForm
        {
            public partial class Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    // Default values
                    string name = string.Empty;
                    string state = string.Empty;
                    if (Request.RequestType == "POST")
                    {
                        // If form submitted (post back)
                        name = Request.Form["txtName"];
                        state = Request.Form["ddlState"];
                        // Server side form validation would go here
                        // and actions to process form and redirect
                    }
                    ((HtmlInputText)txtName).Value = name;
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));
                    if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                        ((HtmlSelect)ddlState).Value = state;
                }
            }
        }

    As you can see, you have similar functionality to ASP.NET server controls but more control over the final markup, and less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads using your own input type="file" control (and the necessary form enctype="multipart/form-data"). So my question is: can you see any problems with this method, and any thoughts on its usefulness? I have further details and tests on my blog.

    Read the article

  • SQL 2008 R2 login/network issue

    - by martinjd
    I have a clean new install of Windows Server 2008 R2 (not a VM) that I have added to a Windows Server 2003-based domain using my account, which has domain admin rights. The domain functional level is 2003. I performed a clean install of SQL Server 2008 R2 using the same account. The installation completed without any errors.

    I logged into SSMS locally and attempted to add another domain account by clicking Search, then Advanced, and finding the user in the domain. When I return to the "Dialog - New" window and click OK I receive the following error:

        Create failed for Login 'Domain\User'. (Microsoft.SqlServer.Smo)
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Windows NT user or group 'Domain\User' not found. Check the name again. (Microsoft SQL Server, Error: 15401)

    I have verified that the firewall is off, tried adding a different domain user, tried using SA to add a user, installed the hotfix for KB 976494, and verified that the Local Security Policy settings

        Domain Member: Digitally encrypt or sign secure channel
        Domain Member: Digitally encrypt secure channel
        Domain Member: Digitally sign secure channel

    are disabled, none of which has made a difference. I can RDP to a Server 2003 server running SQL 2008 and add the same domain user without issue.

    Also, if I try to connect to the database server with SSMS from another system on the domain using my account, I get the following error:

        Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    and on the database server I see the following in the security event log:

        An account failed to log on.
        Subject:
            Security ID: NULL SID
            Account Name: -
            Account Domain: -
            Logon ID: 0x0
        Logon Type: 3
        Account For Which Logon Failed:
            Security ID: NULL SID
            Account Name: myUserName
            Account Domain: MYDOMAIN
        Failure Information:
            Failure Reason: An Error occured during Logon.
            Status: 0xc000018d
            Sub Status: 0x0
        Process Information:
            Caller Process ID: 0x0
            Caller Process Name: -
        Network Information:
            Workstation Name: MYWKS
            Source Network Address: -
            Source Port: -
        Detailed Authentication Information:
            Logon Process: NtLmSsp
            Authentication Package: NTLM
            Transited Services: -
            Package Name (NTLM only): -
            Key Length: 0

    I am sure that the "NULL SID" has some significant meaning, but have no idea at this point what the issue could be.
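    Two quick checks worth running from SSMS on the new server (diagnostics, not a confirmed fix): ask SQL Server itself to resolve the account. If either of these fails, the instance cannot obtain the account's SID from a domain controller, which lines up with both error 15401 and the NULL SID entries in the security log.

        SELECT SUSER_SID('Domain\User');   -- NULL means the name could not be resolved
        EXEC xp_logininfo 'Domain\User';   -- errors out if the account cannot be looked up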

    Read the article

  • Importing a large dataset into a database

    - by peaceful
    I'm a beginning programmer in the relevant areas to this question, so if possible, it'd be helpful to avoid assuming I know a lot already. I'm trying to import the OpenLibrary dataset into a local Postgres database. After it's imported, I plan to use it as a starting seed for a Ruby on Rails application that will include information on books. The OpenLibrary datasets are available here, in a modified JSON format: http://openlibrary.org/dev/docs/jsondump I only need very basic information for my application, much less than what is provided in the dumps. I'm only trying to get out book titles, author names, and relationships between books and authors. Below are two typical entries from their dataset, the first for an author, and the second for a book (they seem to have an entry for each edition of a book). The entries seem to lead off with a primary key, and then with a type, before including the actual JSON database dump. /a/OL2A /type/author {"name": "U. Venkatakrishna Rao", "personal_name": "U. Venkatakrishna Rao", "last_modified": {"type": "/type/datetime", "value": "2008-09-10 08:44:01.978456"}, "key": "/a/OL2A", "birth_date": "1904", "type": {"key": "/type/author"}, "id": 99, "revision": 3} /b/OL345M /type/edition {"publishers": ["Social Science Research Project, Dept. of Geography, University of Dacca"], "pagination": "ii, 54 p.", "title": "Land use in Fayadabad area", "lccn": ["sa 65000491"], "subject_place": ["East Pakistan", "Dacca region."], "number_of_pages": 54, "languages": [{"comment": "initial import", "code": "eng", "name": "English", "key": "/l/eng"}], "lc_classifications": ["S471.P162 E23"], "publish_date": "1963", "publish_country": "pk ", "key": "/b/OL345M", "authors": [{"birth_date": "1911", "name": "Nafis Ahmad", "key": "/a/OL302A", "personal_name": "Nafis Ahmad"}], "publish_places": ["Dacca, East Pakistan"], "by_statement": "[by] Nafis Ahmad and F. Karim Khan.", "oclc_numbers": ["4671066"], "contributions": ["Khan, Fazle Karim, joint author."], "subjects": ["Land use -- East Pakistan -- Dacca region."]} The size of the uncompressed dumps are enormous, about 2GB for the authors list, and 18GB for the book editions list. OpenLibrary does not provide any tools for this themselves, they provide a simple unoptimized Python script for reading in sample data (which unlike the actual dumps comes in pure JSON format), but they estimate if that was modified for use on their actual data it would take 2 months (!) to finish loading the data. How can I read this into the database? I assume I'll need to write a program to do this. What language and any guidance on how I should do it to finish in a reasonable amount of time? The only scripting language I have any experience with is Ruby.
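    Not an authoritative answer, but a sketch of the general approach that tends to finish in hours rather than months: stream the dump line by line in Ruby (so the 18 GB file is never loaded into memory), keep only the fields you need, and write them to tab-separated files that Postgres can bulk-load with COPY. The column layout assumed below - key, type, then a JSON blob, separated by tabs - is a guess based on the sample lines above; adjust the split to match the real dump.

        require 'json'

        File.open('authors.tsv', 'w') do |out|
          File.foreach('ol_dump_authors.txt') do |line|   # streams the file, line by line
            key, type, json = line.chomp.split("\t", 3)   # assumed layout: key, type, JSON
            next unless type == '/type/author'
            record = JSON.parse(json)
            out.puts [key, record['name']].join("\t")     # only what the Rails app needs
          end
        end

        # then, from psql:
        #   \copy authors (openlibrary_key, name) FROM 'authors.tsv'

    The same pattern works for the editions dump (titles plus the author keys in the "authors" array), and those keys can then be used to build the books-to-authors join table.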

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2. It seems to only want to work with XSLT. When I type: java -jar saxon9he.jar I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q), and gives XSLT usage information. Here are some command line interactions: >java -jar saxon9he.jar No source file name Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument -c:filename Use compiled stylesheet from file -config:filename Use configuration file -cr:classname Use collection URI resolver class -dtd:on|off Validate using DTD -expand:on|off Expand defaults defined in schema/DTD -explain[:filename] Display compiled expression tree -ext:on|off Allow|Disallow external Java functions -im:modename Initial mode -ief:class;class;... List of integrated extension functions -it:template Initial template -l:on|off Line numbering for source document -m:classname Use message receiver class -now:dateTime Set currentDateTime -o:filename Output file or directory -opt:0..10 Set optimization level (0=none, 10=max) -or:classname Use OutputURIResolver class -outval:recover|fatal Handling of validation errors on result document -p:on|off Recognize URI query parameters -r:classname Use URIResolver class -repeat:N Repeat N times for performance measurement -s:filename Initial source document -sa Use schema-aware processing -strip:all|none|ignorable Strip whitespace text nodes -t Display version and timing information -T[:classname] Use TraceListener class -TJ Trace calls to external Java functions -tree:tiny|linked Select tree model -traceout:file|#null Destination for fn:trace() output -u Names are URLs not filenames -val:strict|lax Validate using schema -versionmsg:on|off Warn when using XSLT 1.0 stylesheet -warnings:silent|recover|fatal Handling of recoverable errors -x:classname Use specified SAX parser for source file -xi:on|off Expand XInclude on all documents -xmlversion:1.0|1.1 Version of XML to be handled -xsd:file;file.. Additional schema documents to be loaded -xsdversion:1.0|1.1 Version of XML Schema to be used -xsiloc:on|off Take note of xsi:schemaLocation -xsl:filename Stylesheet file -y:classname Use specified SAX parser for stylesheet --feature:value Set configuration feature (see FeatureKeys) -? Display this message param=value Set stylesheet string parameter +param=filename Set stylesheet document parameter ?param=expression Set stylesheet parameter using XPath !option=value Set serialization option >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq" Unknown option -q:..\w3xQueryTut.xq Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument // etc... >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq" Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query // etc... Could not find the main class: net.sf.saxon.Query. Program will exit. I'm probably making some stupid mistake. Do you know what it could be?
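    For what it's worth, the usual explanation for exactly this symptom: java -jar saxon9he.jar always runs the class named in the jar's manifest, which is Saxon's XSLT entry point (net.sf.saxon.Transform), so the XQuery options are never recognised; and java net.sf.saxon.Query on its own fails because the jar is not on the classpath. Putting the jar on the classpath and naming the XQuery entry point explicitly should behave differently, e.g. from the directory containing saxon9he.jar:

        java -cp saxon9he.jar net.sf.saxon.Query -q:..\w3xQueryTut.xq

        rem with an explicit source document and output file:
        java -cp saxon9he.jar net.sf.saxon.Query -q:query.xq -s:input.xml -o:result.xml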

    Read the article

  • JBOSS 7.1 started hanging after 6 months of deployment

    - by PVR
    My application has been live for 6 months. It is hosted on a JBoss 7.1 server. For the last few days the JBoss server has been hanging repeatedly. Even if I restart JBoss it does not come back up; I have to restart the server machine itself. Can anyone let me know what could be causing this, along with workable resolutions or any suggestions? Please don't downvote the question, as this hanging issue is causing me a lot of problems. For information, the application is based on Java, GWT and Hibernate 3. Please find the standalone.xml file below in case it helps. <extensions> <extension module="org.jboss.as.clustering.infinispan"/> <extension module="org.jboss.as.configadmin"/> <extension module="org.jboss.as.connector"/> <extension module="org.jboss.as.deployment-scanner"/> <extension module="org.jboss.as.ee"/> <extension module="org.jboss.as.ejb3"/> <extension module="org.jboss.as.jaxrs"/> <extension module="org.jboss.as.jdr"/> <extension module="org.jboss.as.jmx"/> <extension module="org.jboss.as.jpa"/> <extension module="org.jboss.as.logging"/> <extension module="org.jboss.as.mail"/> <extension module="org.jboss.as.naming"/> <extension module="org.jboss.as.osgi"/> <extension module="org.jboss.as.pojo"/> <extension module="org.jboss.as.remoting"/> <extension module="org.jboss.as.sar"/> <extension module="org.jboss.as.security"/> <extension module="org.jboss.as.threads"/> <extension module="org.jboss.as.transactions"/> <extension module="org.jboss.as.web"/> <extension module="org.jboss.as.webservices"/> <extension module="org.jboss.as.weld"/> </extensions> <system-properties> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION" value="on"/> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION_MIME_TYPES" value="text/javascript,text/css,text/html,text/xml,text/json"/> </system-properties> <management> <security-realms> <security-realm name="ManagementRealm"> <authentication> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> <security-realm name="ApplicationRealm"> <authentication> <properties path="application-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> </security-realms> <management-interfaces> <native-interface security-realm="ManagementRealm"> <socket-binding native="management-native"/> </native-interface> <http-interface security-realm="ManagementRealm"> <socket-binding http="management-http"/> </http-interface> </management-interfaces> </management> <profile> <subsystem xmlns="urn:jboss:domain:logging:1.1"> <console-handler name="CONSOLE"> <level name="INFO"/> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> </console-handler> <periodic-rotating-file-handler name="FILE"> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> <file relative-to="jboss.server.log.dir" path="server.log"/> <suffix value=".yyyy-MM-dd"/> <append value="true"/> </periodic-rotating-file-handler> <logger category="com.arjuna"> <level name="WARN"/> </logger> <logger category="org.apache.tomcat.util.modeler"> <level name="WARN"/> </logger> <logger category="sun.rmi"> <level name="WARN"/> </logger> <logger category="jacorb"> <level name="WARN"/> </logger> <logger category="jacorb.config"> <level name="ERROR"/> </logger> <root-logger> <level name="INFO"/> <handlers> <handler name="CONSOLE"/> <handler name="FILE"/> </handlers>
</root-logger> </subsystem> <subsystem xmlns="urn:jboss:domain:configadmin:1.0"/> <subsystem xmlns="urn:jboss:domain:datasources:1.0"> <datasources> <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <security> <user-name>sa</user-name> <password>sa</password> </security> </datasource> <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1"> <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/> </subsystem> <subsystem xmlns="urn:jboss:domain:ee:1.0"/> <subsystem xmlns="urn:jboss:domain:ejb3:1.2"> <session-bean> <stateless> <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/> </stateless> <stateful default-access-timeout="5000" cache-ref="simple"/> <singleton default-access-timeout="5000"/> </session-bean> <pools> <bean-instance-pools> <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> </bean-instance-pools> </pools> <caches> <cache name="simple" aliases="NoPassivationCache"/> <cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/> </caches> <passivation-stores> <file-passivation-store name="file"/> </passivation-stores> <async thread-pool-name="default"/> <timer-service thread-pool-name="default"> <data-store path="timer-service-data" relative-to="jboss.server.data.dir"/> </timer-service> <remote connector-ref="remoting-connector" thread-pool-name="default"/> <thread-pools> <thread-pool name="default"> <max-threads count="10"/> <keepalive-time time="100" unit="milliseconds"/> </thread-pool> </thread-pools> </subsystem> <subsystem xmlns="urn:jboss:domain:infinispan:1.2" default-cache-container="hibernate"> <cache-container name="hibernate" default-cache="local-query"> <local-cache name="entity"> <transaction mode="NON_XA"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="local-query"> <transaction mode="NONE"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="timestamps"> <transaction mode="NONE"/> <eviction strategy="NONE"/> </local-cache> </cache-container> </subsystem> <subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/> <subsystem xmlns="urn:jboss:domain:jca:1.1"> <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/> <bean-validation enabled="true"/> <default-workmanager> <short-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </short-running-threads> <long-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="100" unit="seconds"/> </long-running-threads> </default-workmanager> <cached-connection-manager/> </subsystem> <subsystem xmlns="urn:jboss:domain:jdr:1.0"/> <subsystem xmlns="urn:jboss:domain:jmx:1.1"> <show-model value="true"/> <remoting-connector/> </subsystem> <subsystem xmlns="urn:jboss:domain:jpa:1.0"> <jpa default-datasource=""/> 
</subsystem> <subsystem xmlns="urn:jboss:domain:mail:1.0"> <mail-session jndi-name="java:jboss/mail/Default"> <smtp-server outbound-socket-binding-ref="mail-smtp"/> </mail-session> </subsystem> <subsystem xmlns="urn:jboss:domain:naming:1.1"/> <subsystem xmlns="urn:jboss:domain:osgi:1.2" activation="lazy"> <properties> <property name="org.osgi.framework.startlevel.beginning"> 1 </property> </properties> <capabilities> <capability name="javax.servlet.api:v25"/> <capability name="javax.transaction.api"/> <capability name="org.apache.felix.log" startlevel="1"/> <capability name="org.jboss.osgi.logging" startlevel="1"/> <capability name="org.apache.felix.configadmin" startlevel="1"/> <capability name="org.jboss.as.osgi.configadmin" startlevel="1"/> </capabilities> </subsystem> <subsystem xmlns="urn:jboss:domain:pojo:1.0"/> <subsystem xmlns="urn:jboss:domain:remoting:1.1"> <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/> </subsystem> <subsystem xmlns="urn:jboss:domain:resource-adapters:1.0"/> <subsystem xmlns="urn:jboss:domain:sar:1.0"/> <subsystem xmlns="urn:jboss:domain:security:1.1"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> <login-module code="Remoting" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="RealmUsersRoles" flag="required"> <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/> <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/> <module-option name="realm" value="ApplicationRealm"/> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> <security-domain name="jboss-web-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jboss-ejb-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> </security-domains> </subsystem> <subsystem xmlns="urn:jboss:domain:threads:1.1"/> <subsystem xmlns="urn:jboss:domain:transactions:1.1"> <core-environment> <process-id> <uuid/> </process-id> </core-environment> <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/> <coordinator-environment default-timeout="300"/> </subsystem> <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false"> <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/> <virtual-server name="default-host" enable-welcome-root="false"> <alias name="localhost"/> <alias name="nextenders.com"/> </virtual-server> </subsystem> <subsystem xmlns="urn:jboss:domain:webservices:1.1"> <modify-wsdl-address>true</modify-wsdl-address> <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> </subsystem> <subsystem xmlns="urn:jboss:domain:weld:1.0"/> </profile> <interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> 
</interface> <interface name="public"> <inet-address value="${jboss.bind.address:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/> <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/> <socket-binding name="ajp" port="8009"/> <socket-binding name="http" port="80"/> <socket-binding name="https" port="443"/> <socket-binding name="osgi-http" interface="management" port="8090"/> <socket-binding name="remoting" port="4447"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group>
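    Nothing in this configuration points at an obvious culprit, so a reasonable first step is to capture evidence the next time it hangs, before restarting. A sketch of the usual JDK diagnostics (this assumes a full JDK is installed on the server and that the JBoss process id can be found with jps or ps):

      jstack -l <jboss-pid> > threaddump.txt    # thread dump: shows deadlocks or threads stuck on DB/remote calls
      jmap -heap <jboss-pid>                    # heap summary: an exhausted heap with constant GC looks exactly like a hang

    Also check server.log for OutOfMemoryError entries. The fact that only a machine reboot helps often means the old Java process never fully exited and is still holding the ports, which is worth confirming with ps and netstat before the next restart.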

    Read the article

  • Correct way to make datasources/resources a deploy-time setting

    - by Draemon
    I have a web-app that requires two settings: A JDBC datasource A string token I desperately want to be able to deploy one .war to various different containers (jetty,tomcat,gf3 minimum) and configure these settings at application level within the container. My code does this: InitialContext ctx = new InitialContext(); Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env"); token = (String)envCtx.lookup("token"); ds = (DataSource)envCtx.lookup("jdbc/datasource") Let's assume I've used the glassfish management interface to create two jdbc resources: jdbc/test-datasource and jdbc/live-datasource which connect to different copies of the same schema, on different servers, different credentials etc. Say I want to deploy this to glassfish with and point it at the test datasource, I might have this in my sun-web.xml: ... <resource-ref> <res-ref-name>jdbc/datasource</res-ref-name> <jndi-name>jdbc/test-datasource</jndi-name> </resource-ref> ... but sun-web.xml goes inside my war, right? surely there must be a way to do this through the management interface Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how jetty 7 handles this since I use it for development. EDIT Tomcat has a reasonable way to do this: Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with: <?xml version="1.0" encoding="UTF-8"?> <Context antiResourceLocking="false" privileged="true"> <!-- String resource --> <Environment name="token" value="value of token" type="java.lang.String" override="false" /> <!-- Linking to a global resource --> <ResourceLink name="jdbc/datasource1" global="jdbc/test" type="javax.sql.DataSource" /> <!-- Derby --> <Resource name="jdbc/datasource2" type="javax.sql.DataSource" auth="Container" driverClassName="org.apache.derby.jdbc.EmbeddedDataSource" url="jdbc:derby:test;create=true" /> <!-- H2 --> <Resource name="jdbc/datasource3" type="javax.sql.DataSource" auth="Container" driverClassName="org.h2.jdbcx.JdbcDataSource" url="jdbc:h2:~/test" username="sa" password="" /> </Context> Note that override="false" means the opposite. It means that this setting can't be overriden by web.xml. I like this approach because the file is part of the container configuration not the war, but it's not part of the global configuration; it's webapp specific. I guess I expect a bit more from glassfish since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
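    For Jetty 7, the rough equivalent of the Tomcat context file above (sketched from memory, so treat the details as assumptions) is a WEB-INF/jetty-env.xml, or the same elements in a context XML under Jetty's contexts/ directory if you want to keep the settings out of the war. Mirroring the names used above, and assuming the H2 driver is on Jetty's classpath:

      <?xml version="1.0"?>
      <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
      <Configure class="org.eclipse.jetty.webapp.WebAppContext">
        <!-- String env-entry; the boolean argument lets it override web.xml -->
        <New class="org.eclipse.jetty.plus.jndi.EnvEntry">
          <Arg>token</Arg>
          <Arg type="java.lang.String">value of token</Arg>
          <Arg type="boolean">true</Arg>
        </New>
        <!-- DataSource looked up as java:comp/env/jdbc/datasource -->
        <New class="org.eclipse.jetty.plus.jndi.Resource">
          <Arg>jdbc/datasource</Arg>
          <Arg>
            <New class="org.h2.jdbcx.JdbcDataSource">
              <Set name="URL">jdbc:h2:~/test</Set>
              <Set name="User">sa</Set>
              <Set name="Password"></Set>
            </New>
          </Arg>
        </New>
      </Configure>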

    Read the article

  • Help needed with the XPath statement for a Selenium test

    - by mgeorge
    I am testing a calendar component using selenium.In my test i want to click on the current date.Please help me with the XPath statement for doing that.I am adding the HTML for the calender component <input id="event_date" type="text" on="click then l:show.event.calendar" style="border: 1px solid rgb(187, 187, 187); width: 100px;" fieldset="new_event" decorator="redbox" validator="date"/> <img id="app_136" style="position: relative; top: 2px;" on="click then l:show.event.calendar" src="images/calendar.png"/> <div id="app_137" style="margin: 0pt; padding: 0pt;"> <div id="app_calendar_2" class="yui-calcontainer single withtitle" style="position: absolute; z-index: 1000;"> <div class="title">Select Event Date</div> <table id="app_calendar_2_cal" class="yui-calendar y2010" cellspacing="0"> <thead> <tr> </tr> <tr class="calweekdayrow"> <th class="calweekdaycell">Su</th> <th class="calweekdaycell">Mo</th> <th class="calweekdaycell">Tu</th> <th class="calweekdaycell">We</th> <th class="calweekdaycell">Th</th> <th class="calweekdaycell">Fr</th> <th class="calweekdaycell">Sa</th> </tr> </thead> <tbody class="m6 calbody"> <tr class="w22"> <td id="app_calendar_2_cal_cell0" class="calcell oom calcelltop calcellleft">30</td> <td id="app_calendar_2_cal_cell1" class="calcell oom calcelltop">31</td> <td id="app_calendar_2_cal_cell2" class="calcell wd2 d1 selectable calcelltop"> </td> <td id="app_calendar_2_cal_cell3" class="calcell wd3 d2 today selectable calcelltop selected"> <a class="selector" href="#">2</a> </td> I want to click the date component described in <td id="app_calendar_2_cal_cell3" class="calcell wd3 d2 today selectable calcelltop selected"> <a class="selector" href="#">2</a> </td> Thanks in advance mgeorge
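    Based on that markup, the cell for the current date is the one whose class list contains "today", so a locator along these lines should work (the exact expression may need tweaking if other cells carry a similar class):

      //td[contains(@class,'today')]/a

    Clicking the anchor inside the cell rather than the cell itself is usually the more reliable target. The id-based alternative //td[@id='app_calendar_2_cal_cell3']/a would also match the sample above, but the cell index changes with the month, so the class-based version is the safer choice.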

    Read the article

  • "Can't mass-assign protected attributes" with nested protected models

    - by JohnnyFive
    I'm having a hell of a time trying to get this nested model working. I've tried all manner of pluralization/singular, removing the attr_accessible altogether, and who knows what else. restaurant.rb: # == RESTAURANT MODEL # # Table name: restaurants # # id :integer not null, primary key # name :string(255) # created_at :datetime not null # updated_at :datetime not null # class Restaurant < ActiveRecord::Base attr_accessible :name, :job_attributes has_many :jobs has_many :users, :through => :jobs has_many :positions accepts_nested_attributes_for :jobs, :allow_destroy => true validates :name, presence: true end job.rb: # == JOB MODEL # # Table name: jobs # # id :integer not null, primary key # restaurant_id :integer # shortname :string(255) # user_id :integer # created_at :datetime not null # updated_at :datetime not null # class Job < ActiveRecord::Base attr_accessible :restaurant_id, :shortname, :user_id belongs_to :user belongs_to :restaurant has_many :shifts validates :name, presence: false end restaurants_controller.rb: class RestaurantsController < ApplicationController before_filter :logged_in, only: [:new_restaurant] def new @restaurant = Restaurant.new @user = current_user end def create @restaurant = Restaurant.new(params[:restaurant]) if @restaurant.save flash[:success] = "Restaurant created." redirect_to welcome_path end end end new.html.erb: <% provide(:title, 'Restaurant') %> <%= form_for @restaurant do |f| %> <%= render 'shared/error_messages' %> <%= f.label "Restaurant Name" %> <%= f.text_field :name %> <%= f.fields_for :job do |child_f| %> <%= child_f.label "Nickname" %> <%= child_f.text_field :shortname %> <% end %> <%= f.submit "Done", class: "btn btn-large btn-primary" %> <% end %> Output Parameters: {"utf8"=>"?", "authenticity_token"=>"DjYvwkJeUhO06ds7bqshHsctS1M/Dth08rLlP2yQ7O0=", "restaurant"=>{"name"=>"The Pink Door", "job"=>{"shortname"=>"PD"}}, "commit"=>"Done"} The error i'm receiving is: ActiveModel::MassAssignmentSecurity::Error in RestaurantsController#create Cant mass-assign protected attributes: job Rails.root: /home/johnnyfive/Dropbox/Projects/sa Application Trace | Framework Trace | Full Trace app/controllers/restaurants_controller.rb:11:in `new' app/controllers/restaurants_controller.rb:11:in `create' Anyone have ANY clue how to get this to work? Thanks!
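    A sketch of the combination that normally makes this error go away, assuming standard Rails 3 nested attributes and leaving everything else as posted: the names exposed to mass assignment and to fields_for have to match the has_many association, i.e. jobs and jobs_attributes rather than job and job_attributes.

      # restaurant.rb
      attr_accessible :name, :jobs_attributes
      has_many :jobs
      accepts_nested_attributes_for :jobs, :allow_destroy => true

      # restaurants_controller.rb -- build a job so fields_for has something to render
      def new
        @restaurant = Restaurant.new
        @restaurant.jobs.build
        @user = current_user
      end

      # new.html.erb -- use the association name, plural
      <%= f.fields_for :jobs do |child_f| %>
        <%= child_f.label "Nickname" %>
        <%= child_f.text_field :shortname %>
      <% end %>

    With those changes the submitted parameters arrive as "jobs_attributes"=>{"0"=>{"shortname"=>"PD"}}, which is what accepts_nested_attributes_for expects to assign.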

    Read the article

  • RadGridView delete and update in ASP.NET

    - by abhi
    i have written the follwing to display data from the datagrid and den insert new rows but how do i perform update and delete plss help here's my code using System; using System.Data; using System.Configuration; using System.Collections; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using Telerik.Web.UI; using System.Data.SqlClient; public partial class Default6 : System.Web.UI.Page { string strQry, strCon; SqlDataAdapter da; SqlConnection con; DataSet ds; protected void Page_Load(object sender, EventArgs e) { strCon = "Data Source=MINETDEVDATA; Initial Catalog=ML_SuppliersProd; User Id=sa; Password=@MinetApps7;"; con = new SqlConnection(strCon); strQry = "SELECT * FROM table1"; da = new SqlDataAdapter(strQry, con); SqlCommandBuilder cmdbuild = new SqlCommandBuilder(da); ds = new DataSet(); da.Fill(ds, "table1"); RadGrid1.DataSource = ds.Tables["table1"]; RadGrid1.DataBind(); Label3.Visible = false; Label4.Visible = false; Label5.Visible = false; txtFname.Visible = false; txtLname.Visible = false; txtDesignation.Visible = false; } protected void Submit_Click(object sender, EventArgs e) { Label3.Visible = true; Label4.Visible = true; Label5.Visible = true; txtFname.Visible = true; txtLname.Visible = true; txtDesignation.Visible = true; } protected void Button2_Click(object sender, EventArgs e) { DataSet ds = new DataSet("EmployeeSet"); da.Fill(ds, "table1"); DataTable EmployeeTable = ds.Tables["table1"]; DataRow row = EmployeeTable.NewRow(); row["Fname"] = txtFname.Text.ToString(); row["Lname"] = txtLname.Text.ToString(); row["Designation"] = txtDesignation.Text.ToString(); EmployeeTable.Rows.Add(row); da.Update(ds, "table1"); //RadGrid1.DataSource = ds.Tables["table1"]; //RadGrid1.DataBind(); txtFname.Text = ""; txtLname.Text = ""; txtDesignation.Text = ""; } protected void RadGrid1_DeleteCommand(object source, GridCommandEventArgs e) { } } }
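    For the delete side, a minimal sketch that stays with plain ADO.NET rather than Telerik-specific data sourcing; it assumes table1 has a primary-key column named Id and that DataKeyNames="Id" is set on the grid's MasterTableView (both assumptions, adjust to the real schema). An update handler looks the same with an UPDATE ... WHERE Id = @Id statement and the edited values read from the grid item.

      protected void RadGrid1_DeleteCommand(object source, GridCommandEventArgs e)
      {
          // Read the key of the row being deleted (requires DataKeyNames="Id" on the MasterTableView).
          GridDataItem item = (GridDataItem)e.Item;
          string id = item.GetDataKeyValue("Id").ToString();

          using (SqlConnection con = new SqlConnection(strCon))
          using (SqlCommand cmd = new SqlCommand("DELETE FROM table1 WHERE Id = @Id", con))
          {
              cmd.Parameters.AddWithValue("@Id", id);
              con.Open();
              cmd.ExecuteNonQuery();
          }
          RadGrid1.Rebind();
      }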

    Read the article

  • Hibernate MappingException Unknown entity: $Proxy2

    - by slynn1324
    I'm using Hibernate annotations and have a VERY basic data object: import java.io.Serializable; import javax.persistence.Entity; import javax.persistence.Id; @Entity public class State implements Serializable { /** * */ private static final long serialVersionUID = 1L; @Id private String stateCode; private String stateFullName; public String getStateCode() { return stateCode; } public void setStateCode(String stateCode) { this.stateCode = stateCode; } public String getStateFullName() { return stateFullName; } public void setStateFullName(String stateFullName) { this.stateFullName = stateFullName; } } and am trying to run the following test case: public void testCreateState(){ Session s = HibernateUtil.getSessionFactory().getCurrentSession(); Transaction t = s.beginTransaction(); State state = new State(); state.setStateCode("NE"); state.setStateFullName("Nebraska"); s.save(s); t.commit(); } and get an org.hibernate.MappingException: Unknown entity: $Proxy2 at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:628) at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1366) at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:121) .... I haven't been able to find anything referencing the $Proxy part of the error - and am at a loss.. Any pointers to what I'm missing would be greatly appreciated. hibernate.cfg.xml <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property> <property name="connection.url">jdbc:hsqldb:hsql://localhost/xdb</property> <property name="connection.username">sa</property> <property name="connection.password"></property> <property name="current_session_context_class">thread</property> <property name="dialect">org.hibernate.dialect.HSQLDialect</property> <property name="show_sql">true</property> <property name="hbm2ddl.auto">update</property> <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property> <mapping class="com.test.domain.State"/> in HibernateUtil.java public static SessionFactory getSessionFactory(boolean testing ) { if ( sessionFactory == null ){ try { String configPath = HIBERNATE_CFG; AnnotationConfiguration config = new AnnotationConfiguration(); config.configure(configPath); sessionFactory = config.buildSessionFactory(); } catch (Exception e){ e.printStackTrace(); throw new ExceptionInInitializerError(e); } } return sessionFactory; }
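    The most likely culprit is a one-character slip in the test rather than the mapping: s.save(s) passes the Session itself to save(), and because getCurrentSession() hands back a proxy, Hibernate reports the proxy class ($Proxy2) as an unknown entity. A sketch of the corrected test body:

      State state = new State();
      state.setStateCode("NE");
      state.setStateFullName("Nebraska");
      s.save(state);   // save the entity, not the session
      t.commit();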

    Read the article

  • no such file to load -- rails (MissingSourceFile)... say what?!!

    - by Julian
    Hello, I'm having an obnoxious and weird problem while trying to include the ThinkingTank gem into my rails project. When I include gem 'thinkingtank' in my project's Gemfile I get the following error: ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require': no such file to load -- rails (MissingSourceFile) from ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/thinkingtank-0.0.5/lib/thinkingtank.rb:1 from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `each' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `require' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `each' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `require' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler.rb:112:in `require' from ~/git/myproject/config/boot.rb:121:in `load_environment' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/rails-2.3.5/lib/initializer.rb:137:in `process' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/rails-2.3.5/lib/initializer.rb:113:in `send' from ~/.rvm/gems/ree-1.8.7-2010.01/gems/rails-2.3.5/lib/initializer.rb:113:in `run' from ~/git/myproject/config/environment.rb:9 from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:254:in `require' from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:254:in `load_modules' from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:252:in `each' from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:252:in `load_modules' from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:21:in `setup' from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb.rb:54:in `start' from ~/.rvm/rubies/ree-1.8.7-2010.01/bin/irb:17 The output from ruby -v is: ruby 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.6.0], MBARI 0x6770, Ruby Enterprise Edition 2010.01 And the output from rails -v is: Rails 2.3.5 I've followed the basic guidelines from their documentation and from similar SA questions. But none of issues have the rails gem going missing.. And yes, we are including rails in our Gemfile =) Thank you in advance.

    Read the article

  • Zend Framework - where should this root.php file go for MVC?

    - by Joel
    Hi guys, I'm converting over a web-app to use the MVC structure of Zend Framework. I have a root.php include file that contains most of the database info, and some static variables that are used in the program. I'm not sure if some of this should be in the application.ini of in a model that is called by the init() function in a controller, or in the bootstrap or what? Any help would be much appreciated! root.php (include file at the top of every php page): <?php /*** //Configuration file */ ## Site Configuration starts ## define("SITE_ROOT" , dirname(__FILE__)); define("SITE_URL" , "http://localhost/monkeycalendarapp/monkeycalendarapp/public"); define('DB_HOST', "localhost"); define('DB_USER', "root"); define('DB_PASS', "xxx"); define('DB_NAME', "xxxxx"); define("PROJECT_NAME" , "Monkey Mind Manager (beta 2.2)"); //site title define("CALENDAR_WIDTH" , "300"); //left mini calendar width define("CALENDAR_HEIGHT" , "150"); //left mini calendar height $page_title = 'Event List'; $stylesheet_name = 'style.css'; //default stylesheet define("SITE_URL_AJAX" , SITE_URL . "/ajax-tooltip"); define("JQUERY" , SITE_URL . "/jquery-ui-1.7.2"); $a_times = array("12:00","12:30","01:00","01:30","02:00","02:30","03:00","03:30","04:00","04:30","05:00","05:30","06:00","06:30","07:00","07:30","08:00","08:30","09:00","09:30","10:00","10:30","11:00","11:30"); //PTLType Promotional timeline type $a_ptlType= array(1=>"Gigs","To-Do","Completed"); $a_days = array("Su","Mo","Tu","We","Th","Fr","Sa"); $a_timesMerd = array("12:00am","12:30am","01:00am","01:30am","02:00am","02:30am","03:00am","03:30am","04:00am","04:30am","05:00am","05:30am","06:00am","06:30am","07:00am","07:30am","08:00am","08:30am","09:00am","09:30am","10:00am","10:30am","11:00am","11:30am","12:00pm","12:30pm","01:00pm","01:30pm","02:00pm","02:30pm","03:00pm","03:30pm","04:00pm","04:30pm","05:00pm","05:30pm","06:00pm","06:30pm","07:00pm","07:30pm","08:00pm","08:30pm","09:00pm","09:30pm","10:00pm","10:30pm","11:00pm","11:30pm"); //Setting stylesheet for this user. $AMPM=array("am"=>"am","pm"=>"pm"); include(SITE_ROOT . "/includes/functions/general.php"); include(SITE_ROOT . "/includes/db.php"); session_start(); if(isset($_SESSION['userData']['UserID'])) { $s_userID = $_SESSION['userData']['UserID']; } $stylesheet_name = stylesheet(); ini_set('date.timezone', 'GMT'); date_default_timezone_set('GMT'); if($s_userID) { ini_set('date.timezone', $_SESSION['userData']['timezone']); date_default_timezone_set($_SESSION['userData']['timezone']); } ?>
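    One common Zend Framework 1 arrangement, sketched here with made-up key names: let application.ini carry the database settings (Zend_Application's db resource reads them) plus your own site.* keys, then expose those keys from a bootstrap method instead of defining globals in root.php. The option and registry names below are illustrative assumptions, not ZF requirements.

      ; application/configs/application.ini
      resources.db.adapter = "Pdo_Mysql"
      resources.db.params.host = "localhost"
      resources.db.params.username = "root"
      resources.db.params.password = "xxx"
      resources.db.params.dbname = "xxxxx"
      site.projectName = "Monkey Mind Manager (beta 2.2)"
      site.calendarWidth = 300
      site.calendarHeight = 150

      // application/Bootstrap.php
      protected function _initSiteConfig()
      {
          $options = $this->getOption('site');           // picks up the site.* keys above
          Zend_Registry::set('siteConfig', $options);    // later: Zend_Registry::get('siteConfig')
          return $options;
      }

    The timezone and session logic, and the helper includes, then belong in a front-controller plugin or another _init method rather than in a file included at the top of every page.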

    Read the article

  • Please explain this Java code

    - by soma
    I want to understand this code before my Java lab exam, especially the methods. import javax.swing.*; import java.util.*; import java.text.*; public class EnglishCalendar { public static String[] months = { "January" , "February" , "March", "April" , "May" , "June", "July" , "August" , "September", "October" , "November" , "December" }; public static int days[] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 }; private void showMonth(int m, int y) { int lead_spaces = 0; if (m < 0 || m > 11) { System.out.println("It should be 1 to 12"); } else { System.out.println(); System.out.println(" " + months[m] + " " + y); System.out.println(); GregorianCalendar cal = new GregorianCalendar(y, m, 0); System.out.println("Su Mo Tu We Th Fr Sa "); lead_spaces = cal.get(Calendar.DAY_OF_WEEK); int day_of_month = days[m]; if (cal.isLeapYear(cal.get(Calendar.YEAR)) && m == 1){ day_of_month++;} for (int i = 0; i < lead_spaces; i++) { System.out.print(" "); } for (int i = 1; i <= day_of_month; i++) { if (i < 10) System.out.print(" "); System.out.print(i); if ((lead_spaces + i) % 7 == 0) { System.out.println(); } else { System.out.print(" "); } } System.out.println(); } } private static void doSimpleDateFormat() { Calendar now = Calendar.getInstance(); SimpleDateFormat formatter = new SimpleDateFormat("E yyyy.MM.dd 'at' hh:mm:ss a zzz"); System.out.print(" \n It is now : " + formatter.format(now.getTime())); System.out.println(); } public static void main(String[] args) { String mo = JOptionPane.showInputDialog("Month"); String ye = JOptionPane.showInputDialog("Year"); int mon = new Integer(mo).intValue(); int yea = new Integer(ye).intValue(); EnglishCalendar k = new EnglishCalendar(); k.showMonth(mon - 1 , yea); doSimpleDateFormat(); } }

    Read the article

  • Tunnel is up but cannot ping directly connected network

    - by drmanalo
    We configured a site-to-site VPN and here is the topology. I control the network on the left but not the one on the right. All devices in our network has public IPs. Server---ASA5505---Cisco887======Internet=====ASA5510---devices I can see the tunnel is up and can do extended ping using a loopback interface. From the 10.175 and 10.165 networks, they can also ping my loopback address. I can also dial in using a Cisco VPN client, and can connect to the devices on the right. #show crypto session Crypto session current status Interface: Vlan3 Profile: xxx-profile Session status: UP-ACTIVE Peer: 213.121.x.x port 500 IKEv1 SA: local 77.245.x.x/500 remote 213.121.x.x/500 Active IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.175.0.0/255.255.128.0 Active SAs: 0, origin: crypto map IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.165.0.0/255.255.192.0 Active SAs: 2, origin: crypto map #ping 10.165.29.39 source loopback 2 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.165.29.39, timeout is 2 seconds: Packet sent with a source address of 10.0.20.1 !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 16/17/20 ms My problem is the devices on the right cannot reach my server. They could only ping the loopback address and nothing else. I'm pasting some diagnostics related to routing thinking perhaps routing is my issue. I can paste all the running-config on my side of network if needed. #show ip int brief Interface IP-Address OK? Method Status Protocol ATM0 unassigned YES NVRAM administratively down down Ethernet0 unassigned YES NVRAM administratively down down FastEthernet0 unassigned YES unset up up connected to ASA FastEthernet1 unassigned YES unset administratively down down FastEthernet2 unassigned YES unset administratively down down FastEthernet3 unassigned YES unset up up Loopback1 10.0.20.65 YES NVRAM up up Loopback2 10.0.20.1 YES NVRAM up up Virtual-Template1 77.245.x.x YES unset up down Virtual-Template2 77.245.x.x YES unset up down Vlan1 unassigned YES unset down down Vlan3 77.245.x.x YES NVRAM up up connected to the Internet #show run | section ip route ip route 0.0.0.0 0.0.0.0 77.245.x.x ip route 213.121.240.36 255.255.255.255 Vlan3 #show access-list Extended IP access list 102 10 permit ip 10.0.20.0 0.0.0.15 10.175.0.0 0.0.127.255 (3332 matches) 20 permit ip 10.0.20.0 0.0.0.15 10.165.0.0 0.0.63.255 (3498 matches) #show vlan-switch VLAN Name Status Ports ---- -------------------------------- --------- ------------------------------- 1 default active 3 VLAN0003 active Fa0, Fa1, Fa2, Fa3 1002 fddi-default act/unsup 1003 token-ring-default act/unsup 1004 fddinet-default act/unsup 1005 trnet-default act/unsup #show ip route Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2 i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, * - candidate default, U - per-user static route o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP + - replicated route, % - next hop override Gateway of last resort is 77.245.x.x to network 0.0.0.0 S* 0.0.0.0/0 [1/0] via 77.245.x.x 10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks C 10.0.20.0/28 is directly connected, Loopback2 L 10.0.20.1/32 is directly connected, Loopback2 C 10.0.20.64/28 is directly connected, Loopback1 L 10.0.20.65/32 is directly connected, Loopback1 S 
10.165.0.0/18 [1/0] via 213.121.x.x 77.0.0.0/8 is variably subnetted, 3 subnets, 3 masks S 77.0.0.0/8 [1/0] via 77.245.x.x C 77.245.x.x/29 is directly connected, Vlan3 L 77.245.x.x/32 is directly connected, Vlan3 213.121.x.0/32 is subnetted, 1 subnets S 213.121.x.x is directly connected, Vlan3 I read some of the posts here which lead to NATing issue but I'not sure of my next step. Should I translate my public address to private and route it to the loopback address? (only guessing) CISCO VPN site to site Site-to-Site VPN between two ASA 5505s only working in one direction Hope someone could help. Thanks in advance!

    Read the article

  • Shuffle tile positions at the beginning of the game (XNA, C#)

    - by GalneGunnar
    Im trying to create a puzzlegame where you move tiles to certain positions to make a whole image. I need help with randomizing the tiles startposition so that they don't create the whole image at the beginning. There is also something wrong with my offset, that's why it's set to (0,0). I know my code is not good, but Im just starting to learn :] Thanks in advance My Game1 class: { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Texture2D PictureTexture; Texture2D FrameTexture; // Offset för bildgraff Vector2 Offset = new Vector2(0,0); //skapar en array som ska hålla delar av den stora bilden Square[,] squareArray = new Square[4, 4]; // Random randomeraBilder = new Random(); //Width och Height för bilden int pictureHeight = 95; int pictureWidth = 144; Random randomera = new Random(); int index = 0; MouseState oldMouseState; int WindowHeight; int WindowWidth; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; //scalar Window till 800x 600y graphics.PreferredBackBufferWidth = 800; graphics.PreferredBackBufferHeight = 600; graphics.ApplyChanges(); } protected override void Initialize() { IsMouseVisible = true; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); PictureTexture = Content.Load<Texture2D>(@"Images/bildgraff"); FrameTexture = Content.Load<Texture2D>(@"Images/framer"); //Laddar in varje liten bild av den stora bilden i en array for (int x = 0; x < 4; x++) { for (int y = 0; y < 4; y++) { Vector2 position = new Vector2(x * pictureWidth, y * pictureHeight); position = position + Offset; Rectangle square = new Rectangle(x * pictureWidth, y * pictureHeight, pictureWidth, pictureHeight); Square frame = new Square(position, PictureTexture, square, Offset, index); squareArray[x, y] = frame; index++; } } } protected override void UnloadContent() { } protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); MouseState ms = Mouse.GetState(); if (oldMouseState.LeftButton == ButtonState.Pressed && ms.LeftButton == ButtonState.Released) { // ta reda på vilken position vi har tryckt på int col = ms.X / pictureWidth; int row = ms.Y / pictureHeight; for (int x = 0; x < squareArray.GetLength(0); x++) { for (int y = 0; y < squareArray.GetLength(1); y++) { // kollar om rutan är tom och så att indexet inte går utanför för "col" och "row" if (squareArray[x, y].index == 0 && col >= 0 && row >= 0 && col <= 3 && row <= 3) { if (squareArray[x, y].index == 0 * col) { //kollar om rutan brevid mouseclick är tom if (col > 0 && squareArray[col - 1, row].index == 0 || row > 0 && squareArray[col, row - 1].index == 0 || col < 3 && squareArray[col + 1, row].index == 0 || row < 3 && squareArray[col, row + 1].index == 0) { Square sqaure = squareArray[col, row]; Square hal = squareArray[x, y]; squareArray[x, y] = sqaure; squareArray[col, row] = hal; for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { Vector2 goalPosition = new Vector2(x * pictureWidth, y * pictureHeight); squareArray[x, y].Swap(goalPosition); } } } } } } } } //if (oldMouseState.RightButton == ButtonState.Pressed && ms.RightButton == ButtonState.Released) //{ // for (int x = 0; x < 4; x++) // { // for (int y = 0; y < 4; y++) // { // } // } //} oldMouseState = ms; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); WindowHeight = 
Window.ClientBounds.Height; WindowWidth = Window.ClientBounds.Width; Rectangle screenPosition = new Rectangle(0,0, WindowWidth, WindowHeight); spriteBatch.Begin(); spriteBatch.Draw(FrameTexture, screenPosition, Color.White); //Ritar ut alla brickorna förutom den som har index 0 for (int x = 0; x < 4; x++) { for (int y = 0; y < 4; y++) { if (squareArray[x, y].index != 0) { squareArray[x, y].Draw(spriteBatch); } } } spriteBatch.End(); base.Draw(gameTime); } } } My square class: class Square { public Vector2 position; public Texture2D grafTexture; public Rectangle square; public Vector2 offset; public int index; public Square(Vector2 position, Texture2D grafTexture, Rectangle square, Vector2 offset, int index) { this.position = position; this.grafTexture = grafTexture; this.square = square; this.offset = offset; this.index = index; } public void Draw(SpriteBatch spritebatch) { spritebatch.Draw(grafTexture, position, square, Color.White); } public void RandomPosition() { } public void Swap(Vector2 Goal ) { if (Goal.X > position.X) { position.X = position.X + 144; } else if (Goal.X < position.X) { position.X = position.X - 144; } else if (Goal.Y < position.Y) { position.Y = position.Y - 95; } else if (Goal.Y > position.Y) { position.Y = position.Y + 95; } } } }
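    For the randomizing part, a sketch of a Fisher-Yates shuffle over the 4x4 grid, using the existing randomera, squareArray, pictureWidth, pictureHeight and Offset fields; call it once at the end of LoadContent, then rewrite each square's position from its new slot so the tiles draw where they now live. One caveat: a fully random shuffle of a 15-style sliding puzzle is unsolvable about half the time, so for a real game it is safer to shuffle by simulating a long series of legal moves instead.

      // Fisher-Yates shuffle over the 4x4 grid, then reposition every tile.
      private void ShuffleTiles()
      {
          for (int i = 15; i > 0; i--)
          {
              int j = randomera.Next(i + 1);              // 0 <= j <= i
              int xi = i % 4, yi = i / 4;
              int xj = j % 4, yj = j / 4;
              Square temp = squareArray[xi, yi];
              squareArray[xi, yi] = squareArray[xj, yj];
              squareArray[xj, yj] = temp;
          }
          // After shuffling, every square is drawn at the slot it now occupies.
          for (int x = 0; x < 4; x++)
              for (int y = 0; y < 4; y++)
                  squareArray[x, y].position = new Vector2(x * pictureWidth, y * pictureHeight) + Offset;
      }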

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56  | Next Page >