Search Results

Search found 36032 results on 1442 pages for 'oracle enterprise service automation'.


  • Help with Magento and related products

    - by Anthony
    I have a custom product page that literally lives beside the catalog/product/view.phtml page. It's basically identical to that page with a few small exceptions: it's a 'product of the day' type page, so I can't combine it with the regular product page, since I have to fetch the data from the DB and perform a load to get the product information:

        $_product = Mage::getModel('catalog/product')->load($row['productid']);

    To make a long story short, everything works (including all child html blocks) with the singular exception of the related products. After the load I save the product into the registry with Mage::register('product', $_product); and then attempt to load the related products with:

        echo $this->getLayout()->createBlock('catalog/product_view')->setTemplate('catalog/product/list/related.phtml')->toHtml();

    All of which gives back the error: Fatal error: Call to a member function getSize() on a non-object in catalog/product/list/related.phtml on line 29, and line 29 is <?php if($this->getItems()->getSize()): ?>. Any help getting the related products to load would be appreciated.


  • Passing custom info to mongrel_rails start

    - by whaka
    One thing I really don't understand is how I can pass custom start-up options to a mongrel instance. I see that a common approach is to use environment variables, but in my environment this is not going to work, because my rails application serves many different clients. Much code is shared between clients, but there are also many differences, which I implement by subclassing controllers and views to overload or extend existing features or introduce new ones. To make this all work, I simply add the paths to client-specific modules to the module load path ($:). In order to start the application for a particular client, I could now use an environment variable like, say, TARGET=AMAZONE. Unfortunately, on some systems I'm running multiple mongrel clusters, each cluster serving a different client. Some of these systems run under Windows, and to start mongrel I installed mongrel_services. Clearly, this makes my environment variable unsuitable. Passing this extra bit of data to the application is proving to be a real challenge. For a start, mongrel_rails service_install will reject any [custom] command line parameters that aren't documented. I'm not too concerned, as installing the services using the install program is trivial. However, even if I manage to install mongrel_services such that when run it passes the custom command line option --target to mongrel_rails start, I get an error because mongrel_rails doesn't recognize the switch. So here were the things I looked at:

    1. Pass an extra parameter: mongrel_rails start --target XYZ ...
    2. Use a config file, add target: XYZ to it, then do: mongrel_rails start -C x:\myapp\myconfig.yml
    3. Modify the file Ruby\lib\ruby\gems\1.8\gems\mongrel-1.1.5-x86-mswin32-60\lib\mongrel\command.rb
    4. Perhaps use the --script option, but all the docs that I found on it were for Unix.

    1 and 2 simply don't work. I played with 4 but never managed to get it to do anything. So I had no choice but to go with 3. While it is relatively simple, I hate changing ruby library code. Particularly disappointing is that 2 doesn't work; I mean, what is so unreasonable about allowing other [custom] options in the config file? Actually, I think this is a fundamental piece that is missing in rails: somehow, the application should be able to register and access command line arguments it expects. If anybody has a good idea how to do this more elegantly using the current infrastructure, I have a chocolate fish to give away!


  • How to back up files using the backup APIs in C++

    - by user1603185
    I am writing an application that backs up specified files using the backup API calls, i.e. CreateFile, BackupRead and WriteFile. I am getting "Access violation reading location" errors. I have attached the code below (cleaned up, with the bugs noted in comments):

        #include <windows.h>

        int main()
        {
            // m_filename is a variable holding the file path to read from.
            HANDLE hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL,
                                       OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
            // strLocation contains the path of the file I want to create.
            // Note: this must name a file, not a directory such as L"C:\\tmp\\",
            // and the share-mode/flags parameters take 0 / FILE_ATTRIBUTE_NORMAL, not NULL.
            HANDLE hOutput = CreateFile(L"C:\\tmp\\Key.bak", GENERIC_WRITE, 0, NULL,
                                        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            if (hInput == INVALID_HANDLE_VALUE || hOutput == INVALID_HANDLE_VALUE)
                return 1;

            const DWORD dwBytesToRead = 1024 * 1024 * 10;
            BYTE *buffer = new BYTE[dwBytesToRead];
            BOOL bReadSuccess = FALSE, bWriteSuccess = FALSE;
            DWORD dwBytesRead = 0, dwBytesWritten = 0;
            // The context pointer must start out NULL: BackupRead allocates it on the
            // first call. Leaving it uninitialized is what causes the access violation.
            LPVOID lpContext = NULL;

            // Now comes the important bit:
            do {
                bReadSuccess = BackupRead(hInput, buffer, dwBytesToRead,
                                          &dwBytesRead, FALSE, TRUE, &lpContext);
                if (!bReadSuccess || dwBytesRead == 0)
                    break;
                bWriteSuccess = WriteFile(hOutput, buffer, dwBytesRead,
                                          &dwBytesWritten, NULL);
            } while (dwBytesRead == dwBytesToRead);

            // A final call with bAbort = TRUE releases the context BackupRead allocated.
            BackupRead(hInput, NULL, 0, &dwBytesRead, TRUE, FALSE, &lpContext);

            delete[] buffer;
            CloseHandle(hInput);
            CloseHandle(hOutput);
            return 0;
        }

    Can anyone suggest how to use these APIs correctly? Thanks.


  • Securing Web Services: is this approach valid?

    - by NBrowne
    Hi, currently I am looking at securing our web services. At the moment we are not using WCF, so that is not an option. One approach I have seen, and implemented locally fairly easily, is the one described in this article: http://www.codeproject.com/KB/aspnet/wsFormsAuthentication.aspx It describes adding an HttpModule which prompts for user credentials if the user browses to any pages (web services) contained in a services folder. Does anyone see any way that this security could fall down or be bypassed? I'm really just trying to decide whether this is a valid approach to take or not. Thanks


  • Java EE and JDK

    - by user269799
    I want to move from Java SE to Java EE, and I will be using some of the sample projects that come with Java EE. I have uninstalled the JDK, but I think this may have been a mistake. When I download the latest Java EE (6), the installer asks me for the location of the JDK (which is uninstalled). I was under the impression that the JDK was specific to each version of Java, i.e. SE or EE. Am I wrong here? I would have thought that when I download Java EE 6, it actually is the EE JDK. Can anybody please clarify this for me? GF


  • How to Audit Database Activity without Performance and Scalability Issues?

    - by GotoError
    I need to audit all database activity, regardless of whether it came from an application or from someone issuing SQL via other means, so the auditing must be done at the database level. The database in question is Oracle. I looked at doing it via triggers and also via something called Fine Grained Auditing (FGA) that Oracle provides. In both cases, we turned on auditing on specific tables and specific columns. However, we found that performance really suffers when we use either of these methods. Since auditing is an absolute must due to regulations around data privacy, I am wondering what the best way is to do this without significant performance degradation. Oracle-specific experience with this would be helpful, but general practices around database activity auditing are welcome as well.
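    For reference, Fine Grained Auditing policies are registered through the DBMS_FGA package. Below is a minimal sketch of doing that from Java over JDBC; the connection details, schema, table and policy names are placeholders, and an Oracle JDBC driver is assumed to be on the classpath:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class AddFgaPolicy {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "admin_user", "secret");

                // Register an FGA policy that audits SELECTs and UPDATEs touching
                // the SALARY column only, rather than every access to the table.
                String plsql =
                    "BEGIN " +
                    "  DBMS_FGA.ADD_POLICY(" +
                    "    object_schema   => 'HR'," +
                    "    object_name     => 'EMPLOYEES'," +
                    "    policy_name     => 'AUDIT_EMP_SALARY'," +
                    "    audit_column    => 'SALARY'," +
                    "    statement_types => 'SELECT,UPDATE'); " +
                    "END;";

                try (CallableStatement cs = conn.prepareCall(plsql)) {
                    cs.execute();
                }
                conn.close();
            }
        }

    Keeping policies narrow, auditing only the sensitive columns and adding an audit_condition where possible, is the usual lever for containing FGA's overhead.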


  • Handling null values with PowerShell dates

    - by Tim Ferrill
    I'm working on a module to pull data from Oracle into a PowerShell data table, so I can automate some analysis and perform various actions based on the results. Everything seems to be working, and I'm casting columns into specific types based on the column type in Oracle. The problem I'm having has to do with null dates. I can't seem to find a good way to capture that a date column in Oracle has a null value. Is there any way to cast a [datetime] as null or empty?
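    In case it helps to see the contract the data provider is following, here is the same situation sketched in Java over JDBC (table, column and connection details are placeholders): a NULL DATE column comes back as a null reference rather than as any sentinel date, so whatever receives it has to be of a nullable type. The PowerShell analogue is that a plain [datetime] cannot hold a null, which matches the symptom described above.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.sql.Timestamp;

        public class NullDateDemo {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");

                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT hire_date FROM employees")) {
                    while (rs.next()) {
                        // NULL comes back as a null reference, not a magic date.
                        Timestamp hired = rs.getTimestamp("hire_date");
                        System.out.println(hired == null ? "hire_date is NULL" : hired.toString());
                    }
                }
                conn.close();
            }
        }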


  • Problem with index server talking to remote server names with dashes or dots in them

    - by Aim Kai
    Hi, I am having a problem accessing a remote index server catalog. The name of the server has a dash in it, so I put the index catalog name in as, e.g., num.num.num.num\name of catalog or an-example-server. I get the following error when using an OLE DB connection to pull results from the index:

        "Format of the initialization string does not conform to specification starting at index 39"

    I tried putting single quotes and &quot; around it with no luck. Anyone have an idea? PS. This is a Microsoft Index Server question!


  • How to prevent the symbol "&" from being replaced by "&amp;"

    - by tonsils
    Hi, hoping someone could please let me know how to prevent the symbol "&" from being replaced by "&amp;" within my URL, specifically within JavaScript. Just to expand on the requirement: I am getting my URL from an Oracle database table, which I then use within Oracle Application Express to set the src attribute of an iframe to this URL. FYI, the URL stored in the Oracle table is actually stored correctly, i.e.

        http://mydomain.com/xml/getInfo?s=pvalue1&f=mydir/Summary.xml

    What appears in use when trying to pass it over into the iframe src using JavaScript is:

        http://mydomain.com/xml/getInfo?s=pvalue1&amp;f=mydir/Summary.xml

    which basically returns "page cannot be found". Hope this clarifies the issue. Thanks.
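    For what it's worth, the repair itself is a single entity decode once the string is in hand; in the APEX page the equivalent replacement would live in the JavaScript that assigns the iframe src. A tiny Java sketch of the idea (URL value copied from the question):

        public class DecodeAmp {
            public static void main(String[] args) {
                String stored =
                    "http://mydomain.com/xml/getInfo?s=pvalue1&amp;f=mydir/Summary.xml";
                // Undo the one entity that breaks the query string.
                String usable = stored.replace("&amp;", "&");
                System.out.println(usable);
            }
        }

    The deeper fix is finding which layer escapes the stored value on its way out (output escaping somewhere between the table and the page, which APEX often applies by default), so the URL never needs repairing at all.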


  • Problem with Regex Validator - VAB

    - by sarae
    Hi, I do the validation through configuration files, but the RegexValidator does not work properly: it does not behave correctly even when given an unknown (invalid) regular expression. Do you know about this problem? Many thanks!


  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    - Service Bus Management and Monitoring
    - Support for Managing Co-administrators
    - Import/Export support for SQL Databases
    - Virtual Machine Experience Enhancements
    - Improved Cloud Service Status Notifications
    - Media Services Monitoring Support
    - Storage Container Creation and Access Control Support

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Service Bus Management and Monitoring

    The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions.

    To create a Service Bus namespace, you can now select the "Service Bus" tab in the Windows Azure portal and then simply select the CREATE command. Doing so will bring up a new "Create a Namespace" dialog that allows you to name and create a new Service Bus Namespace. Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials.

    It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer. Simply click the NEW command and navigate to the "App Services" category to create a new Service Bus entity. Once you provision a new Queue or Topic it can be managed in the portal. Clicking on a namespace will display all queues and topics within it, and clicking on an item in the list will allow you to drill down into a dashboard view that lets you monitor the activity and traffic within it, as well as perform operations on it. For example, a dashboard for an "orders" queue now surfaces both the incoming and outgoing message flow rate, as well as the total queue length and queue size. To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor.

    Support for Managing Co-administrators

    You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that the service administrator has, except a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription.

    In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription. To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to. You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then selecting or deselecting the subscriptions to which they belong.

    Import/Export Support for SQL Databases

    The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage. Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012. Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments.

    SQL Databases now have an EXPORT command in the bottom drawer that, when pressed, will prompt you to save your database to a Windows Azure storage container. The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage. You can also now import and create a new SQL Database by using the NEW command; this will prompt you to select the storage container and file to import the database from.

    The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server; this shows your entire import and export history and the status (success/fail) of each.

    Enhancements to the Virtual Machine Experience

    One of the common pain points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today's release the VHDs would continue to sit in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command. When you click the DELETE DISK button you have the option to delete the disk and the associated .VHD file (completely clearing it from storage). Alternatively, you can delete the disk but still retain a .VHD copy of it in storage.

    Improved Cloud Service Status Notifications

    The Windows Azure portal now exposes more information about the health status of role instances. If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the situation (and update automatically as the role health changes). Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view and allow you to get more detailed health status of each of the instances. The portal has been updated to provide more specific status information within this detailed view, giving you better visibility into the health of your app.

    Monitoring Support for Media Services

    Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for the last 6 hours, 24 hours or 7 days.

    Storage Container Creation and Access Control Support

    You can now create Windows Azure Storage containers from within the Windows Azure Portal. After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command. This will display a dialog that lets you name the new container and control access to it. You can also update the access setting as well as the container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command; this will bring up the edit container dialog that allows you to change and save its settings. In addition to creating and editing containers, you can click on them within the portal to drill in and view the blobs within them.

    Summary

    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    We'll have even more new features and enhancements coming later this month, including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Installed Windows 7 Ultimate on D Drive and previous Windows 7 Enterprise on C Drive has stopped starting up

    - by teenup
    Please help! I have installed Windows 7 Ultimate on the D drive of my laptop, and the previous Windows 7 Enterprise, which was installed on the C drive, is not booting up now. When I turn on my laptop, I see two Windows 7 entries on the screen. When I select the newer one, it starts, but when I select the older one, which is the Enterprise edition, the system won't start and I get a black screen with this error message:

        Windows Boot Manager
        Windows failed to start. A recent hardware or software change might be the cause.
        To fix the problem:
          1. Insert your Windows installation disc and restart your computer.
          2. Choose your language settings, and then click "Next."
          3. Click "Repair your computer."
        Info: The boot selection failed because a required device is inaccessible.

    I notice that when I run the newer OS, the previous OS's drive (which is D: now instead of C:) has become unusable, and when I double-click it, it asks me to format the drive. The data that I had on my D drive (which is now the C drive for the new OS) I had copied to a network path, and it is available; it contained the Windows 7 Users folder, which I copied when installing the new Windows. I have copied that Users folder again to the new OS's C drive, thinking it would run again, but to no avail. Please help if you can; it is extremely important for me. Thanks a lot in advance.


  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London.

    Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters, including rain, wind and snow, has surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then came along Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history.

    Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront in setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT.

    Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to continue throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence.

    Advances in technology, such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.

    Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top.

    An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systemic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage, especially the end customer.

    One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years: Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassigning of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video.

    John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.


  • How can I install oracle-java7 from webupd8 ppa?

    - by Ahmed Zain El Dein
    I installed ppa:webupd8team/java and I get the following error. Output from sudo apt-get install oracle-java7-installer:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          binfmt-support visualvm ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-kochi-gothic
          ttf-sazanami-gothic ttf-kochi-mincho ttf-sazanami-mincho ttf-arphic-uming
        The following packages will be upgraded:
          oracle-java7-installer
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/16.0 kB of archives.
        After this operation, 64.5 kB of additional disk space will be used.
        Could not exec dpkg!
        E: Sub-process /usr/bin/dpkg returned an error code (100)

    Afterwards I ran the following lines, trying to resolve the issue, because dpkg does not actually exist in /usr/bin:

        mkdir /tmp/dpkg
        cd /tmp/dpkg
        wget http://archive.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.15.5.6ubuntu4_i386.deb
        ar x dpkg*.deb data.tar.gz
        tar xfvz data.tar.gz ./usr/bin/dpkg
        sudo cp ./usr/bin/dpkg /usr/bin/
        sudo apt-get update
        sudo apt-get install --reinstall dpkg

    Then I get this:

        $ sudo apt-get install --reinstall dpkg
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 6 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/1,814 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        dpkg: warning: 'dpkg-deb' not found on PATH.
        dpkg: 1 expected program(s) not found on PATH.
        NB: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    How can I fix this?


  • How can I generate a client proxy for a WCF service with an HTTPS endpoint?

    - by ng5000
    Might be the same issue as this previous question (WCF Proxy), but not sure... I have an HTTPS service configured to use transport security and, I hope, Windows credentials. The service is only accessed internally (i.e. within the intranet). The configuration is as follows:

        <configuration>
          <system.serviceModel>
            <services>
              <service name="WCFTest.CalculatorService" behaviorConfiguration="WCFTest.CalculatorBehavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="https://localhost:8000/WCFTest/CalculatorService/" />
                  </baseAddresses>
                </host>
                <endpoint address="basicHttpEP" binding="basicHttpBinding"
                          contract="WCFTest.ICalculatorService"
                          bindingConfiguration="basicHttpBindingConfig" />
                <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
              </service>
            </services>
            <bindings>
              <basicHttpBinding>
                <binding name="basicHttpBindingConfig">
                  <security mode="Transport">
                    <transport clientCredentialType="Windows" />
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WCFTest.CalculatorBehavior">
                  <serviceAuthorization impersonateCallerForAllOperations="false"
                                        principalPermissionMode="UseWindowsGroups" />
                  <serviceCredentials>
                    <windowsAuthentication allowAnonymousLogons="false" includeWindowsGroups="true" />
                  </serviceCredentials>
                  <serviceMetadata httpsGetEnabled="True" />
                  <serviceDebug includeExceptionDetailInFaults="False" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    When I run the service I can't see it in IE; I get a "this page cannot be displayed" error. If I try to create a client in VS2008 via the "Add Service Reference" wizard, I get this error:

        There was an error downloading 'https://localhost:8000/WCFTest/CalculatorService/'.
        There was an error downloading 'https://localhost:8000/WCFTest/CalculatorService/'.
        The underlying connection was closed: An unexpected error occurred on a send.
        Authentication failed because the remote party has closed the transport stream.
        Metadata contains a reference that cannot be resolved: 'https://localhost:8000/WCFTest/CalculatorService/'.
        An error occurred while making the HTTP request to https://localhost:8000/WCFTest/CalculatorService/.
        This could be due to the fact that the server certificate is not configured properly with
        HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding
        between the client and the server.
        The underlying connection was closed: An unexpected error occurred on a send.
        Authentication failed because the remote party has closed the transport stream.
        If the service is defined in the current solution, try building the solution and adding the
        service reference again.

    I think I'm missing some fundamental basics here. Do I need to set up some certificates? Or should it all just work, as it seems to do when I use NetTcpBinding? Thanks


  • tapestry 4 session expired

    - by cometta
    Is the error below caused by the user session expiring? If yes, how do I extend the session in Tapestry 4? Or is there another way to solve this problem?

        Unable to process client request: Unable to forward to local resource '/app?service=page&page=Home&id=692':
        java.lang.NullPointerException: Property 'webRequest' of <OuterProxy for
        tapestry.globals.RequestGlobals(org.apache.tapestry.services.RequestGlobals)> is null.

        Apr 22, 2010 5:14:43 PM org.apache.catalina.core.ApplicationContext log
        SEVERE: app: ServletException
        javax.servlet.ServletException: java.lang.NullPointerException: Property 'webRequest' of <OuterProxy for
        tapestry.globals.RequestGlobals(org.apache.tapestry.services.RequestGlobals)> is null.
            at org.apache.tapestry.services.impl.WebRequestServicerPipelineBridge.service(WebRequestServicerPipelineBridge.java:65)
            at $ServletRequestServicer_128043b52ea.service($ServletRequestServicer_128043b52ea.java)
            at org.apache.tapestry.request.DecodedRequestInjector.service(DecodedRequestInjector.java:55)
            at $ServletRequestServicerFilter_128043b52e6.service($ServletRequestServicerFilter_128043b52e6.java)
            at $ServletRequestServicer_128043b52ec.service($ServletRequestServicer_128043b52ec.java)
            at org.apache.tapestry.multipart.MultipartDecoderFilter.service(MultipartDecoderFilter.java:52)
            at $ServletRequestServicerFilter_128043b52e4.service($ServletRequestServicerFilter_128043b52e4.java)
            at $ServletRequestServicer_128043b52ec.service($ServletRequestServicer_128043b52ec.java)
            at org.apache.tapestry.services.impl.SetupRequestEncoding.service(SetupRequestEncoding.java:53)
            at $ServletRequestServicerFilter_128043b52e8.service($ServletRequestServicerFilter_128043b52e8.java)
            at $ServletRequestServicer_128043b52ec.service($ServletRequestServicer_128043b52ec.java)
            at $ServletRequestServicer_128043b52de.service($ServletRequestServicer_128043b52de.java)
            at org.apache.tapestry.ApplicationServlet.doService(ApplicationServlet.java:126)
            at org.apache.tapestry.ApplicationServlet.doPost(ApplicationServlet.java:171)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:378)
            at org.springframework.security.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:109)
            at org.springframework.security.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83)
            at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            at org.springframework.security.ui.SessionFixationProtectionFilter.doFilterHttp(SessionFixationProtectionFilter.java:67)
            at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            at org.springframework.security.ui.ntlm.NtlmProcessingFilter.doFilterHttp(NtlmProcessingFilter.java:358)
            at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            at org.springframework.security.ui.ExceptionTranslationFilter.doFilterHttp(ExceptionTranslationFilter.java:101)
            at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            at org.springframework.security.context.HttpSessionContextIntegrationFilter.doFilterHttp(HttpSessionContextIntegrationFilter.java:235)
            at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            at org.springframework.security.concurrent.ConcurrentSessionFilter.doFilterHttp(ConcurrentSessionFilter.java:99)
            at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            at org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:175)
            at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:236)
            at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)
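    If it does turn out to be session expiry: for a Tapestry 4 application the session length is set at the servlet-container level rather than in Tapestry itself, so the usual knob is a web.xml entry along these lines (the 60-minute value is only an example):

        <session-config>
            <session-timeout>60</session-timeout>
        </session-config>

    Beyond lengthening the timeout, the other common route is to make the page tolerate an expired session (for example, redirecting to the Home page when state is missing) rather than trying to keep sessions alive indefinitely.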


  • Blogger GData NullPointerException on every second try on Android

    - by Vinay
    I am experiencing a strange phenomenon with the Blogger GData API 2.0 on Android. I am using the BloggerService to retrieve blogs. The first time it works fine. However, EVERY SECOND TRY I get a NullPointerException:

        Caused by: java.lang.NullPointerException
            at com.google.gdata.wireformats.AltRegistry.lookupType(AltRegistry.java:190)
            at com.google.gdata.client.Service.parseResponseData(Service.java:1860)
            at com.google.gdata.client.Service.getFeed(Service.java:1054)
            at com.google.gdata.client.Service.getFeed(Service.java:916)
            at com.google.gdata.client.GoogleService.getFeed(GoogleService.java:631)
            at com.google.gdata.client.Service.getFeed(Service.java:935)
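    For context, here is a minimal sketch of the kind of call that hits Service.getFeed (and, in the report above, dies in AltRegistry.lookupType on the second attempt). The application name and credentials are placeholders, the generic com.google.gdata.data.Feed class stands in for the real feed type, and the gdata-blogger client jar is assumed to be on the classpath:

        import java.net.URL;

        import com.google.gdata.client.blogger.BloggerService;
        import com.google.gdata.data.Feed;

        public class FetchBlogs {
            public static void main(String[] args) throws Exception {
                // Placeholder application name and credentials.
                BloggerService service = new BloggerService("example-fetch-blogs");
                service.setUserCredentials("user@example.com", "password");

                // The authenticated user's blog list (the Blogger "metafeed").
                URL feedUrl = new URL("http://www.blogger.com/feeds/default/blogs");

                // This is the call chain shown in the stack trace above.
                Feed feed = service.getFeed(feedUrl, Feed.class);
                System.out.println("Blogs found: " + feed.getEntries().size());
            }
        }

    One thing worth checking against this sketch is whether the failing code reuses a single BloggerService instance across requests; the alternating pass/fail pattern hints at state carried over between calls, and constructing a fresh service per fetch is a cheap experiment.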


  • Nagios core Event Handler not working

    - by sivashanmugam
    The Nagios event handler is not triggering when the service takes too long to respond or is down. My configuration is below.

    nagios.cfg:

        enable_event_handlers=1

    localhost.cfg:

        define service {
            use                     generic-service
            host_name               Server
            service_description     test-server
            servicegroups           test-service
            check_command           check-service
            is_volatile             0
            check_period            24x7
            max_check_attempts      4
            normal_check_interval   2
            retry_check_interval    2
            contact_groups          testcontacts
            notification_period     24x7
            notification_options    w,u,c,r
            notifications_enabled   1
            event_handler_enabled   1
            event_handler           recheck-service
        }

    command.cfg:

        define command {
            command_name recheck-service
            command_line /usr/local/nagios/libexec/alert.sh $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
        }

    alert.sh (the script is cut off at the end as posted):

        #!/bin/sh
        set -x

        case "$1" in
        OK)
            # The service just came back up, so don't do anything...
            ;;
        WARNING)
            # We don't really care about warning states, since the service is probably still running...
            ;;
        UNKNOWN)
            # We don't know what might be causing an unknown error, so don't do anything...
            ;;
        CRITICAL)
            # Aha! The HTTP service appears to have a problem - perhaps we should restart the server...
            # Is this a "soft" or a "hard" state?
            case "$2" in
            # We're in a "soft" state, meaning that Nagios is in the middle of retrying the check
            # before it turns into a "hard" state and contacts get notified...
            SOFT)
                # What check attempt are we on? We don't want to restart the web server on the
                # first check, because it may just be a fluke!
                case "$3" in
                # Wait until the check has been tried 3 times before restarting the web server.
                # If the check fails on the 4th time (after we restart the web server), the state
                # type will turn to "hard" and contacts will be notified of the problem.
                # Hopefully this will restart the web server successfully, so the 4th check will
                # result in a "soft" recovery. If that happens no one gets notified because we
                # fixed the problem!
                3)
                    echo -n "Going To Ping the Virtual Machine (3rd soft critical state)..."
                    # Call the init script to restart the HTTPD server
                    myresult=`/usr/local/nagios/libexec/check_http xyz.com -t 100 | grep 'time' | awk '{print $10}'`
                    echo "Your Service Is taking the following time Delay" "$myresult Seconds" | mail -s "WARNING : Service Taken More Time To Response" user@domain.com
                    ;;
                esac
                ;;
            # The HTTP service somehow managed to turn into a hard error without getting fixed.
            # It should have been restarted by the code above, but for some reason it didn't.
            # Let's give it one last try, shall we?
            # Note: Contacts have already been notified of a problem with the service at this


  • How to prevent direct access to my JSON service?

    - by FrankLy
    I have a JSON web service that returns home markers to be displayed on my Google Map. Essentially, http://example.com calls the web service to find out the location of all map markers to display, like so:

        http://example.com/json/?zipcode=12345

    And it returns a JSON string such as:

        {"address": "321 Main St, Mountain View, CA, USA", ...}

    So on my index.html page, I take that JSON string and place the map markers. However, what I don't want is people calling my JSON web service directly. I only want http://example.com/index.html to be able to call my http://example.com/json/ web service, not some random dude calling /json/ directly. Question: how do I prevent direct calling/access to my http://example.com/json/ web service?
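    There is no airtight way to do this: anything index.html's JavaScript can send, a determined client can replay. The usual mitigation is to tie the JSON endpoint to state that only a prior page view establishes. As a rough illustration of that pattern (the question doesn't name a server stack, so this is a hypothetical Java servlet filter and the attribute, parameter and path names in it are illustrative):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.*;

        // Hypothetical guard for /json/*: only requests carrying the token that
        // a prior render of index.html placed in the session get through.
        public class JsonGuardFilter implements Filter {
            @Override
            public void init(FilterConfig config) {}

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;

                HttpSession session = request.getSession(false);
                Object expected = (session == null) ? null : session.getAttribute("json-token");

                // index.html would embed this token in its JavaScript when rendered.
                String supplied = request.getParameter("token");

                if (expected == null || !expected.equals(supplied)) {
                    response.sendError(HttpServletResponse.SC_FORBIDDEN);
                    return;
                }
                chain.doFilter(req, res);
            }

            @Override
            public void destroy() {}
        }

    A scripted caller can still visit index.html first to pick up the token, so this raises the bar rather than closing the door; that trade-off is inherent to any JSON endpoint a public page consumes.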


  • How to debug a WCF Service with an HTTP Context?

    - by JL
    I need to debug a WCF service, but it needs to have an HttpContext. Currently I have a solution with a WCF service web site; when I click debug, it starts up and then fires up an HTML page that contains no test form. While the project is running I tried starting wcftestclient manually and provided the address of my service. It finds the service, but when I invoke it, it bypasses the IIS layer (or the development server), so the HttpContext is null... What is the correct way to debug a WCF service through an IIS context?


  • The difference between the 'Local System' account and the 'Network Service' account?

    - by jmatthias
    I have written a Windows service that spawns a separate process. This process creates a COM object. If the service runs under the 'Local System' account everything works fine, but if the service runs under the 'Network Service' account, the external process starts up but it fails to create the COM object. The error returned from the COM object creation is not a standard COM error (I think it's specific to the COM object being created). So, how do I determine how the two accounts, 'Local System' and 'Network Service' differ? These built-in accounts seem very mysterious and nobody seems to know much about them.


  • ASP.NET: Why can't I call a web-service twice in one page?

    - by Shay
    Hi all! I built a web application which calls a web service twice on the application's log-in page: first on Page_Load, and a second time after a button click. When debugging, everything goes well, but after publishing the web application and trying it, I cannot make both calls to the web service (each call invokes a different function in the same web service). If I call it once (say, in Page_Load) it's OK, but once I get to the button click event, the page just seems to be loading but actually doing nothing (loading to infinity). When I disabled the web-service call in Page_Load, the web-service call after the button click worked well, and if I switched between them (disabled the call in the button click and enabled the call in Page_Load) the enabled one worked OK. How is this happening? What did I do wrong? Could it be related to the fact that the location where I published my web application has some URL-rewriting rules?

