Search Results

Search found 21644 results on 866 pages for 'connection speed'.

Page 673/866

  • Calling Web Page from Windows Service

    - by David Moorhouse
    I have a Windows 2008 server. It has an instance of IIS7 installed and a website that is accessible only via an internal hostname defined in the server's hosts file; e.g. the server IP is 10.0.6.11 and the hosts file contains the entry 10.0.6.11 hl7.ourname.org. I would like to call a URL on the website, e.g. "http://hl7.ourname.org/updatestatus", from a Windows service. I can call the web site from an instance of IE7 running on the server. However, if I try to call it from a Windows service I get a "Socket Error # 10061 Connection refused." This looks like a permission issue. The service is running as "Network Service", but maybe I should be running it as a different user. Thanks
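
    A minimal sketch of the call from managed code, for comparison with what the service is doing now. The URL is taken from the question; setting Proxy to null is an assumption worth testing, because a service running as Network Service does not inherit the per-user IE proxy settings that may be making the request work in the browser:

        using System;
        using System.IO;
        using System.Net;

        class StatusCaller
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://hl7.ourname.org/updatestatus");
                request.Proxy = null;    // assumption: rule out per-user proxy lookups
                request.Timeout = 10000; // fail fast while diagnosing

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine("{0}: {1}", response.StatusCode, reader.ReadToEnd());
                }
            }
        }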

    Read the article

  • SLES 9 vs. SLES 10

    - by Michael Covelli
    Are there any important changes in how SLES 10 implements TCP sockets vs. SLES 9? I have several apps written in C# (.NET 3.5) that run on Windows XP and Windows Server 2003. They've been running fine for over a year, getting market data from a SLES 9 machine over a socket connection. The machine was upgraded today to SLES 10 and it's causing some strange behavior. The socket normally returns a few hundred or thousand bytes every second. But occasionally I stop receiving data: ten or more seconds will go by with no data, and then Receive returns with 10k+ bytes. And some buffer is causing data loss, because the bytes I receive on the socket no longer make a correct packet. The only thing that changed was the SLES 9 to 10 upgrade, and rolling back fixes this immediately. Any ideas?
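
    Whatever changed in the kernel's coalescing behavior between the two releases, TCP guarantees only a byte stream, never message boundaries, so a receiver that assumes one Receive call per application packet can break in exactly this way. A sketch of a framing loop that tolerates arbitrary segmentation, assuming a hypothetical 4-byte little-endian length prefix on each message:

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class Framing
        {
            // Read exactly count bytes, looping until the stream delivers them all.
            static byte[] ReadExactly(NetworkStream stream, int count)
            {
                var buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = stream.Read(buffer, offset, count - offset);
                    if (read == 0) throw new EndOfStreamException("peer closed connection");
                    offset += read;
                }
                return buffer;
            }

            // Assumption: each message carries a 4-byte little-endian length prefix.
            public static byte[] ReadMessage(NetworkStream stream)
            {
                int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
                return ReadExactly(stream, length);
            }
        }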

    Read the article

  • Browser won't connect to svn server

    - by Tammy Wilson
    This has been driving me nuts. For some reason, I can't access my svn repository using a browser on this laptop that I'm using right now (Firefox and IE); the connection just times out. I'm at home right now and the server is in another room. It connects OK there, and it also connects OK from a virtual machine on this same laptop. I'm pretty stumped and can't figure out why this is happening. I've also checked the proxies and I'm 100% sure I'm not using any at all. The virtual machine running on this laptop is XP 32-bit and this one is Win7 64-bit. Thanks

    Read the article

  • Poco SocketReactor Scalability

    - by Genesis
    I have written a proxy server for Linux using Poco, but have since been reading up on the various approaches to achieving TCP/IP server scalability. I will need the server to handle persistent connections (not HTTP traffic) with an upper limit of about 250 simultaneous connections. Each connection typically uses about 5-10Kb/sec, and the best possible latency in handling traffic is crucial. As it stands, I am using the Poco SocketReactor, which uses the Reactor model with a select() call at its heart; however, I have read up on the C10K problem as well as a few other resources, and it seems that this approach might not be the best idea. I believe there is a test implementation in the Poco libs that uses poll(), so that could be an option to improve things. Does anyone have any experience using a Poco SocketReactor, and do you have any idea how well it might scale for my scenario? If it will not scale well, suggestions on alternatives would be appreciated.
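
    As a point of reference: 250 sockets is well below where select() usually hurts (FD_SETSIZE is typically 1024, and the linear scan over 250 descriptors is cheap), so it is worth profiling before switching. If the reactor does become the bottleneck on Linux, an epoll loop is the usual replacement; a bare-bones sketch of the idea, not Poco-specific:

        #include <sys/epoll.h>
        #include <cstdio>

        // Minimal epoll readiness loop: register each socket once, then block
        // until some of them have data, instead of rebuilding and rescanning
        // an fd_set on every iteration the way select() does.
        void runLoop(const int* sockets, int count)
        {
            int epfd = epoll_create1(0);
            for (int i = 0; i < count; ++i)
            {
                epoll_event ev = {};
                ev.events = EPOLLIN;
                ev.data.fd = sockets[i];
                epoll_ctl(epfd, EPOLL_CTL_ADD, sockets[i], &ev);
            }

            epoll_event events[64];
            for (;;)
            {
                int n = epoll_wait(epfd, events, 64, -1); // -1: block indefinitely
                for (int i = 0; i < n; ++i)
                {
                    // handleReadable(events[i].data.fd); // hypothetical dispatch
                    std::printf("fd %d readable\n", events[i].data.fd);
                }
            }
        }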

    Read the article

  • SQL Server Account will work for SQL Server but fails for ReportServer

    - by Larry
    Setting up SSRS. For the Service Account, I set a domain account as the Report Server Service Account and click apply. The SQL Server Connection Dialog pops up; I need to specify an account with admin privileges. I set Credentials Type = SQL Server Account and use the sa account (which works, verified many ways with SQL Server Management Studio). It fails with the following:

        System.InvalidOperationException: Cannot start service ReportServer on computer 'DEVDB5'. ---
        System.ComponentModel.Win32Exception: The service did not start due to a logon failure
        --- End of inner exception stack trace ---
        at System.ServiceProcess.ServiceController.Start(String[] args)
        at System.ServiceProcess.ServiceController.Start()
        at ReportServicesConfigUI.Panels.WindowsServiceIdentityPanel.StartWindowsServicePostChangeWindowsServiceIdentity(ServiceController rsService)

    What am I doing wrong? By the way, it was working fine as of yesterday; I originally set up SSRS a few days ago using the sa account.

    Read the article

  • Recordset Paging works in IIS6 but not in IIS7

    - by Spudhead
    Hi there, I've got recordset paging set up. It works fine in IIS6, but when I run the site on an IIS7 server I get the following error:

        Microsoft OLE DB Provider for SQL Server error '80004005'
        [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied.
        /orders.asp, line 197

    The code looks like this:

        Set objPagingConn = Server.CreateObject("ADODB.Connection")
        objPagingConn.Open CONN_STRING
        Set objPagingRS = Server.CreateObject("ADODB.Recordset")
        objPagingRS.PageSize = iPageSize
        objPagingRS.CacheSize = iPageSize
        objPagingRS.Open strSQL, objPagingConn, adOpenStatic, adLockReadOnly, adCmdText
        iPageCount = objPagingRS.PageCount
        iRecordCount = objPagingRS.RecordCount

    Line 197 is the objPagingConn.Open line. I've got about 10 sites like this to migrate. Is there a simple fix in IIS7? Help is greatly appreciated! Many thanks, Martin
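
    One thing worth trying before touching code, offered as an assumption rather than a known fix: the [DBNETLIB] error is a network-level failure, so the usual suspects are name resolution and protocol selection on the new box rather than the ASP itself. A connection string that forces TCP and names the server and port explicitly rules both out (server, database and credentials below are placeholders):

        <%
        ' Hypothetical connection string: force TCP and an explicit server/port
        ' instead of relying on aliases or named pipes on the new host.
        Dim CONN_STRING
        CONN_STRING = "Provider=SQLOLEDB;Data Source=tcp:dbserver,1433;" & _
                      "Initial Catalog=MyDatabase;User ID=webuser;Password=secret;"

        Set objPagingConn = Server.CreateObject("ADODB.Connection")
        objPagingConn.Open CONN_STRING
        %>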

    Read the article

  • How to avoid "The name 'ConfigurationManager' does not exist in the current context" error?

    - by 5YrsLaterDBA
    I am using VS2008. I have a project that connects to a database, and the connection string is read from App.config via ConfigurationManager. We are using L2E. Now I added a helper project, AndeDataViewer, to have a simple UI to display data from the database for testing/verification purposes. I don't want to create another set of Entity Data Models in the helper project, so I just added all related files as links in the new helper project. When I compile, I get the following error:

        Error 15 The name 'ConfigurationManager' does not exist in the current context C:\workspace\SystemSoftware\SystemSoftware\src\systeminfo\RuntimeInfo.cs 24 40 AndeDataViewer

    I think I may need to add a link from the main project to another project-settings/config-related file in the helper project? There is no App.config file in the new helper project, but it looks like I cannot add that file as a link to the helper project. Any ideas?
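
    This particular compile error usually has nothing to do with App.config: ConfigurationManager lives in the System.Configuration assembly, which projects don't reference by default, so linked source files that compiled fine in the original project fail in the new one. Adding the reference and the namespace import is normally enough; a minimal sketch (the connection string name is a stand-in for whatever the linked code reads):

        // Requires a reference to System.Configuration.dll in the helper project
        // (Add Reference -> .NET -> System.Configuration).
        using System.Configuration;

        static class Config
        {
            public static string GetConnectionString()
            {
                // "MyEntities" is hypothetical; use the name the linked code expects.
                return ConfigurationManager.ConnectionStrings["MyEntities"].ConnectionString;
            }
        }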

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - has surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers under the English Channel in a tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts - are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.

    Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to stay on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times - especially the end customer - from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adjusted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • SignalR: hub invocation request

    - by Yuriy Pogrebnyak
    I'm writing a custom SignalR client and I need to implement hub invocation. As I understood from the .NET client code, I need to send a POST request to the following URL (after establishing a connection with the server): http://serverurl/signalr/send?transport=serverSentEvents&connectionId=<my_connection_id>. In the request body I need to send a JSON string containing basic information about the invoked method. My question is: how should this JSON look? I'm trying to send something like this (again, judging by the .NET client code):

        {"data" : {"Hub" : "hubname", "Method" : "methodname", "Args" : {"message" : "msg"} } }

    But I get the following error: System.ArgumentNullException: Value cannot be null. Parameter name: s. What am I doing wrong? What are the required parameters of the JSON I send, and how should it be formatted?
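
    For comparison, and strictly as an assumption to verify against your server version: the .NET client of this era sends the payload form-encoded rather than as a raw JSON body, in a data= parameter whose envelope uses short keys (H for hub, M for method, A for a positional argument array, I for an invocation id). The ArgumentNullException with parameter name s is consistent with the server finding no data parameter to deserialize. Something along these lines:

        POST /signalr/send?transport=serverSentEvents&connectionId=<my_connection_id>
        Content-Type: application/x-www-form-urlencoded

        data={"H":"hubname","M":"methodname","A":["msg"],"I":0}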

    Read the article

  • Rails ActionMailer problems on Mac

    - by seth
    I've been working on learning to use Rails for the last couple of days and I've run into something that I haven't been able to solve with Google. I'm creating a basic contact form that sends an email. Everything seems to be working OK in testing, which tells me that the form is working and ActionMailer was implemented correctly; however, I'm having trouble configuring ActionMailer. I'm running OSX 10.6.2. I have postfix running and have verified that it's running using telnet localhost 25. When I try to use the form I get a "Connection refused" error. This is my current configuration:

        config.action_mailer.smtp_settings = {
          :address => 'localhost',
          :port => 25
        }

    I thought I might need to set :domain, but I'm kind of confused about what that should be set to in this situation.
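
    For what it's worth, :domain is only the HELO domain announced to the SMTP server, so a missing :domain would not by itself cause "Connection refused" - that failure happens before SMTP is even spoken, and points at the app and postfix disagreeing about the interface or port. Still, a sketch of the configuration with it set (values are assumptions):

        config.action_mailer.smtp_settings = {
          :address => 'localhost',
          :port    => 25,
          :domain  => 'localhost'  # HELO domain; the machine's hostname also works
        }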

    Read the article

  • Problem creating ObjectContext from different project inside solution.

    - by Levelbit
    I have two projects in my solution. One implements my business logic and has the Entity Framework entity model defined within it. When I want to work with classes defined in this project from another project, I have some problems at runtime. Actually, the most concerning thing is: why can I not instantiate my so-called TicketEntities (ObjectContext) object from other projects? I catch the following exception:

        The specified named connection is either not found in the configuration, not intended to be used with the EntityClient provider, or not valid.

    I found out it breaks at:

        public partial class TicketEntities : global::System.Data.Objects.ObjectContext
        {
            public TicketEntities() : base("name=TicketEntities", "TicketEntities")
            {
                this.OnContextCreated();
            }

    with the exception: Unable to load the specified metadata resource. Just to remind you, everything works fine from the original project.
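
    The usual cause, offered as an assumption since the config files aren't shown: the EF connection string, which embeds the metadata resource paths, lives in the app.config of the model project, but .NET reads configuration from the startup project, so the consuming executable never sees it. Copying the EntityClient connection string into the startup project's app.config normally clears both messages; the csdl/ssdl/msl names below are placeholders for whatever the designer generated:

        <connectionStrings>
          <!-- Placeholder resource names: copy the real entry from the model project's app.config -->
          <add name="TicketEntities"
               connectionString="metadata=res://*/TicketModel.csdl|res://*/TicketModel.ssdl|res://*/TicketModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=Tickets;Integrated Security=True&quot;"
               providerName="System.Data.EntityClient" />
        </connectionStrings>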

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
    Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera’s memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn’t work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That’s what happened to me, and prompted writing this article. Restore that File or Photo using Recuva The first piece of software that you’ll want to try is called Recuva, and it’s extremely easy to use—just make sure when you are installing it, that you don’t accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you’ve installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let’s start through the process of recovering those pictures you shouldn’t have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you’ll need to specify where the file was, which will obviously be up to wherever you deleted it from. Since I deleted mine from my camera’s SD card, that’s where I’m looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time. And now, you’ll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn’t stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don’t have to look through quite as many files. At this point, you can right-click on any filename, and choose to Recover it, and then save the files elsewhere on your drive. Awesome! Restore that File or Photo using DiskDigger If you don’t have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I’ve tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it’s best to have both of them in your toolkit. Note that DiskDigger doesn’t require installation, making it a really great tool to throw on your PC repair Flash drive. Start off by choosing the drive you want to recover from…   Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you’ll probably want to select the first one first. I’ve also had much better luck with the regular scan, rather than the “dig deeper” one. If you do choose the “dig deeper” one, you’ll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you’ve come up with the results, you can click on the items on the left-hand side, and see a preview on the right.  You can select one or more files, and choose to restore them. It’s pretty simple! Download DiskDigger from dmitrybrant.com Download Recuva from piriform.com Good luck recovering your deleted files! 
    And keep in mind, DiskDigger is a totally free donationware software from a single, helpful guy… so if his software helps you recover a photo you never thought you’d see again, you might want to think about throwing him a dollar or two.

    Read the article

  • Relating NP-Complete problems to real world problems

    - by terru
    I have a decent grasp of NP Complete problems; that's not the issue. What I don't have is a good sense of where they turn up in "real" programming. Some (like knapsack and traveling salesman) are obvious, but others don't seem obviously connected to "real" problems. I've had the experience several times of struggling with a difficult problem only to realize it is a well known NP Complete problem that has been researched extensively. If I had recognized the connection more quickly I could have saved quite a bit of time researching existing solutions to my specific problem. Are there any resources (online or print) that specifically connect NP Complete to real world instances?

    Read the article

  • Exception with Subsonic 2.2, SQLite and Migrations

    - by Holger Amann
    Hi, I'm playing with Migrations and created a simple migration like:

        public class Migration001 : Migration
        {
            public override void Up()
            {
                TableSchema.Table testTable = CreateTableWithKey("TestTable");
            }

            public override void Down()
            {
            }
        }

    After executing sonic.exe migrate I get the following output:

        Setting ConfigPath: 'App.config'
        Building configuration from c:\tmp\MigrationTest\MigrationTest\App.config
        Adding connection to SQLiteProvider
        Found 1 migration files
        Current DB Version is 0
        Migrating to 001_Init (1)
        There was an error running migration (001_Init):
        SQLite error near "IDENTITY": syntax error
        Stack Trace:
        at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
        at System.RuntimeMethodHandle.InvokeMethodFast(Object target, Object[] arguments, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
        at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
        at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
        at SubSonic.CodeRunner.RunAndExecute(ICodeLanguage lang, String sourceCode, String methodName, Object[] parameters) in D:\@SubSonic\SubSonic\SubSonic.Migrations\CodeRunner.cs:line 95
        at SubSonic.Migrations.Migrator.ExecuteMigrationCode(String migrationFile) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 177
        at SubSonic.Migrations.Migrator.Migrate() in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 141

    Any hints?
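
    The error points at the generated DDL rather than at the migration code: SQLite has no IDENTITY keyword, so the SQL Server-style column definition the provider appears to emit fails to parse. For comparison (plain SQL, not actual SubSonic output), the same auto-increment key in the two dialects:

        -- SQL Server dialect (roughly what the migration appears to generate):
        CREATE TABLE TestTable (Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY);

        -- SQLite dialect (what the engine actually accepts):
        CREATE TABLE TestTable (Id INTEGER PRIMARY KEY AUTOINCREMENT);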

    Read the article

  • SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject, and the matter of fact is that there is very little information available on it online; not everyone knows everything about it. This is why, when a very common question related to AlwaysOn comes up, people get confused. In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers, related to planned and unplanned availability group failovers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words. Whenever a disaster occurs it will be a stressful scenario, regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology, or the first time you are going through a disaster without a proper run book. Today, we're going to help you establish a run book for creating a planned failover with availability groups. To keep today's session simple, we're going to use two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover. We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two-replica availability group where each replica is in another location; therefore, we will be covering asynchronous commit (non-automatic failover). The following is a breakdown of the availability group utilized today. Seeing the following screen might be scary the first time you come across an unplanned failover. It looks like the test database used in this availability group is not functional, and currently it isn't. The database status is "not synchronizing", which makes sense because the primary replica went down, so it couldn't synchronize. With that said, we can still fail over and make it functional while we troubleshoot why we lost our primary replica. To start, we are going to right-click on the availability group that needs to be restarted and select Failover. This brings up the following wizard, which walks you through the several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS). You are going to see warning messages, simply because we are in asynchronous commit mode and cannot guarantee 'no data loss' when we do fail over. Just in case you missed it, you get another screen warning you about potential data loss because we are in asynchronous mode. Next, we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case we only have two replicas, so this is trivial. In order to fail over, it's required to connect to the replica that will become primary. The following screen shows that the connection has been made successfully. Next, you will see the final summary screen. Once again, this reminds you that the failover action will cause data loss, as we're using asynchronous commit mode due to the distance between the instances used for disaster recovery. Finally, once the failover is completed you will see the following screen. If you followed along this far, you might be wondering what T-SQL scripts are generated by clicking through all the sections of the wizard. If you have used Database Mirroring in the past you might be surprised.
    It's not too different, which makes sense, because the data is being replicated via SQL Server endpoints just like good old database mirroring. Now we're going to take a look at how to do a failover with just T-SQL. First, we need to open a new query window and run our query in SQLCMD mode; just in case you haven't used SQLCMD mode before, we show you how to enable it below. Now you can run the following statement. Notice that we connect to the replica we want to become primary after failover, and specify to force failover allowing data loss. We can use the same script, pointed at the original instance, to fail back when our primary comes back online.

        -- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
        :Connect SQL2012PROD1
        ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
        GO

    Are your servers running at optimal speed or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Invalid XML in persistence.xml : Init method

    - by James.Elsey
    I'm getting the following error when I try to startup my application on google app engine: Failed startup of context com.google.apphosting.utils.jetty.RuntimeAppEngineWebAppContext@8fcc7b{/,/base/data/home/apps/sales-tracker/3.340980411948080671} org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'clientDao' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Cannot resolve reference to bean 'entityManagerFactory' while setting bean property 'entityManagerFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: Invalid XML in persistence unit from URL [file:/base/data/home/apps/sales-tracker/3.340980411948080671/WEB-INF/classes/META-INF/persistence.xml] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:275) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:104) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1245) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:548) at org.mortbay.jetty.servlet.Context.startContext(Context.java:136) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517) at 
org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.createHandler(AppVersionHandlerMap.java:191) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.getHandler(AppVersionHandlerMap.java:168) at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:123) at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:243) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5485) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5483) at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24) at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:398) at com.google.net.rpc.impl.Server$2.run(Server.java:852) at com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56) at com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:536) at com.google.net.rpc.impl.Server.startRpc(Server.java:807) at com.google.net.rpc.impl.Server.processRequest(Server.java:369) at com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:442) at com.google.net.rpc.impl.RpcConnection.parseMessages(RpcConnection.java:319) at com.google.net.rpc.impl.RpcConnection.dataReceived(RpcConnection.java:290) at com.google.net.async.Connection.handleReadEvent(Connection.java:474) at com.google.net.async.EventDispatcher.processNetworkEvents(EventDispatcher.java:831) at com.google.net.async.EventDispatcher.internalLoop(EventDispatcher.java:207) at com.google.net.async.EventDispatcher.loop(EventDispatcher.java:103) at com.google.net.rpc.RpcService.runUntilServerShutdown(RpcService.java:251) at com.google.apphosting.runtime.JavaRuntime$RpcRunnable.run(JavaRuntime.java:404) at java.lang.Thread.run(Unknown Source) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: Invalid XML in persistence unit from URL [file:/base/data/home/apps/sales-tracker/3.340980411948080671/WEB-INF/classes/META-INF/persistence.xml] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1338) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at 
    org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:269) ... 47 more Caused by: java.lang.IllegalArgumentException: Invalid XML in persistence unit from URL [file:/base/data/home/apps/sales-tracker/3.340980411948080671/WEB-INF/classes/META-INF/persistence.xml] at org.springframework.orm.jpa.persistenceunit.PersistenceUnitReader.readPersistenceUnitInfos(PersistenceUnitReader.java:151) at org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager.readPersistenceUnitInfos(DefaultPersistenceUnitManager.java:303) at org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager.preparePersistenceUnitInfos(DefaultPersistenceUnitManager.java:275) at org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager.afterPropertiesSet(DefaultPersistenceUnitManager.java:260) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:192) at org.springframework.orm.jpa.AbstractEnti

    My persistence.xml looks as follows, taken from the GAE documentation:

        <?xml version="1.0" encoding="UTF-8" ?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"version="1.0">
            <persistence-unit name="transactions-optional">
                <provider>org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider</provider>
                <properties>
                    <property name="datanucleus.NontransactionalRead" value="true" />
                    <property name="datanucleus.NontransactionalWrite" value="true" />
                    <property name="datanucleus.ConnectionURL" value="appengine" />
                </properties>
            </persistence-unit>
        </persistence>

    Is there something wrong with my persistence file? Or could my errors be caused elsewhere? Can someone please give me some pointers? Thanks
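
    One concrete thing an XML validator would flag, assuming the file really is byte-for-byte as pasted above: there is no whitespace between the closing quote of the schemaLocation attribute and version="1.0", which makes the document ill-formed and would explain the "Invalid XML in persistence unit" message. The opening tag should read:

        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                         http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                     version="1.0">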

    Read the article

  • Get corresponding physical disk drives of mountpoints with WMI queries?

    - by Thomas
    Is there a way to retrieve the connection between a mountpoint (a volume which is mounted into the file system instead of to a drive letter) and the physical disk drive(s) it belongs to, using WMI? For example, I have a volume mountpoint on a W2K8 server which is mounted to "C:\Data\", and the mountpoint is spread over physical disk drives 2, 4 and 5 of the server (the Data Management view of the Server Manager shows that), but I cannot find a way to discover this using WMI. Volumes which have a drive letter can be connected with the WMI classes Win32_DiskDrive -- Win32_DiskDriveToDiskPartition -- Win32_DiskPartition -- Win32_LogicalDiskToPartition -- Win32_LogicalDisk, but the problem is that volume mountpoints aren't listed in the class Win32_LogicalDisk; they are only listed in Win32_Volume. And I did not find a way to connect the class Win32_Volume with the class Win32_DiskDrive; some linking classes are missing. Does anyone know a solution?

    Read the article

  • Websockify to wrap a forking server

    - by Gurjeet Singh
    I came across Websockify [1] and the accompanying WebSock client-side JavaScript library. AIUI from the "Wrap a Program" section in the README, Websockify can help you launch a TCP server and rebind its port so that incoming WebSockets-based communication is parsed and forwarded to the server on the proper (rebound) port. My question is: can this mechanism be used to wrap a server that forks children which, in turn, communicate with the client on a different port? Specifically, I am interested in websockifying a Postgres server, which typically listens on port 5432 and, for each new incoming connection, forks a child which serves all future requests from that client. (If it helps, Oracle RDBMS and many other servers, RDBMS or not, also use a similar method.) [1] https://github.com/kanaka/websockify

    Read the article

  • How do I set the jax-ws client request timeout programmatically on jboss?

    - by Jonas Andersson
    I am trying to set the request (and connection) timeout for a JAX-WS webservice client generated with the jaxws-maven-plugin. When running my app under Tomcat or Jetty the timeout works, but when deployed under JBoss it doesn't "take":

        private void setRequestAndConnectionTimeout(Object wsPort) {
            String REQUEST_TIMEOUT = BindingProviderProperties.REQUEST_TIMEOUT; // "com.sun.xml.ws.request.timeout"
            ((BindingProvider) wsPort).getRequestContext().put(REQUEST_TIMEOUT, timeoutInMillisecs);
            ((BindingProvider) wsPort).getRequestContext().put(JAXWSProperties.CONNECT_TIMEOUT, timeoutInMillisecs);
        }

    What is the correct way to do this for JBoss?
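
    A plausible explanation, offered as an assumption: under JBoss the client stub comes from the JBossWS stack rather than the Sun/Metro runtime, so the com.sun.xml.ws.* context keys are silently ignored. JBossWS Native reads its own property, so setting both keeps the code portable; a sketch using string literals to avoid compile-time dependencies on either stack (verify the JBoss key against your JBossWS version):

        import java.util.Map;
        import javax.xml.ws.BindingProvider;

        final class WsTimeouts {
            static void apply(Object wsPort, int timeoutMillis) {
                Map<String, Object> ctx = ((BindingProvider) wsPort).getRequestContext();
                // Metro / JAX-WS RI keys (these are what work under Tomcat/Jetty):
                ctx.put("com.sun.xml.ws.request.timeout", timeoutMillis);
                ctx.put("com.sun.xml.ws.connect.timeout", timeoutMillis);
                // JBossWS Native key (assumption: org.jboss.ws.timeout, value as a String):
                ctx.put("org.jboss.ws.timeout", String.valueOf(timeoutMillis));
            }
        }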

    Read the article

  • Optimizing collision engine bottleneck

    - by Vittorio Romeo
    Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. I, however, for fun and educational purposes, would love to find a way to make the engine even faster. I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed. Here's a very basic diagram of its architecture: Basically, the main class is World, which owns (manages the memory of) a ResolverBase*, a SpatialBase* and a vector<Body*>. SpatialBase is a pure virtual class which deals with broad-phase collision detection. ResolverBase is a pure virtual class which deals with collision resolution. The bodies communicate with the World::SpatialBase* through SpatialInfo objects, owned by the bodies themselves. There currently is one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has its own info class, GridInfo : SpatialInfo. Here's how its architecture looks: The Grid class owns a 2D array of Cell*. The Cell class contains two collections of (not owned) Body*: a vector<Body*> which contains all the bodies that are in the cell, and a map<int, vector<Body*>> which contains all the bodies that are in the cell, divided into groups. Bodies, in fact, have a groupId int that is used for collision groups. GridInfo objects also contain non-owning pointers to the cells the body is in. As I previously said, the engine is based on groups. Body::getGroups() returns a vector<int> of all the groups the body is part of. Body::getGroupsToCheck() returns a vector<int> of all the groups the body has to check collisions against. Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells. After the bodies move, collision detection happens. We assume that all bodies are axis-aligned bounding boxes. How broad-phase collision detection works:

    Part 1: spatial info update. For each Body body: the top-leftmost and bottom-rightmost occupied cells are calculated. If they differ from the previous cells, body.gridInfo.cells is cleared and filled with all the cells the body occupies (a 2D for loop from the top-leftmost cell to the bottom-rightmost cell). body is now guaranteed to know what cells it occupies. For a performance boost, it stores a pointer to every map<int, vector<Body*>> of every cell it occupies, where the int is a group of body->getGroupsToCheck(). These pointers get stored in gridInfo->queries, which is simply a vector<map<int, vector<Body*>>*>. body is now guaranteed to have a pointer to every vector<Body*> of bodies of groups it needs to check collisions against.

    Part 2: actual collision checks. For each Body body: body clears and fills a vector<Body*> bodiesToCheck, which contains all the bodies it needs to check against. Duplicates are avoided (bodies can belong to more than one group) by checking whether bodiesToCheck already contains the body we're trying to add.

        const vector<Body*>& GridInfo::getBodiesToCheck()
        {
            bodiesToCheck.clear();
            for(const auto& q : queries)
                for(const auto& b : *q)
                    if(!contains(bodiesToCheck, b))
                        bodiesToCheck.push_back(b);
            return bodiesToCheck;
        }

    The GridInfo::getBodiesToCheck() method IS THE BOTTLENECK. The bodiesToCheck vector must be filled on every body update, because bodies could have moved in the meantime. It also needs to prevent duplicate collision checks. The contains function simply checks whether the vector already contains a body, using std::find.
    Collision is checked and resolved for every body in bodiesToCheck. That's it. So, I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something other than the current architecture/setup, something doesn't go as planned, or I make assumptions about the simulation that are later proven false. My question is: how can I optimize the broad phase of my collision engine while maintaining the grouped-bodies approach? Is there some kind of magic C++ optimization that can be applied here? Can the architecture be redesigned in order to allow for more performance? Actual implementation: SSVSCollsion: Body.h, Body.cpp, World.h, World.cpp, Grid.h, Grid.cpp, Cell.h, Cell.cpp, GridInfo.h, GridInfo.cpp
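
    One change that attacks the measured hot spot directly, sketched under the assumption that a Body can carry one extra integer: replace the std::find-based contains with a per-query stamp, so duplicate elimination costs O(1) per candidate instead of a linear scan of bodiesToCheck, turning the whole gather from quadratic to linear in the number of candidates:

        // Hypothetical additions: Body::lastQueryStamp (initialized to 0) and a
        // counter shared by all queries; everything else mirrors the original loop.
        static unsigned long long currentQueryStamp = 0;

        const vector<Body*>& GridInfo::getBodiesToCheck()
        {
            bodiesToCheck.clear();
            ++currentQueryStamp; // fresh stamp: no body has been seen in this query yet
            for(const auto& q : queries)
                for(const auto& b : *q)
                    if(b->lastQueryStamp != currentQueryStamp) // O(1) duplicate check
                    {
                        b->lastQueryStamp = currentQueryStamp;
                        bodiesToCheck.push_back(b);
                    }
            return bodiesToCheck;
        }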

    Read the article

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy the packages to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the 'traditional' approaches can't. I have been working for a few years with applications that run packages from host applications implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with a fairly rigid architecture and processing model, with fixed timeframes when the packages are executed to process data. It doesn't have to be the case; you don't have to limit yourself to a batch-oriented architecture. I have very good experiences with service-oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various triggers - HTTP calls, file drops, time schedules, Tibco messages and others - to run the packages. You can implement an event handler that triggers execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand. I thought it might be a good idea to implement a restful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for ease of debugging and running; in reality, you might want to implement it as a Windows service. To begin, you have to reference the namespace System.ServiceModel.Web and then add a few lines of code:

        Uri baseAddress = new Uri("http://localhost:8011/");
        WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

        try
        {
            svcHost.Open();

            Console.WriteLine("Service is running");
            Console.WriteLine("Press enter to stop the service.");
            Console.ReadLine();

            svcHost.Close();
        }
        catch (CommunicationException cex)
        {
            Console.WriteLine("An exception occurred: {0}", cex.Message);
            svcHost.Abort();
        }

    The interesting calls are the WebServiceHost constructor, svcHost.Open() and svcHost.Close(): you create a WebServiceHost object, start listening on the defined URL, and eventually shut the service down. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares the methods to be used by the host. The interface itself must be decorated with the ServiceContract attribute.

        [ServiceContract]
        public interface IPackRunner
        {
            [OperationContract]
            [WebGet(UriTemplate = "runpack?package={name}")]
            string RunPackage1(string name);

            [OperationContract]
            [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
            string RunPackage2(string name, int rows);
        }

    Each method that is going to be used by WebServiceHost has to have the OperationContract attribute, as well as a WebGet or WebInvoke attribute. A detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive method names. Then, you have to provide the implementation of the interface:

        public class PackRunner : IPackRunner
        {
            ...

    There are two methods defined in this class. Since the full code is attached to the post, I will show only the more interesting method, RunPackage2:

        /// <summary>
        /// Runs package and sets some of its variables.
        /// </summary>
        /// <param name="name">Name of the package</param>
        /// <param name="rows">Number of rows to export</param>
        /// <returns></returns>
        public string RunPackage2(string name, int rows)
        {
            try
            {
                string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];

                pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

                Console.WriteLine();
                Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                pkg.Variables["User::ExportRows"].Value = rows;
                DTSExecResult pkgResults = pkg.Execute();
                Console.WriteLine();
                Console.WriteLine(pkgResults.ToString());
                if (pkgResults == DTSExecResult.Failure)
                {
                    Console.WriteLine();
                    Console.WriteLine("Errors occured during execution of the package:");
                    foreach (DtsError er in pkg.Errors)
                        Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                    Console.WriteLine();
                    return "Errors occured during execution. Contact your support.";
                }

                Console.WriteLine();
                Console.WriteLine();
                return "OK";
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return ex.ToString();
            }
        }

    The method accepts the package name and the number of rows to export. The packages are deployed to the file system, and the path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder "C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies". Once you have done this, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application, as in the method above. Then an instance of Microsoft.SqlServer.Dts.Runtime.Package is created by the call to LoadPackage, which in its simplest form just takes the package file name as the first parameter. Before you run the package, you can set its variables to certain values.
    This is a great way of configuring your packages without all the hassle with dtsConfig files. In the code sample above, the variable "User::ExportRows" is set to the value of the parameter "rows" of the method. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can also change its connection strings, logging, events and much more. I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then just type (or copy and paste) the command below into your browser:

        http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12

    When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in XML. I stopped the database service to simulate an invalid connection string situation; the output of the request is different then, and the service console window shows more information. As you see, implementing a service-oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked with we also have resource monitoring and execution control: we don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server, and we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner; it has to be executed with administrative privileges, as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
    Author: Moazzam Chaudry. Continuing from Friday's blog on the 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details around the customers. It was a tremendous honor to be in a single room of winners; we only wish we could have had more time to share stories from all of them. We received great insight from all the innovative solutions that our customers deploy, and we would like to share them broadly so that others can benefit from best practices. There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed: Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details around their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings. Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities; this helps them compete, collaborate and compare athletic data over time. Lastly, Ashley Doodly, Head of IT from Motability, shared details around their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. tens of minutes, and improvement in throughput of up to 50%.

    This year's winners by category are:

    Oracle Exalogic Customer Results using Fusion Middleware
    - Netshoes: ATG on Exalogic; 6X reduced H/W footprint, 6.2X increased throughput and 3 weeks time to market
    - Claro: part of America Movil, running a mission-critical Java application on Exalogic with 35X faster Java response time, 5X throughput
    - Underwriters Laboratories: Exalogic as an apps consolidation platform to power tremendous growth
    - Ingersoll Rand: EBS on Exalogic; up to 40% reduction in overall IT budget, 3X reduced footprint

    Oracle Cloud Application Foundation Customer Results using Fusion Middleware
    - Mazda Motor Corporation: Tuxedo ART batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost
    - HOTELBEDS Technology: open source to WebLogic transformation
    - Globalia Corporation: introduced Oracle Coherence to fully reengineer the DTH system and provide multiple business and technical benefits
    - Nike: Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform
    - Comcast Corporation: the solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible

    Oracle SOA and Oracle BPM Customer Results using Fusion Middleware
    - NTT Docomo: network traffic solution based on Oracle Event Processing and Coherence; massive in scale: 12M users (50M in future), 800,000 events/sec
    - Schneider National, Inc.: SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS; the platform runs 60M transactions/day and 50 million composite SOA instances per day across 10g and 11g
    - Amadeus: Oracle BPM solution; business rules and processes vary across local (80), regional (~10) and corporate approval processes, with up to 10 levels of approval; plans to deploy across 20+ markets
    - Navitar: SOA solution integrating a fully non-Oracle legacy application/ERP environment using Oracle's SOA Suite and Oracle AIA Foundation Pack
    - Motability: uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process

    Oracle WebCenter Customer Results using Fusion Middleware
    - News Limited: single platform running websites for 50% of Australia's newspapers
    - University of Louisville: "Facebook for Medicine"; Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues, with an expected annualized ROI of 277%
    - China Mobile Jiangsu: company portal (25k users) to drive collaboration and productivity
    - Life Technologies: portal for remotely monitoring and repairing biotech instruments
    - LA Dept. of Water & Power: Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users

    Oracle Identity Management Customer Results using Fusion Middleware
    - Education Testing Service: identity management platform for provisioning and SSO of 6 million GRE, GMAT and TOEFL customers
    - Avea: Oracle Identity Manager allowing call center personnel to quickly change identity profiles to handle varying call loads, based on a user self-service interface; decreased admin cost by 30%

    Oracle Data Integration Customer Results using Fusion Middleware
    - Raymond James: near real-time integration for improved systems (throughput and performance) and enhanced operational flexibility in a 24x7 environment
    - Wm Morrison Supermarkets: electronic point-of-sale integration handling over 80 million transactions a day in near real time (15-minute intervals)

    Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware
    - Qualcomm Incorporated: solution providing immediate business value, enabling the self-service model necessary for growing the new customer base, an increase in customer satisfaction and reduced time-to-deliver
    - Micros Systems, Inc.: ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel website, Expedia, etc.
    - Marfin Egnatia Bank: a new Web 2.0 UI provides a much richer experience through the ADF solution, with the end result of boosting end-user productivity

    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware
    - INC Research: self-service customer portal delivering 5-10% of overall revenue, expected to grow fast with the BI solution
    - Experian: reduction in time to complete the financial close process
    - Hologic Inc: solution saving months of decision-making uncertainty

    We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013. Additional information: Blog: Oracle WebCenter Award Winners; Blog: Oracle Identity Management Winners; Blog: Oracle Exalogic Winners; Blog: SOA, BPM and Data Integration will feature award winners in their respective areas this week. Subscribe to our regular Fusion Middleware Newsletter. Follow us on Twitter and Facebook.

    Read the article

  • JDBC resultset close

    - by KM
    I am doing profiling of my Java application and found some interesting statistics for a JDBC PreparedStatement call. Given below are the environment details:

        Database: Sybase SQL Anywhere 10.0.1
        Driver: com.sybase.jdbc3.jdbc.SybDriver
        Connection pool: c3p0
        JRE: 1.6.0_05

    The code in question is given below:

        try {
            ps = conn.prepareStatement(sql);
            ps.setDouble(...);
            rs = ps.executeQuery();
            ......
            return xyz;
        } finally {
            try {
                if (rs != null) rs.close();
                if (ps != null) ps.close();
            } catch (SQLException sqlEx) {
            }
        }

    From the JProfiler stats, I find that this particular resultset.close() statement alone takes a large amount of time. It varies from 25 ms to 320 s, while for other code blocks which are identical in nature I find that it takes close to 20 microseconds. Just to be sure, I ran this performance test multiple times and confirmed this data. I am puzzled by this behaviour. Thoughts?

    Read the article

  • Python urllib2 Basic Auth Problem

    - by Simon
    I'm having a problem sending basic auth over urllib2. I took a look at this article, and followed the example. My code:

        passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
        passman.add_password(None, "api.foursquare.com", username, password)
        urllib2.install_opener(urllib2.build_opener(urllib2.HTTPBasicAuthHandler(passman)))
        req = urllib2.Request("http://api.foursquare.com/v1/user")
        f = urllib2.urlopen(req)
        data = f.read()

    I'm seeing the following on the wire via Wireshark:

        GET /v1/user HTTP/1.1
        Host: api.foursquare.com
        Connection: close
        Accept-Encoding: gzip
        User-Agent: Python-urllib/2.5

    You can see the Authorization header is not sent, vs. when I send a request via curl -u user:password http://api.foursquare.com/v1/user:

        GET /v1/user HTTP/1.1
        Authorization: Basic =SNIP=
        User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8k zlib/1.2.3
        Host: api.foursquare.com
        Accept: */*

    For some reason my code seems not to send the authentication. Anyone see what I'm missing? thanks -simon
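
    One detail worth knowing here, as a likely explanation: urllib2's HTTPBasicAuthHandler only sends credentials in response to a 401 challenge from the server, so if the endpoint returns something else (or expects credentials preemptively, the way curl -u sends them), no Authorization header ever goes out. A sketch of the preemptive workaround, building the header by hand:

        import base64
        import urllib2

        username, password = "user", "password"  # placeholders

        req = urllib2.Request("http://api.foursquare.com/v1/user")
        # Send the credentials up front instead of waiting for a 401 challenge,
        # mirroring what curl -u does.
        auth = base64.b64encode("%s:%s" % (username, password))
        req.add_header("Authorization", "Basic %s" % auth)
        f = urllib2.urlopen(req)
        data = f.read()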

    Read the article

  • Why can't I download a whole image file with urllib2.urlopen()

    - by John Gann
    When I run the following code, it only seems to download the first little bit of the file and then exit. Occasionally I will get a 10054 error, but usually it just exits without getting the whole file. My internet connection is crappy wireless, and I often get broken downloads on larger files in Firefox, but my browser has no problem getting a 200k image file. I'm new to Python, and programming in general, so I'm wondering what nuance I'm missing.

        import urllib2
        xkcdpic = urllib2.urlopen("http://imgs.xkcd.com/comics/literally.png")
        xkcdpicfile = open("C:\\Documents and Settings\\John Gann\\Desktop\\xkcd.png", "w")
        while 1:
            chunk = xkcdpic.read(4028)
            if chunk:
                print chunk
                xkcdpicfile.write(chunk)
            else:
                break
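
    Two things stand out, offered as likely causes rather than certainties: the output file is opened in text mode ("w"), which on Windows translates line-ending bytes and corrupts binary data such as a PNG, and printing each raw chunk to the console is unnecessary. Opening in binary mode and closing the handles gives a faithful copy:

        import urllib2

        xkcdpic = urllib2.urlopen("http://imgs.xkcd.com/comics/literally.png")
        # "wb" is the important part: binary mode prevents newline translation
        # from mangling the image bytes on Windows.
        xkcdpicfile = open("C:\\Documents and Settings\\John Gann\\Desktop\\xkcd.png", "wb")
        while True:
            chunk = xkcdpic.read(4096)
            if not chunk:
                break
            xkcdpicfile.write(chunk)
        xkcdpicfile.close()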

    Read the article
