Search Results


  • Parsing unicode XML with Python SAX on App Engine

    - by Derek Dahmer
    I'm using xml.sax with unicode strings of XML as input, originally entered from a web form. On my local machine (Python 2.5, using the default expat xmlreader, running through App Engine), it works fine. However, the exact same code and input strings fail on the production App Engine servers with "not well-formed". For example, it happens with the code below:

        from xml import sax

        class MyHandler(sax.ContentHandler):
            pass

        handler = MyHandler()

        # Both of these unicode strings return 'not well-formed'
        # on App Engine, but work locally
        sax.parseString(u"<a>b</a>", handler)
        sax.parseString(u"<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

        # Both of these work, but output unicode
        sax.parseString("<a>b</a>", handler)
        sax.parseString("<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

    resulting in the error:

        File "<string>", line 1, in <module>
        File "/base/python_dist/lib/python2.5/xml/sax/__init__.py", line 49, in parseString
          parser.parse(inpsrc)
        File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 107, in parse
          xmlreader.IncrementalParser.parse(self, source)
        File "/base/python_dist/lib/python2.5/xml/sax/xmlreader.py", line 123, in parse
          self.feed(buffer)
        File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 211, in feed
          self._err_handler.fatalError(exc)
        File "/base/python_dist/lib/python2.5/xml/sax/handler.py", line 38, in fatalError
          raise exception
        SAXParseException: <unknown>:1:1: not well-formed (invalid token)

    Any reason why App Engine's parser, which also uses Python 2.5 and expat, would fail when given unicode input?
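
    One likely workaround, given that expat ultimately wants byte strings: encode the unicode input to UTF-8 before handing it to the parser. A minimal sketch (the helper name is ours, not from the original post):

        from xml import sax

        def parse_unicode(xml_text, handler):
            # expat feeds on byte strings; encoding explicitly makes the
            # local and production environments see the same input
            sax.parseString(xml_text.encode('utf-8'), handler)

        parse_unicode(u"<a>b</a>", MyHandler())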

  • Multiple database connection in Rails

    - by Sanal
    I'm using active_delegate for multiple connections in Rails. I'm using MySQL as the master database for some models, and PostgreSQL for the others. The problem is that when I try to access the MySQL models, I get the error below. The stack trace shows that Rails is still using the PostgreSQL adapter to access my MySQL models!

        RuntimeError: ERROR C42P01 Mrelation "categories" does not exist
        P15 F.\src\backend\parser\parse_relation.c L886 RparserOpenTable:
        SELECT * FROM "categories"

        STACKTRACE
        ===========
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:507:in `execute'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:985:in `select_raw'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:972:in `select'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:81:in `cache_sql'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:661:in `find_by_sql'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1553:in `find_every'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:615:in `find'
        D:/ROR/Aptana/dedomenon/app/models/category.rb:50:in `get_all_with_exclusive_scope'
        D:/ROR/Aptana/dedomenon/app/models/category.rb:50:in `get_all_with_exclusive_scope'
        D:/ROR/Aptana/dedomenon/app/controllers/categories_controller.rb:48:in `index'

    Here is my database.yml file:

        postgre: &postgre
          adapter: postgresql
          database: codex
          host: localhost
          username: postgres
          password: root
          port: 5432

        mysql: &mysql
          adapter: mysql
          database: project
          host: localhost
          username: root
          password: root
          port: 3306

        development:
          <<: *postgre
        test:
          <<: *postgre
        production:
          <<: *postgre
        master_database:
          <<: *mysql

    and my master_database model looks like this:

        class Category < ActiveRecord::Base
          delegates_connection_to :master_database, :on => [:create, :save, :destroy]
        end

    Does anyone have a solution?
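
    One thing that stands out: the :on list only covers writes (:create, :save, :destroy), so finds would still go through the default PostgreSQL connection. As a hedged alternative, plain ActiveRecord can bind the model, reads included, to the named configuration:

        class Category < ActiveRecord::Base
          # routes every query for this model to the master_database config block
          establish_connection :master_database
        end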

  • Weblogic 10.3, JDBC, Oracle, SQL - Table or View does not exist

    - by shelfoo
    Hi, I've got a really odd issue that I've had no success googling for. It started happening with no changes to the DB, connection settings, code, etc. The problem is that when a servlet is accessed, one of the EJBs makes a direct SQL call, something very close to:

        select
          value,
          other_value
        from
          some_table
        where some_condition = ?

    That's obviously not the exact SQL, but pretty close. For some reason, this started returning the error "ORA-00942: table or view does not exist". The table exists, and the kicker is that if I hook in a debugger, change a space or something equally minor in the query (not changing the query itself), and hot-deploy the change, it works fine. This isn't the first time I've run across this. It only seems to happen in dev environments (I haven't seen it in QA, sandbox, or production yet), it is not always reproducible, and it is driving me seriously insane. By "not always reproducible" I mean that a clean build and redeploy will sometimes fix the problem, but not always. It's not always the same table (although once the error occurs it sticks with the same query). Just throwing a feeler out there to see if anybody has run into issues like this before, and what they may have discovered to fix it.
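
    A hedged thing to rule out: intermittent ORA-00942 with pooled connections is often a schema-resolution problem, where the unqualified table name resolves against whatever schema the pooled session happens to point at. Two sketches (the schema name here is hypothetical):

        select value, other_value
        from app_owner.some_table
        where some_condition = ?

    or, once per connection checkout:

        ALTER SESSION SET CURRENT_SCHEMA = app_owner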

  • IE Security Warning with widgets

    - by superexsl
    Hey, I'm creating an ASP.NET application which uses Facebook Connect and fbml tags. It also uses the LinkedIn widget. When I run this app in any other browser, there are no warnings and everything works. In IE, however, a message like this comes up:

        Security Warning:
        The current webpage is trying to open a site in your Trusted sites list.
        Do you want to allow this?
        Current site: http://www.facebook.com
        Trusted site: http://localhost

    (and the same for LinkedIn.com). I know how to fix this from a client perspective and stop the security warning showing up. However, is it possible to ensure this message doesn't come up at all? It could be off-putting for users who don't know how to suppress the warning. I haven't tried uploading the app to my web host, so I'm not sure whether the message will appear for everyone in production, but I always get it on my local machine. (None of my pages use SSL, so I don't think that's the issue. I tried using FB's HTTPS URLs, but that didn't make a difference.) Thanks

  • certificate issues running app in windows 7 ?

    - by Jurjen
    Hi, I'm having some problems with my app. I'm using the 'org.mentalis.security' assembly to create a certificate object from a 'pfx' file. This is the line of code where the exception occurs:

        Certificate cert = Certificate.CreateFromPfxFile(publicKey, certificatePassword);

    This has always worked, and still does in production, but for some reason it throws an exception when run on Windows 7 (tried it on 2 machines):

        CertificateException: Unable to import the PFX file! [error code = -2146893792]

    I can't find much on this message via Google, but when checking the Event Viewer I get an 'Audit Failure' every time the exception occurs:

        Event ID = 5061
        Source = Microsoft Windows Security
        Task Category = System Integrity
        Keywords = Audit Failure

        Cryptographic operation.
        Subject:
          Security ID:    NT AUTHORITY\IUSR
          Account Name:   IUSR
          Account Domain: NT AUTHORITY
          Logon ID:       0x3e3
        Cryptographic Parameters:
          Provider Name:  Microsoft Software Key Storage Provider
          Algorithm Name: Not Available.
          Key Name:       VriendelijkeNaam
          Key Type:       User key.
        Cryptographic Operation:
          Operation:      Open Key.
          Return Code:    0x2

    I'm not sure why this isn't working on Windows 7; I never had problems running this on Vista. I am running VS2008 as administrator, but I guess the ASP.NET user may not have sufficient rights or something. It's also pretty strange that the 'Algorithm Name' is 'Not Available'. Can anyone help me with this... TIA, Jurjen de Groot
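
    A hedged suggestion, assuming the .NET Framework's own certificate class is an acceptable substitute for the Mentalis one: PFX imports under IIS often fail because the private key is written to the user-profile key store, which the IUSR/application-pool identity can't open. Loading into the machine store usually avoids that:

        using System.Security.Cryptography.X509Certificates;

        // pfxPath is hypothetical; MachineKeySet keeps the key out of the user profile
        var cert = new X509Certificate2(
            pfxPath,
            certificatePassword,
            X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);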

  • Eclipse - How can I change a 'Project Facet' from Tomcat 6 to Tomcat 5.5?

    - by pcimring
    (Eclipse 3.4, Ganymede) I have an existing Dynamic Web Application project in Eclipse. When I created the project, I specified 'Default configuration for Apache Tomcat v6' under the 'Configuration' drop-down. It's a month or two down the line, and I would now like to change the configuration to Tomcat v5.5 (the version of Tomcat on the production server). I have tried the following steps, without success:

    1. I selected Targeted Runtimes under the Project Properties. The Tomcat v5.5 option was disabled, and the UI displayed this message: "If the runtime you want to select is not displayed or is disabled you may need to uninstall one or more of the currently installed project facets."
    2. I then clicked on the Uninstall Facets... link. Under the Runtimes tab, only Tomcat 6 was displayed.
    3. For Dynamic Web Module, I selected version 2.4 in place of 2.5. Under the Runtimes tab, Tomcat 5.5 was now displayed. However, the UI now displayed this message: "Cannot change version of project facet Dynamic Web Module to 2.4." The Finish button was disabled, so I reached a dead end.

    I CAN successfully create a NEW project with a Tomcat v5.5 configuration. For some reason, though, it will not let me 'downgrade' an existing project. As a work-around, I created a new project and copied the source files over from the old one. Nonetheless, the work-around was fairly painful and somewhat clumsy. Can anyone explain how I can downgrade the project configuration from Tomcat 6 to Tomcat 5.5, or perhaps shed some light on why this happens? Thanks, Pete
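
    A hedged workaround that often unsticks this (back up the project first): with the project closed, edit the facet metadata by hand in .settings/org.eclipse.wst.common.project.facet.core.xml, dropping jst.web to 2.4 and repointing the runtime, then reopen and refresh. Something along these lines, assuming a plain Java 5 web project and a "Apache Tomcat v5.5" runtime already defined in the workspace:

        <?xml version="1.0" encoding="UTF-8"?>
        <faceted-project>
          <runtime name="Apache Tomcat v5.5"/>
          <fixed facet="jst.java"/>
          <fixed facet="jst.web"/>
          <installed facet="jst.java" version="5.0"/>
          <installed facet="jst.web" version="2.4"/>
        </faceted-project>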

  • "no block given" errors with cache_money

    - by emh
    I've inherited a site that in production is generating dozens of "no block given" exceptions every 5 minutes. The top of the stack trace is:

        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:42:in `add'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:33:in `get'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `call'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `fetch'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:31:in `get'

    So it appears that the problem is in the cache_money plugin. Has anyone experienced something similar? I've pasted the relevant code below (line numbers kept, since the trace refers to them). Is anyone more familiar with blocks able to discern any obvious problems?

        11      def fetch(keys, options = {}, &block)
        12        case keys
        13        when Array
        14          keys = keys.collect { |key| cache_key(key) }
        15          hits = repository.get_multi(keys)
        16          if (missed_keys = keys - hits.keys).any?
        17            missed_values = block.call(missed_keys)
        18            hits.merge!(missed_keys.zip(Array(missed_values)).to_hash)
        19          end
        20          hits
        21        else
        22          repository.get(cache_key(keys), options[:raw]) || (block ? block.call : nil)
        23        end
        24      end
        25
        26      def get(keys, options = {}, &block)
        27        case keys
        28        when Array
        29          fetch(keys, options, &block)
        30        else
        31          fetch(keys, options) do
        32            if block_given?
        33              add(keys, result = yield(keys), options)
        34              result
        35            end
        36          end
        37        end
        38      end
        39
        40      def add(key, value, options = {})
        41        if repository.add(cache_key(key), value, options[:ttl] || 0, options[:raw]) == "NOT_STORED\r\n"
        42          yield
        43        end
        44      end
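
    Reading the trace against those line numbers suggests this path: get is called with a block, fetch misses the cache and invokes the inner block (line 31), which calls add (line 33) without passing a block of its own; when memcached returns NOT_STORED (for example, another process stored the key first), the bare yield on line 42 raises LocalJumpError, i.e. "no block given". A hedged, minimal patch:

        def add(key, value, options = {})
          if repository.add(cache_key(key), value, options[:ttl] || 0, options[:raw]) == "NOT_STORED\r\n"
            yield if block_given?  # only yield when a fallback block was actually supplied
          end
        end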

  • How do I securely authenticate the calling assembly of a WCF service method?

    - by Tim
    The current situation is as follows: we have a production .NET 3.5 WCF service, used by several applications throughout the organization over wsHttpBinding or netTcpBinding. User authentication is done at the transport level, using Windows integrated security. This service has a method Foo(string parameter), which can only be called by members of given AD groups. The string parameter is obligatory.

    A new client application has come into play (.NET 3.5, C# console app) which eliminates the necessity of the string parameter. However, only calls from this particular application should be allowed to omit the string parameter. The identity of the caller of the client application should still be known by the server, because the AD group limitation still applies (ruling out impersonation on the client side).

    I found a way to pass on the "evidence" of the calling (strong-named) assembly in the message headers, but this method is clearly not secure, because the "evidence" can easily be spoofed. Also, CAS (code access security) seems like a possible solution, but I can't figure out how to make use of CAS in this particular scenario. Does anyone have a suggestion on how to solve this issue?

    Edit: I found another thread on this subject; apparently the conclusion there is that it is simply impossible to implement in a secure fashion.

  • Reduce number of config files to as few as possible

    - by Scott
    For most of my applications I use iBatis.Net for database access/modeling and log4Net for logging. In doing this, I need a number of *.config files for each project. For example, for a simple application I need the following:

        app.config ([AssemblyName].[Extension].config)
        [AssemblyName].SqlMap.config
        [AssemblyName].log4Net.config
        [AssemblyName].SqlMapProperties.config
        providers.config

    When these applications go from DEV to TEST to PRODUCTION environments, the settings contained in these files change depending on the environment. With the number of files compounded by having 5-10 (or more) supporting executables per project, the workload on the infrastructure team (the ones doing the roll-outs to the different environments) gets rather high. We also run a high risk of one of the config files being missed, or of a typo in a config file. What is the best way to avoid these risks? Should I combine all of the config files into one file? (Is that even possible with iBatis?) I know that Visual Studio 2010 introduces transforms for these config files, which let the developer set up all the settings for the different environments; depending on the build kicked off, the config files then get updated to the correct versions. Thank you for any help that you can provide.
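
    For illustration, a sketch of the VS 2010 transform approach mentioned above (names and values hypothetical): a Web.Release.config overlay patches only what differs per environment, so a single base config can travel from DEV to PRODUCTION:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <!-- replace the DEV connection string on Release builds -->
            <add name="Main"
                 connectionString="Data Source=prod-sql;Initial Catalog=AppDb;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
          </connectionStrings>
        </configuration>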

  • Windows Azure WebRole stuck in a deployment loop

    - by Rob G
    I've been struggling with this one for a couple of days now. My Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped. It never goes live, so I can never see the website. The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the MVC dll. I haven't even tried hooking up storage or a WorkerRole yet, and there is nothing happening inside the Start method that I can see would crash. I've really tried going back to basics to ensure nothing can complicate the process, and the website launches without a problem on the Dev Fabric; it looks just like the standard "Home"/"About" MVC app. I just can't get it running in the cloud! The funny thing is, a few days ago this exact package worked in the staging area in the cloud, and I could even see it in the browser, but I could never get it swapped over to production, so I deleted everything and started from scratch. Now I can't even get it running on staging... Does anyone have any ideas on what I could do to diagnose this problem myself? Since logging this problem on the forums 2 days ago, there has been no improvement or feedback. Any help appreciated. Regards, Rob G
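
    One hedged diagnostic step, assuming the 1.x Azure SDK that this era of MVC 2 implies: turn on Azure Diagnostics in OnStart so startup exceptions land in storage instead of vanishing with each recycle. A sketch:

        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // "DiagnosticsConnectionString" must be defined in the service configuration
                DiagnosticMonitor.Start("DiagnosticsConnectionString");
                return base.OnStart();
            }
        }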

  • Authenticating to Google Search Appliance using Basic HTTP auth and ASP.NET (VB)

    - by Chainlink
    I've run into a snag which has to do with authentication between the Google Search Appliance and ASP.NET. Normally, when asking for secure pages from the search appliance, the appliance asks for credentials, then uses those credentials to try to access the secure results. If this attempt is successful, the page shows up in the results list. Since ASP.NET is contacting the search appliance on the client's behalf, it needs to collect credentials and pass them along to the appliance. I have tried a couple of different documented ways of accomplishing this, but they don't seem to work. Below is the code I have tried:

        'Bypass SSL since discovery.gov.mb.ca does not have a valid SSL cert (NOT PRODUCTION SAFE)
        ServerCertificateValidationCallback = _
            New System.Net.Security.RemoteCertificateValidationCallback(AddressOf customXertificateValidation)

        googleUrl = "https://removed.com"

        Dim rdr As New XmlTextReader(googleUrl)
        Dim resolver As New XmlUrlResolver()
        Dim myCred As New System.Net.NetworkCredential("USERNAME", "PASSWORD", Nothing)
        Dim credCache As New CredentialCache()

        credCache.Add(New Uri(googleUrl), "Basic", myCred)
        resolver.Credentials = credCache
        rdr.XmlResolver = resolver

        doc = New System.Xml.XPath.XPathDocument(rdr)
        path = doc.CreateNavigator()

        Private Function customXertificateValidation(ByVal sender As Object, _
                ByVal certificate As System.Security.Cryptography.X509Certificates.X509Certificate, _
                ByVal chain As System.Security.Cryptography.X509Certificates.X509Chain, _
                ByVal sslPolicyErrors As Net.Security.SslPolicyErrors) As Boolean
            Return True
        End Function

  • General Drools Question

    - by El Guapo
    For the last few months my company has been using a product from Informatica (previously AgentLogic) called RulePoint. This product has proven very easy to use, with a well-developed and easy-to-use SDK for customization. The way we use the product for CEP is fairly trivial: we have two sources which we monitor for our rule data, the first being a JMS queue, the second being a Jabber IM account. The product runs on any Java application server (WebLogic, Tomcat, etc.) and runs just about flawlessly.

    Last week my boss said, "Hey, I've heard that we may be able to do the same thing we are doing with RulePoint with an open-source product called Drools. Check it out and let me know what you think." I've heard of people using Drools for flow-based operations (validation, etc.); however, I've never heard of anyone using their CEP product (Fusion) in practice. So, being the diligent worker, I have undertaken this task. I've downloaded all the files (version 5.0) and the accompanying documentation and have started to read. I've read through just about all the docs and run most of the examples, but I still don't really see HOW Drools works for CEP. While there are examples for using data (or facts, I guess) from JMS, I don't see how the thing stays "running", continuously monitoring a queue, until the application is actually stopped. RulePoint pretty much just sits and listens; Drools seems not to. I could probably write a full-blown command-line application for our needs, but I was hoping to leverage some of the benefits an application server provides. I guess I'm looking for some good tutorials or an example of how someone is using Drools and CEP in production. Thanks in advance for any information or advice you may be able to provide.
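
    On the "stays running" question, a sketch of the usual Drools 5 Fusion arrangement (API names as recalled from the 5.0 docs; treat as unverified): the knowledge base is configured for STREAM event processing, and fireUntilHalt() blocks on its own thread, evaluating rules as listeners insert events:

        KnowledgeBaseConfiguration conf = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
        conf.setOption(EventProcessingOption.STREAM);
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase(conf);
        // ... add compiled rule packages to kbase ...

        final StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        new Thread(new Runnable() {
            public void run() {
                ksession.fireUntilHalt(); // blocks; the engine keeps matching as events arrive
            }
        }).start();

        // a JMS MessageListener or Jabber callback then just inserts events:
        // ksession.getWorkingMemoryEntryPoint("events").insert(event);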

  • Access denied error on select into outfile using Zend

    - by Peter
    Hi, I'm trying to make a dump of a MySQL table on the server, and I'm trying to do this in Zend. I have a model/mapper/dbtable structure for all my connections to my tables, and I'm adding the following code to the mappers:

        public function dumpTable() {
            $db = $this->getDbTable()->getAdapter();
            $name = $this->getDbTable()->info('name');
            $backupFile = APPLICATION_PATH . '/backup/' . date('U') . '_' . $name . '.sql';
            $query = "SELECT * INTO OUTFILE '$backupFile' FROM $name";
            $db->query( $query );
        }

    This should work peachy, I thought, but instead I get:

        Message: Mysqli prepare error: Access denied for user 'someUser'@'localhost' (using password: YES)

    I checked the rights for someUser, and he has all the rights to the database and table in question. I've been looking around here and on the net in general, and usually turning on "all" the rights for the user seems to be the solution, but not in my case (unless I'm overlooking something right now with my tired eyes, plus I don't want to turn on "all" the rights on my production server). What am I doing wrong here? Or does anybody know a more elegant way to get this done in Zend?
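
    Worth ruling out: SELECT ... INTO OUTFILE requires the global FILE privilege, which is separate from all per-database and per-table rights, so a user can hold every grant on the schema and still be denied. A sketch (run as an administrative user):

        GRANT FILE ON *.* TO 'someUser'@'localhost';
        FLUSH PRIVILEGES;

    Note also that the file is written by the MySQL server process itself, so the server's OS user needs write access to the target directory.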

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used or what the best practices are; are they good enough or not?

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture:

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking about this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - A questionable design solution; it may run into unknown problems.
    - It is difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
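
    For concreteness, a minimal SQL CLR stored procedure has this shape (a sketch; the processing logic itself is hypothetical and would carry the rule code):

        using Microsoft.SqlServer.Server;
        using System.Data.SqlClient;

        public class Procedures
        {
            [SqlProcedure]
            public static void ProcessRawLines()
            {
                // the context connection runs in-process, on the caller's session,
                // which is what keeps the logic "close to the data"
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    // ... read staged rows, apply rules, write results ...
                }
            }
        }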

  • 'memcache-client' problem - app can't load the gem

    - by Max Williams
    Hi all - I'm trying to get memcached and the Interlock plugin working with a new Rails app. The weird thing is that they both work fine in another app on the same machine, and I can't figure out what difference is stopping this app. The new app is Rails 2.3.4 and the old one is 2.2.2, in case that's a factor. When the app starts, I get a warning from Interlock:

        `install_memcached': Interlock::ConfigurationError: 'memcache-client' client
        requested but not installed. Try 'sudo gem install memcache-client'.

    Now, I do have memcache-client installed:

        $ gem list | grep memcache
        memcache-client (1.7.8)

    The gem is in /var/lib/gems/1.8, which is in my GEM_PATH variable. On a bit of further investigation, the above error is raised by Interlock when it refers to the MemCache class, which doesn't exist and so raises an 'anonymous module' error. So, ultimately, the problem is that MemCache isn't loaded. I do have a memcached.yml in my config folder (below), however. I'm stuck - any advice, anyone?

        # contents of config/memcached.yml
        defaults:
          namespace: millionaire
          #sessions: true
          sessions: false
          client: memcache-client
          with_finders: true

        development:
          servers:
            - 127.0.0.1:11211

        production:
          servers:
            - 127.0.0.1:11211
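
    One hedged difference to rule out between the 2.2 and 2.3 apps: whether the gem is actually required at boot. The gem is named memcache-client, but its library file is memcache, so a declaration along these lines belongs in the initializer block:

        # config/environment.rb
        Rails::Initializer.run do |config|
          config.gem 'memcache-client', :lib => 'memcache'
        end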

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC. My database (which doesn't have any data in it yet, only the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database from my application by using the Server Explorer, followed by LINQ to SQL mapping. Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned that the SQL Server setup currently working on my development machine will be hard to reproduce on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio, which is then stored in the App_Data directory. My questions are the following:

    1. Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file? If so, how can I move it? I believe this is called the Detach command, is it not?
    2. Are there any performance/security issues that can occur with an .mdf file like this?
    3. Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan.

    I hope this question isn't too broad. Thanks in advance! Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.
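
    For reference, an App_Data-attached database is addressed with an AttachDbFilename connection string along these lines (a sketch; the User Instance option assumes SQL Server Express on the target machine, which a virtual host may not offer):

        <connectionStrings>
          <add name="AppDb"
               connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Site.mdf;Integrated Security=True;User Instance=True"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>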

  • How should bug tracking and help tickets integrate?

    - by Max Schmeling
    I have a little experience with bug tracking systems such as FogBugz, where help tickets are (or can be) bugs, and I have some experience using a bug tracking system internally, completely separate from a help center system. My question is: in a company with an existing (home-grown) help center system, where replacing it is not an option, how should a bug tracking system (probably Mantis) be integrated into the process?

    Right now help tickets get put in for issues, questions, etc. and get assigned to the appropriate person (PC tech, help desk staff, or, if it's an application issue the help desk can't solve, a developer). A user can put a request for small modifications or fixes to an application in a help ticket; the developer it gets assigned to makes the change at some point, applies their time to that ticket, and then closes the ticket when the change goes to production. We don't currently have a bug tracking system, so I'm looking into the best way to integrate one.

    Should we just take the help tickets, put them into the bug tracking system if they are bugs (or issues, or feature requests), and then close the ticket if it's not an emergency fix? We probably don't want to expose the bug tracking system to anyone else, as they wouldn't know what to put in the help center system and what to put in the bug tracker... right? Any thoughts? Suggestions? Tips? Advice? To-dos? Not-to-dos? etc...

  • Symfony/Propel NestedSet left/right ID corruption/adjustment

    - by Mike Crowe
    Hi folks, I have a nested set application that seems to be getting corrupted. Here's what I'm seeing: we're using nested sets for a binary tree (any node can have 2 children). It appears to be working fine, but some event causes a discrepancy. For instance, the getNumberOfDescendants() count for the root node will slowly increase as this event happens. However, displaying the tree works fine, as does inserting (apparently). Has anybody seen anything like this before? For instance, my repair program shows these as the repairs that it makes:

        User pxxxxx left 0=>0,     right 145=>129
        User axxxxx left 1=>1,     right 124=>106
        User mxxxxx left 119=>117, right 120=>118
        User fxxxxx left 125=>107, right 144=>128
        User fxxxxx left 126=>108, right 131=>113
        User rxxxxx left 127=>109, right 128=>110
        User mxxxxx left 129=>111, right 130=>112
        User mxxxxx left 132=>114, right 143=>127
        User cxxxxx left 133=>115, right 142=>126
        User gxxxxx left 134=>116, right 137=>121
        User mxxxxx left 135=>119, right 136=>120
        User jxxxxx left 138=>122, right 141=>125
        User axxxxx left 139=>123, right 140=>124

    I thought at first it happened when I deleted a user, but it has since occurred without that event. Does anybody know of a cause that might generate this? I've tested ad nauseam on my local machine, but I can't duplicate it. I do have one difference: my production box is PHP 5.2.0, whereas my test device is 5.2.10. Could that be an issue? TIA, Mike
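
    A hedged way to catch the drift early: run a couple of invariant checks against the tree whenever the counts move. Table and column names here are hypothetical; substitute whatever left/right columns your Propel nested-set behavior actually uses:

        -- every node's right value must exceed its left value
        SELECT id FROM users WHERE rgt <= lft;

        -- left/right values must be unique across the whole tree
        SELECT v, COUNT(*) AS n FROM (
          SELECT lft AS v FROM users
          UNION ALL
          SELECT rgt AS v FROM users
        ) AS bounds GROUP BY v HAVING n > 1;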

  • iPhone SDK Push notification randomly fails

    - by Jameson
    I have a PHP file with the following content that works perfectly with development certificates, but when I switch to a production certificate the PHP errors out with the message below, and it only does this about 50% of the time. The other 50% it works. Does anyone know why this might be happening?

        <?php
        // masked for security reasons
        $deviceToken = 'xxxxxx'; // jq

        $ctx = stream_context_create();
        stream_context_set_option($ctx, 'ssl', 'local_cert', dirname(__FILE__) . "/prod.pem");

        $number = 5;
        $fp = stream_socket_client('ssl://gateway.push.apple.com:2195', $err, $errstr, 60,
            STREAM_CLIENT_CONNECT, $ctx);

        if (!$fp) {
            print "Failed to connect $err $errstr\n";
        } else {
            print "Connection OK\n";
            $msg = $_GET['msg'];
            $payload['aps'] = array('alert' => $msg, 'badge' => 1, 'sound' => 'default');
            $payload = json_encode($payload);
            $msg = chr(0) . pack("n", 32) . pack('H*', str_replace(' ', '', $deviceToken))
                 . pack("n", strlen($payload)) . $payload;
            print "sending message :" . $payload . "\n";
            fwrite($fp, $msg);
            fclose($fp);
        }
        ?>

    The PHP error:

        Warning: stream_socket_client() [function.stream-socket-client]: Unable to set local cert chain file `/var/www/vhosts/thissite.com/httpdocs/prod.pem'; Check that your cafile/capath settings include details of your certificate and its issuer in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19
        Warning: stream_socket_client() [function.stream-socket-client]: failed to create an SSL handle in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19
        Warning: stream_socket_client() [function.stream-socket-client]: Failed to enable crypto in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19
        Warning: stream_socket_client() [function.stream-socket-client]: unable to connect to ssl://gateway.sandbox.push.apple.com:2195 (Unknown error) in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19
        Failed to connect 0
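
    Two hedged things to check, going by the "cafile/capath" hint in the first warning: make sure the context carries the private key passphrase if prod.pem is protected, and point the context at a CA bundle so the chain can be validated (the bundle filename below is an assumption; use whichever root CA file you actually have):

        // if the production key has a passphrase, the context needs it
        stream_context_set_option($ctx, 'ssl', 'passphrase', $privateKeyPassword);

        // give the context a CA bundle for chain validation
        stream_context_set_option($ctx, 'ssl', 'cafile', dirname(__FILE__) . '/ca_bundle.pem');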

  • Extending spring based app

    - by pitr
    I have a Spring-based web service. I now want to build a sort of plugin for it that extends it with beans. What I have now in web.xml is:

        <context-param>
          <param-name>contextConfigLocation</param-name>
          <param-value>/WEB-INF/classes/*-configuration.xml</param-value>
        </context-param>

    My core app has main-configuration.xml, which declares its beans. My plugin app has plugin-configuration.xml, which declares additional beans. When I deploy, my build deploys plugin.jar into /WEB-INF/lib/ and copies plugin-configuration.xml into /WEB-INF/classes/, all under main.war. This works (although I think there could be a better solution), but when I develop the plugin, I don't want to have two projects in Eclipse with dependencies. I wish to have a main.jar that I include as a library. However, web.xml from main.jar isn't automatically discovered. How can I do this? Bean injection? Bean discovery of some sort? Something else?

    Note: I expect to have multiple different plugins in production, but development of each of them will be against pure main.jar. Thank you.
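
    A sketch of one common arrangement, assuming each plugin jar ships its bean definitions under an agreed path: Spring's classpath*: prefix aggregates every matching resource from every jar on the classpath, so neither main.war nor main.jar has to know the plugins by name:

        <!-- in web.xml -->
        <context-param>
          <param-name>contextConfigLocation</param-name>
          <param-value>classpath*:META-INF/spring/*-configuration.xml</param-value>
        </context-param>

    Each plugin (and main.jar itself) then carries its own META-INF/spring/xxx-configuration.xml, and deployment reduces to dropping the jar into /WEB-INF/lib/.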

  • Where is my Drupal View pager?

    - by anotherthink
    Hi, I have a Drupal 6 site where I've created a view that shows a list of nodes. Nothing complicated -- except that when I choose "use pager" -- "yes" (and choose the "full pager" option), the pager doesn't show up on the page. The first page of nodes shows up, but there's no way to get to the other pages. Through googling, I saw that some people had an issue with the "Pager Element" item, so I changed that from 0 to 1 -- no luck. This shouldn't be very complicated, but I've been at it for a while! Help!?

    ETA: I've tracked it down to the following lines in /modules/views/theme/theme.inc:

        $pager_theme = views_theme_functions($pager_type, $view, $view->display_handler->display);
        $vars['pager'] = theme($pager_theme, $exposed_input, $view->pager['items_per_page'], $view->pager['element']);

    The first line returns an array; the second line returns nothing. I suspected this was a theming problem with the custom theme I'm using, which may not have been fully updated for Drupal 6 -- like maybe I'm missing a pager template somehow? However, I'm quite new to Drupal and don't really understand how to track down and fix the issue further. Any advice would be much appreciated!

    ETA yet again: the pager also doesn't show up when using Garland, so it's not a theme issue after all. ALSO: I have a copy of this site set up on a development server as well, and that copy has working pagination! I've checked what I thought might be different -- files in the theme, which modules are enabled -- and it seems like pretty much everything is the same. The one thing I know is different is that the production server has a lower version of MySQL than the development server (lower than recommended for Drupal 6 -- we're waiting on the hosting company to change this later). Would it make sense that the old version of MySQL is unable to do pagination correctly in Drupal 6? If so, does anyone know a workaround I can use until we are able to update MySQL?

  • Vlad the deployer on Dreamhost - initial script

    - by xmariachi
    Hi, I'm trying to deploy an app with SVN and Vlad the Deployer. Vlad and its dependencies are installed and seem OK. I'm trying the following:

        rake prod vlad:update

    with my config/deploy.rb file being:

        task :prod do
          set :application, "xxx"
          set :deploy_timestamped, "false"
          set :user, "username"
          set :scm_user, "scmusername"
          set :repository, "http://domain.com/svn/app"
          set :domain, "domain.com"
          set :deploy_to, "/home/username/deployments/app"
          puts "Production deployment to #{deploy_to}"
        end

    I have done "rake prod vlad:setup" already; that worked fine. But when calling "rake prod vlad:update", I get the following:

        A    ...file
        Exported revision 14.
        ln: creating symbolic link `/home/username/deployments/drupalgestalt/releases/20100503164225/public/system' to `/home/username/deployments/drupalgestalt/shared/system': No such file or directory
        rake aborted!
        execution failed with status 1: ssh domain.com ln -s /home/username/deployments/app/shared/log /home/username/deployments/app/releases/20100503164225/log && ln -s /home/username/deployments/app/shared/system /home/username/deployments/app/releases/20100503164225/public/system && ln -s /home/username/deployments/app/shared/pids /home/username/deployments/app/releases/20100503164225/tmp/pids

    Apparently it complains when creating the ln, but the permissions are all set up fine. Am I doing anything wrong? I'm just starting with Vlad on the assumption that it was super-easy to set up. I had played a bit with cap in the past, and I do like Vlad's idea.
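
    Reading the error, the symlink fails because shared/system does not exist on the server; vlad:setup does not necessarily create every shared subdirectory that the update task links to. A hedged workaround is to create the missing ones by hand once:

        ssh domain.com "mkdir -p /home/username/deployments/app/shared/log \
                                 /home/username/deployments/app/shared/system \
                                 /home/username/deployments/app/shared/pids"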

  • rails, mysql charsets & encoding: binary

    - by Benjamin Vetter
    Hi, I've a Rails app that runs using UTF-8. It uses a MySQL database, with all tables using MySQL's default charset and collation (i.e. latin1). The latin1 tables therefore contain UTF-8 data. Sure, that's not nice, but I'm not really interested in fixing it. Everything works fine, because the connection encoding is latin1 as well, and therefore MySQL does not convert between charsets. Only one problem: I need a UTF-8 fulltext index for one table:

        mysql> show create table autocompletephrases;
        ...
        AUTO_INCREMENT=310095 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    But I don't want to convert between charsets in my Rails app. Therefore I would like to know if I could just set, in config/database.yml:

        production:
          adapter: mysql
          encoding: binary
          ...

    which just calls SET NAMES 'binary' when connecting to MySQL. It looks like it works for my case, because I guess it forces MySQL not to convert between charsets at all (see the MySQL docs). Does anyone know about problems with doing this? Any side-effects? Or do you have any other suggestions? I'd like to avoid converting my whole database to UTF-8. Many thanks! Benjamin

  • What are the limitations of the .NET Assembly format?

    - by McKAMEY
    We just ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web Application Project in Visual Studio 2008 SP1, which gets compiled and deployed using a Web Deployment Project. Everything has worked fine for the last year until, after a check-in yesterday, the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors come from areas of the app that weren't even changed.

    Using Reflector we inspected the offending methods and found garbage strings in the code (which Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    The Web Deployment Project's "Output Assemblies" property offers these options:

    - Merge all outputs to a single assembly
    - Merge each individual folder output to its own assembly
    - Merge all pages and control outputs to a single assembly
    - Create a separate assembly for each page and control output

    If we set the merge options to the first option ("Merge all outputs to a single assembly"), we experience the issue, yet all of the other options work perfectly!

    So my question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any assembly format / aspnet_merge gurus know about any such limitations. It seems to me that a 25 MB assembly, while big, isn't outrageous. Less disk to hit if it is all pregen'd stuff.

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application I'm going to talk about in the next few lines: XYZ is a data-masking workbench, an Eclipse RCP application. You give it a source table column and a target table column; it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. When I mask n tables at a time, n threads are launched by this app.

    Here is the issue: I have run into a production problem on the first roll-out of the above app. Unfortunately, I don't have any logs to get to the root of it. However, I tried to run the app in the test region and do a stress test. When I collected the .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes holds a reference to this prepared statement object, and thereby n threads have n such objects. T4CPreparedStatement seemed to have character arrays, lastBoundChars and bindChars, each of size char[300000].

    So, I researched a bit (Google!), obtained ojdbc6.jar, and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars. So, my questions here are:

    1. Have you ever run into an issue like this?
    2. Do you know the significance of lastBoundChars / bindChars?
    3. I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT, and this was the main identified issue, so I don't really think I could be wrong?)

    I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681

    I appreciate your suggestions / advice.
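
    On the retention point, a hedged sketch of the usual mitigation: whatever class is holding the statement per thread, closing the statement when the copy finishes releases the driver's bind buffers along with it (plain JDBC; the names here are hypothetical):

        PreparedStatement ps = null;
        try {
            ps = connection.prepareStatement(copySql);
            // ... bind source values, execute the masked copy ...
        } finally {
            // lastBoundChars/bindChars live as long as the statement does
            if (ps != null) ps.close();
        }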
