Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • Auto switching databases from a rails app gracefully from the ApplicationController?

    - by Zaqintosh
    I've seen this post a few times, but haven't really found the answer to this specific question. I'd like to run a Rails application that switches databases based on the detected request.host (imagine I have two subdomains pointing to the same Rails app and server IP address: myapp1.domain.com and myapp2.domain.com). I'm trying to have myapp1 use the default "production" database, while myapp2 requests always use an alternative remote database. Here is an example of what I tried in ApplicationController that did not work:

        class ApplicationController < ActionController::Base
          helper :all
          before_filter :use_alternate_db

          private

          def use_alternate_db
            if request.host == 'myapp1.domain.com'
              regular_db
            elsif request.host == 'myapp2.domain.com'
              alternate_db
            end
          end

          def regular_db
            ActiveRecord::Base.establish_connection :production
          end

          def alternate_db
            ActiveRecord::Base.establish_connection(
              :adapter  => 'mysql',
              :host     => '...',
              :username => '...',
              :password => '...',
              :database => 'alternatedb'
            )
          end
        end

    The problem is that when it switches databases using this method, all connections get interrupted (including valid sessions across the different subdomains). All the examples online have people controlling database connectivity at the model level, but this would involve adding code all over my application. Is there some way to globally switch database connections on a per-request basis in the manner I'm suggesting above WITHOUT having to inject code all over my application? The added complexity here is that I'm using Heroku as a hosting provider, so I have no control at the Apache / Rails application server level. I have looked at solutions like dbcharmer and magicmodels, but none seem to show examples of doing it in the manner that I'm trying to. Thanks for any help!
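    For reference, the model-level approach I'm trying to avoid looks roughly like this (just a sketch; the class names and connection details are made up):

        # Hypothetical base class: every model that should hit the remote database
        # would have to inherit from it, which is the duplication I want to avoid.
        class AlternateDbModel < ActiveRecord::Base
          self.abstract_class = true
          establish_connection(
            :adapter  => 'mysql',
            :host     => 'remote-host',
            :username => 'user',
            :password => 'secret',
            :database => 'alternatedb'
          )
        end

        class LegacyWidget < AlternateDbModel
          set_table_name 'widgets'
        end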

    Read the article

  • PHP CURL Google Calendar using Private URL

    - by MooCow
    I'm trying to get an array of events from Google Calendar using the Private URL. I read the Google API document, but I want to try doing this without using the Zend library, since I have no idea what the eventual server file structure will be and I want to avoid having other people edit the code. I also did a search before posting and ran into the same condition, where PHP's curl_exec returns false with the URL but I get a JSON file if the URL is opened in a web browser. Since I'm using the Private URL, do I really need to authenticate against the Google server using Zend? I'm trying to have PHP clean up the array before encoding it for Flash.

        $URL = <string of the private URL from Google Calendar>
        $ch = curl_init($URL);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $data = curl_exec($ch);
        curl_close($ch);
        $result = json_decode($data);
        print '<pre>'.var_export($data,1).'</pre>';

    Screen output >>> false
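    Since curl_exec just gives me false, my next step is to dump the cURL error and try relaxing SSL verification, on the guess that the HTTPS certificate check is what's failing (a diagnostic sketch, not something I'd leave in production):

        $ch = curl_init($URL);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        // The private feed is served over HTTPS, so the certificate check is a suspect
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
        $data = curl_exec($ch);
        if ($data === false) {
            // curl_errno/curl_error report why the transfer failed
            echo 'cURL error ('.curl_errno($ch).'): '.curl_error($ch);
        }
        curl_close($ch);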

    Read the article

  • what's the correct way to release a new website?

    - by kk
    So I've been working on a website on and off for about a year now, and I'm finally at a point where it's functional enough to test out in a sort of private beta (not ready for live release). But I never thought about the correct process for doing this and what things I need to take care of. I've never released a public website before. Some of the questions/concerns I have in mind:

    1) Is it against my MSDN license agreement to release a website using the software?

    2) How do I protect my "idea"? Is it a bad idea to find random people you don't know to test out your site? Can you make them digitally sign some sort of NDA?

    3) I'm using some open source code - is there a proper way to release open source code to live production?

    4) How much traffic can a place like discountasp.net handle anyway? Can hosting sites generally handle a large volume of traffic?

    Any comments/suggestions regarding the proper/safe way to release a public website would be appreciated. I've been working on this for a while and never actually sat down to think about the right way to move from a personal side project to a live production website.

    Read the article

  • vlad the deployer: why do I need a scm folder?

    - by egarcia
    I'm learning to use Vlad the Deployer and I've got a question. Since I'm still learning, I don't know what is pertinent to the question and what isn't, so please bear with me if I'm a little verbose. I've got two environments for a new application (test and production) besides my development machine. I've figured out this way to do the initial setup in my vlad.rake:

        namespace :test do
          task :set do
            set :domain, 'test.myserver.com'
          end
        end

        namespace :production do
          task :set do
            set :domain, 'www.myserver.com'
          end
        end

    This way I can have environment-specific stuff inside the namespaces, and still have shared tasks. For example, this would be the initial setup for test:

        rake vlad:test:set vlad:setup vlad:update

    This creates the following folders on my test server:

        releases/
        scm/
        shared/
        current -> symlink to last release (inside the releases folder)

    My question is: what's the point of the scm folder? Every time I do vlad:update, the following happens:

    1) svn checkout into the scm/ folder above
    2) svn export into the /releases/{date} folder
    3) update of the current symlink

    So scm is a copy of my repository... but then there's an "export" copy of the repository in /releases/{date}, and that is the one used by the application. scm doesn't seem to be used by anyone? Wouldn't I be just fine without the scm folder?

    Read the article

  • Good-practices: How to reuse .csproj and .sln files to create your MSBuild script for CI ?

    - by Gishu
    What is the painless/maintainable way of using MSBuild as your build runner? (Forgive the length of this post.) I was just trying my hand at TeamCity (which I must say is awesome w.r.t. learning curve and out-of-the-box functionality). I got an SVN + MSBuild + NUnit + NCover combo working. I was curious as to how moderate to large projects are using MSBuild - I've just pointed MSBuild at my main .sln file. I spent some time with NAnt some years ago and I found MSBuild to be a bit obtuse. The docs are too dense/detailed for a beginner.

    MSBuild seems to have some special magic to handle .sln files; I tried my hand at writing a custom build script by hand, linking/including .csproj files in order (such that I could have custom pre/post build tasks). However it threw up (citing duplicate target imports). I'm assuming most devs wouldn't want to go messing around with MSBuild proj files - they'd be making changes to the .csproj and .sln files. Is there some tool / MSBuild task that reverse-engineers a new script from an existing .sln + its .csproj files that I'm unaware of? If I'm using MSBuild just to do the compile step, I might as well use NAnt with an exec task to MSBuild for compiling the solution. I've this nagging feeling that I'm missing something obvious.

    My end goal here is to have an MSBuild script which:

    - builds the solution, so that it acts as a real build script instead of just a compile step
    - allows custom pre/post tasks (e.g. call NUnit to run an NUnit project, which seems to be not yet supported via the TeamCity web UI)
    - stays out of the way of the developers making changes to the solution
    - has no redundancy; it shouldn't require devs to make the same change in 2 places
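    To make the goal concrete, the kind of wrapper script I have in mind is roughly this (a sketch; the file names, target names and NUnit path are placeholders):

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Test">
          <Target Name="Build">
            <!-- Delegate the compile to the existing solution file -->
            <MSBuild Projects="Main.sln" Properties="Configuration=Release" />
          </Target>
          <Target Name="Test" DependsOnTargets="Build">
            <!-- Custom post-build step: run the NUnit project -->
            <Exec Command="nunit-console.exe MyTests\bin\Release\MyTests.nunit" />
          </Target>
        </Project>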

    Read the article

  • How to delete a large cookie that causes Apache to 400

    - by jakemcgraw
    I've come across an issue where a web application has managed to create a cookie on the client which, when submitted by the client to Apache, causes Apache to return the following:

        HTTP/1.1 400 Bad Request
        Date: Mon, 08 Mar 2010 21:21:21 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Length: 7274
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Size of a request header field exceeds server limit.<br />
        <pre>
        Cookie: ::: A REALLY LONG COOKIE :::
        </pre>
        </p>
        <hr>
        <address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address>
        </body></html>

    After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this; I was under the impression browsers were supposed to prevent this from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again. The issue I'm trying to tackle is: how do I reset the large cookie on the client if, every time the client tries to submit a request to Apache, Apache returns a 400 client error? I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.
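    One idea I'm weighing, though I'm not sure a stock Apache 2.2 build even lets you raise this above the compiled-in 8190-byte default, is to temporarily bump the header-field limit just long enough for clients to reach a cookie-clearing page, then put it back:

        # httpd.conf - temporarily accept larger request header fields (assumption:
        # the build allows values above the default)
        LimitRequestFieldSize 16380

    and have that page send the cookie back already expired (assuming a PHP page just for illustration; the cookie name, path and domain are placeholders):

        setcookie('bigcookie', '', time() - 3600, '/', '.foobar.com');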

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.
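    To make the batch idea concrete, what I'm picturing is an off-peak job that materialises the aggregations into summary tables so the views no longer have to group/sum/average over the 90-million-row tables at query time (a sketch with made-up table and column names):

        -- Nightly job: pre-compute the grouping, summing and averaging the views do today
        INSERT INTO account_daily_summary (account_id, activity_date, txn_count, total_amount, avg_amount)
        SELECT account_id, activity_date, COUNT(*), SUM(amount), AVG(amount)
        FROM transactions
        GROUP BY account_id, activity_date;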

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also:

    - using the same PixelFormat as the source bitmap, to avoid format conversion
    - using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer

    This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again). I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image).

    I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy), so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format.

    What raster image file formats support 48bpp and/or 64bpp?

    Read the article

  • PHP XML Strategy: Parsing DOM to fill "Bean"

    - by Mike
    I have a question concerning a good strategy for filling a data "bean" with data from an XML file. The bean might look like this:

        class Biography {
            var $url = "";
            var $id;
        }

        class Person {
            var $id;
            var $forename = "";
            var $surname = "";
            var $bio;   // initialised in the constructor; PHP does not allow "new" in a property default

            function __construct() {
                $this->bio = new Biography();
            }
        }

    The XML subtree containing the info might look like this:

        <root>
          <!-- some more parent elements before node(s) of interest -->
          <person>
            <name pre="forename"> Foo </name>
            <name pre="surname"> Bar </name>
            <id> 1254 </id>
            <biography>
              <url> http://www.someurl.com </url>
              <id> 5488 </id>
            </biography>
          </person>
        </root>

    At the moment I have one approach using DOMDocument. A method iterates over the entries and fills the bean by "remembering" the last node. I think that's not a good approach. What I have in mind is something like preconstructing some XPath expression(s), iterating over the subtrees/nodeLists, and eventually returning an array containing the beans as defined above. However, it does not seem to be possible to reuse a subtree/DOMNode as a DOMXPath constructor parameter. Has anyone of you encountered such a problem?
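    The direction I'd like to take, if DOMXPath can be pointed at a subtree via a context node rather than its constructor, is roughly this (a sketch; the file name is a placeholder and the property names follow the classes above):

        $doc = new DOMDocument();
        $doc->load('people.xml');               // placeholder file name
        $xpath = new DOMXPath($doc);            // constructed once, from the document

        $people = array();
        foreach ($xpath->query('//person') as $personNode) {
            $person = new Person();
            // The second argument makes each expression relative to the subtree,
            // which avoids needing a new DOMXPath per node
            $person->forename = trim($xpath->evaluate('string(name[@pre="forename"])', $personNode));
            $person->surname  = trim($xpath->evaluate('string(name[@pre="surname"])',  $personNode));
            $person->id       = trim($xpath->evaluate('string(id)', $personNode));
            $person->bio->url = trim($xpath->evaluate('string(biography/url)', $personNode));
            $person->bio->id  = trim($xpath->evaluate('string(biography/id)', $personNode));
            $people[] = $person;
        }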

    Read the article

  • Problem retrieving values from Zend_Form_SubForms - no values returned

    - by anu iyer
    I have a Zend_Form that has 4 or more subforms.

        /** Code Snippet **/
        $bigForm = new Zend_Form();

        $littleForm1 = new Form_LittleForm1();
        $littleForm1->setMethod('post');

        $littleForm2 = new Form_LittleForm2();
        $littleForm2->setMethod('post');

        $bigForm->addSubForm($littleForm1, 'littleForm1', 0);
        $bigForm->addSubForm($littleForm2, 'littleForm2', 0);

    On clicking the 'submit' button, I'm trying to print out the values entered into the forms, like so:

        /** Code snippet, currently not validating, just printing **/
        if ($this->_request->getPost()) {
            $formData = array();
            foreach ($bigForm->getSubForms() as $subForm) {
                $formData = array_merge($formData, $subForm->getValues());
            }
            /* Testing */
            echo "<pre>";
            print_r($formData);
            echo "</pre>";
        }

    The end result is that all the elements in the form do get printed, but the values entered before posting the form don't. Any thoughts are appreciated... I have run around in circles working on this! Thanks in advance!
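    One thing I haven't tried yet (a sketch, so I may be off base) is pushing the POST data into the parent form before reading the values back, on the theory that getValues() only returns what the form has actually been given:

        if ($this->_request->isPost()) {
            $post = $this->_request->getPost();
            if ($bigForm->isValid($post)) {          // or $bigForm->populate($post) to skip validation
                $formData = $bigForm->getValues();   // should include each subform's values
                echo "<pre>"; print_r($formData); echo "</pre>";
            }
        }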

    Read the article

  • ASP.net AppendHeader not working in ASP MVC

    - by Chao
    I'm having problems getting AppendHeader to work properly if I am also using an authorize filter. I'm using an action filter for my AJAX actions that applies Expires, Last-Modified, Cache-Control and Pragma (though while testing I have tried including it in the action method itself, with no change in results). If I don't have an authorize filter, the headers work fine. Once I add the filter, the headers I tried to add get stripped.

    The headers I want to add:

        Response.AppendHeader("Expires", "Sun, 19 Nov 1978 05:00:00 GMT");
        Response.AppendHeader("Last-Modified", String.Format("{0:r}", DateTime.Now));
        Response.AppendHeader("Cache-Control", "no-store, no-cache, must-revalidate");
        Response.AppendHeader("Cache-Control", "post-check=0, pre-check=0");
        Response.AppendHeader("Pragma", "no-cache");

    An example of the headers from a correct page:

        Server: ASP.NET Development Server/9.0.0.0
        Date: Mon, 14 Jun 2010 17:22:24 GMT
        X-AspNet-Version: 2.0.50727
        X-AspNetMvc-Version: 2.0
        Pragma: no-cache
        Expires: Sun, 19 Nov 1978 05:00:00 GMT
        Last-Modified: Mon, 14 Jun 2010 18:22:24 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Content-Type: text/html; charset=utf-8
        Content-Length: 352
        Connection: Close

    And from an incorrect page:

        Server: ASP.NET Development Server/9.0.0.0
        Date: Mon, 14 Jun 2010 17:27:34 GMT
        X-AspNet-Version: 2.0.50727
        X-AspNetMvc-Version: 2.0
        Pragma: no-cache, no-cache
        Cache-Control: private, s-maxage=0
        Content-Type: text/html; charset=utf-8
        Content-Length: 4937
        Connection: Close
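    A variation I've been meaning to try (just a sketch; I don't know yet whether it survives the authorize filter) is going through the response's cache policy object instead of appending raw headers, on the theory that whatever the filter does to caching happens at that level:

        // Inside the action or the action filter's OnActionExecuted
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetExpires(new DateTime(1978, 11, 19, 5, 0, 0, DateTimeKind.Utc));
        Response.Cache.SetLastModified(DateTime.Now);
        Response.Cache.SetNoStore();
        Response.Cache.AppendCacheExtension("must-revalidate, post-check=0, pre-check=0");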

    Read the article

  • How to connect a new query script with SSMS add-in?

    - by squillman
    I'm trying to create an SSMS add-in. One of the things I want to do is to create a new query window and programmatically connect it to a server instance (in the context of a SQL login). I can create the new query script window just fine, but I can't find how to connect it without first manually connecting to something else (like the Object Explorer). In other words, if I connect Object Explorer to a SQL instance manually and then execute the method of my add-in that creates the query window, I can connect it using this code:

        ServiceCache.ScriptFactory.CreateNewBlankScript(
            Editors.ScriptType.Sql,
            ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo,
            null);

    But I don't want to rely on CurrentlyActiveWndConnectionInfo.UIConnectionInfo for the connection. I want to set a SQL login username and password programmatically. Does anyone have any ideas?

    EDIT: I've managed to get the query window connected by setting the last parameter to an instance of System.Data.SqlClient.SqlConnection. However, the connection uses the context of the last login that was connected instead of what I'm trying to set programmatically. That is, the user it connects as is the one selected in the Connection Dialog that you get when you click the New Query button and don't have an Object Explorer connected.

    EDIT 2: I'm writing (or hoping to write) an add-in to automatically send a SQL statement and the execution results to our case-tracking system when run against our production servers. One thought I had was to remove write permissions and assign logins through this add-in, which will also force the user to enter a case #, canceling the statement if it's not there. Another thought I've just had is to inspect the server name in ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo and compare it to our list of production servers. If it matches and there's no case #, then cancel the query.

    Read the article

  • Version control for PHP Developement

    - by Vinod Mohan
    Hello, I have been hearing a lot about the advantages of using a version control system and would like to try one. I have been doing freelance web development in PHP for the past 2 years; two months back I hired two more programmers to help me, and I will be hiring one more person soon. We maintain 4 websites, all of which are my own, which are continuously being edited by one of us. I learned PHP by myself and have never worked at any other firm, hence I am new to version control, unit testing and all.

    Currently, we have development servers on our workstations. When we edit a particular section of a site, we download the code for that particular section (say /news/ or /movies/ or /wallpapers/) from the production server to the dev server, make the changes locally and upload to the production server (no code review / testing). Because of this, our dev server is always out of date from our prod server. Occasionally, this also creates problems when one of us forgets to download the latest copy from prod and overwrites the last change. I know this is very very foolish, but currently our prod server is the only copy that has all the updates and latest changes.

    Can anyone please suggest which is the best version control system for me? I am more interested in distributed version control, since we don't have a central backup for all our code. I read about Mercurial and Git and found that Mercurial is used in several large open source projects by Mozilla, Sun, Symbian etc. So which one do you think is better for me? Not only version control - if there is any other package that I can use to make my current setup better, please mention that too :) Thanks and Regards, Vinod Mohan
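    To check my understanding of how a distributed setup would replace our download-edit-upload cycle, this is roughly the day-to-day flow I picture with Mercurial (a sketch; the paths and server name are made up):

        # one-time: a shared clone on a server that everyone pushes to
        hg init /srv/hg/website1

        # each of us works in a full local clone
        hg clone ssh://devserver//srv/hg/website1 website1
        cd website1
        # ...edit /news/ or /movies/ locally...
        hg commit -m "Fix layout on news section"
        hg pull -u          # grab and merge the others' changes
        hg push             # publish; deploying to prod becomes a separate, deliberate step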

    Read the article

  • Changing customErrors in web.config semi-dynamically

    - by Tom Ritter
    The basic idea is that we have a test environment which mimics Production, so customErrors="RemoteOnly". We just built a test harness that runs against the test environment and detects breaks. We would like it to be able to pull back the detailed error, but we don't want to turn customErrors="On" because then it doesn't mimic Production. I've looked around and thought a lot, and everything I've come up with isn't possible. Am I wrong about any of these points?

    - We can't turn customErrors on at runtime, because when you call configuration.Save() it writes the web.config to disk and now it's Off for every request.
    - We can't symlink the files into a new top-level directory with its own web.config, because we're on Windows and Subversion on Windows doesn't do symlinks.
    - We can't use URL mapping to make an empty folder dir2 with its own web.config and make the files in dir1 appear to be in dir2 - the web.config doesn't apply.
    - We can't copy all the aspx files into dir2 with its own web.config, because none of the links would be consistent and it's a horrible hacky solution.
    - We can't change customErrors in web.config based on hostname (e.g. add another DNS entry to the test server), because it's not possible/supported.
    - We can't do any virtual directory shenanigans to make it work.

    If I'm not wrong about those, is there a way to accomplish what I'm trying to do: turn on customErrors site-wide under certain circumstances (DNS name or even a querystring value)?

    Read the article

  • Mysteriously empty $_POST array

    - by Lex
    Hi all! I have the following HTML/PHP page:

        <?php
        if (empty($_SERVER['CONTENT_TYPE'])) {
            $type = "application/x-www-form-urlencoded";
            $_SERVER['CONTENT_TYPE'] = $type;
        }
        echo "<pre>";
        var_dump($_POST);
        var_dump(file_get_contents("php://input"));
        echo "</pre>";
        ?>
        <form method="post" action="test.php">
            <input type="text" name="test[1]" />
            <input type="text" name="test[2]" />
            <input type="text" name="test[3]" />
            <input type="submit" name="action" value="Go" />
        </form>

    As you can see, the form will submit, and the expected output is a POST array with one array in it containing the filled-in values and one entry "action" with the value "Go" (the button). However, no matter what values I enter in the fields, the result is always:

        array(2) {
          ["test"]=>
          string(0) ""
          ["action"]=>
          string(2) "Go"
        }
        string(16) "test=&action=Go&"

    Somehow the array named test is emptied, while the "action" variable does make it through. I've used the Live HTTP Headers extension for Firefox to check whether the POST fields get submitted, and they do. The relevant information from Live HTTP Headers (with a, b and c filled in as values in the textboxes):

        Content-Type: application/x-www-form-urlencoded
        Content-Length: 51

        test%5B1%5D=a&test%5B2%5D=b&test%5B3%5D=c&action=Go

    Does anybody have any idea as to why this is happening? I'm freaking out on this one; it has cost me so much time already...
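    To narrow it down, my next step is to compare what the client claims it sent with what PHP actually received (a quick diagnostic sketch):

        // Live HTTP Headers shows Content-Length: 51, so if this prints a smaller
        // number the body is being rewritten somewhere between the browser and PHP
        // (proxy, security module, ...) rather than during $_POST population.
        $raw = file_get_contents("php://input");
        echo strlen($raw) . ' of ' . $_SERVER['CONTENT_LENGTH'] . " bytes received\n";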

    Read the article

  • Suppress Eclipse compiler errors in in a plug-in

    - by Jan Gorzny
    Hi, I'm currently working on a plug-in for Eclipse that translates some custom Java code (which doesn't necessarily run/compile) to runnable Java code. In particular, the plug-in allows code to be written using classes created or imported during the translation. In general, the pre-translation code runs/compiles fine provided the writer uses import statements at the top of their class files. However, it would be convenient for my users if it were not necessary to import these classes. At the moment, the lack of import statements results in (obvious) compiler errors. Would it be possible to empower my plug-in to either a) suppress/ignore these errors, or b) have Eclipse find these classes automatically, without the use of import statements? I should point out that the translated code would then include the required import statements, but this is not a problem for me. I'm also aware that this could lead to lazy programmers and some bad habits.

    To clarify, consider the following example of pre-translated code:

        File f = new File("Somefilename.txt");

    which clearly requires the possibly imported class File. Without an import statement (import java.io.File;), Eclipse reports that File cannot be resolved to a type. This is the error I'd like to hide in files pertaining to projects created for use with my plug-in. (The translated code would include import java.io.File; so that it would be runnable.)

    In closing, I should point out I'm not necessarily looking for code (though I wouldn't be opposed to it), but rather some links to relevant tutorials (if they exist), or helpful tips/ideas. Also, as this is my first plug-in, it's entirely possible that what I'd like to do is not possible and that I don't realize it; if this is the case, please let me know, preferably with some justification. Thanks!

    Read the article

  • Using hg repository as web site

    - by Tex
    This is somewhat related to my security question here. Is it a bad idea to use an hg / Mercurial repository for a live website? If so, why?

    Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?

    I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical guys (software engineers) won't have any problem working with the command line or TortoiseHG. I'm more concerned about the less technical guys (web designers). They won't be comfortable working on the command line, and may even find TortoiseHG daunting. These guys mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non-technical guys. What's the best way to achieve this?

    Edit: Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:

        /root            [main repository containing core files and subrepositories]
        /modules         [modules subrepository]
        /sites/global    [subrepository for global .css and .php files]
        /sites/site1     [site1 subrepository]
        ...
        /sites/siteN     [siteN subrepository]

    Software engineers would work in the root, modules and sites/global repositories. Less technical guys (web designers) would work only in the site1 ... siteN subrepositories.
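    For the designers, the kind of setup I'm imagining (a sketch; the path and hook choice are assumptions on my part) is a server-side clone per site whose working copy updates whenever someone pushes, so publishing a .css change becomes nothing more than "commit and push" from TortoiseHG:

        # .hg/hgrc in the server-side clone the dev/test site is served from
        [hooks]
        changegroup = hg update -C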

    Read the article

  • Why am I getting an error on Heroku that suggests I need to migrate my app to Bamboo?

    - by user242065
    When I type git push heroku master, this is what happens:

        @68-185-86-134:sample_app git push heroku master
        Counting objects: 110, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (94/94), done.
        Writing objects: 100% (110/110), 87.48 KiB, done.
        Total 110 (delta 19), reused 0 (delta 0)

        -----> Heroku receiving push
        -----> Rails app detected
         !     This version of Rails is only supported on the Bamboo stack
         !     Please migrate your app to Bamboo and push again.
         !     See http://docs.heroku.com/bamboo for more information
         !     Heroku push rejected, incompatible Rails version

        error: hooks/pre-receive exited with error code 1
        To [email protected]:blazing-frost-89.git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to '[email protected]:blazing-frost-89.git'

    My .gems file:

        rails --version 2.3.8

    My .git/config file:

        [core]
            repositoryformatversion = 0
            filemode = true
            bare = false
            logallrefupdates = true
            ignorecase = true
        [remote "origin"]
            url = [email protected]:csmeder/sample_app.git
            fetch = +refs/heads/*:refs/remotes/origin/*
        [remote "heroku"]
            url = [email protected]:blazing-frost-89.git
            fetch = +refs/heads/*:refs/remotes/heroku/*

    Read the article

  • php curl login problem

    - by user331329
    <?php
        // create a new CURL resource
        $file_path = '/mail';
        define("COOKIE_FILE", "c:\cookie.txt");
        $ch = curl_init();

        // set URL and other appropriate options
        curl_setopt($ch, CURLOPT_URL, "https://mail.gov.in/iwc/signin");
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6");
        curl_setopt($ch, CURLOPT_TIMEOUT, 60);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_COOKIESESSION, TRUE);

        session_write_close();
        $strCookie = 'PHPSESSID=d095af0e30afc021dd3652734009' . $_COOKIE['PHPSESSID'] . '; path=/mail';
        curl_setopt($ch, CURLOPT_COOKIE, $strCookie);
        curl_setopt($ch, CURLOPT_COOKIEJAR, COOKIE_FILE);
        curl_setopt($ch, CURLOPT_COOKIEFILE, COOKIE_FILE);

        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'fromLogin=true&domainName=nic.in&username=&password=&button=Sign%20In');

        $url = curl_getinfo($ch);

        // grab URL and pass it to the browser
        $data = curl_exec($ch);
        echo $data."<pre>";
        echo "<pre>";
        print_r($url);

        // close CURL resource, and free up system resources
        curl_close($ch);
        ?>

    What's wrong with my code? Why am I not able to log in directly to their mail?
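    For what it's worth, the first things I plan to change (a sketch; the username/password values are placeholders) are to actually fill in the credentials, let http_build_query handle the encoding, and call curl_getinfo after the transfer instead of before it:

        $fields = array(
            'fromLogin'  => 'true',
            'domainName' => 'nic.in',
            'username'   => 'myuser',    // placeholder
            'password'   => 'mypass',    // placeholder
            'button'     => 'Sign In',
        );
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));

        $data = curl_exec($ch);
        $info = curl_getinfo($ch);       // only has useful data once the transfer has run
        print_r($info);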

    Read the article

  • Android SQLite database locale, locking, and version

    - by gdoten
    In some books and online I see these method calls being made after a database is created:

        db.setLocale(Locale.getDefault());
        db.setLockingEnabled(true);
        db.setVersion(DB_VERSION);

    Why is this done? As far as I can tell, after creating a new database, the system adds a table named android_metadata with one field named locale, and that table has one row with the locale field set to "en_US". Now I assume the column has that value because I am using a U.S. phone, and if I were using a phone from a different region then the locale field would be set appropriately. Can anyone confirm this? I'm guessing that the setLocale method would only be useful in the case that you install a pre-built database onto a phone and then want to change the locale to match the phone's locale. Sound right?

    The documentation for setLockingEnabled says it defaults to true, so there's no need to make that call, right?

    Lastly, what's with the call to setVersion? I can't find a table that contains this information, so I've been assuming that the database file itself stores the version number somewhere. So when I create a database, which requires you to have already specified the version number in the call to the SQLiteOpenHelper constructor, there's no point in calling setVersion. Again, perhaps this method exists for the case of installing a pre-built database to a device and you then wish to change the database's version (though I can't think of when doing this would make sense). Thanks for any insight!
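    For context, my own helper looks roughly like this (a trimmed sketch; the table and file names are made up), which is why an extra setVersion call seems redundant to me: the version passed to the superclass constructor is already what onUpgrade gets compared against.

        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;

        public class MyDbHelper extends SQLiteOpenHelper {
            private static final int DB_VERSION = 2;

            public MyDbHelper(Context context) {
                // the framework records DB_VERSION in the database file itself
                super(context, "my.db", null, DB_VERSION);
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY, body TEXT)");
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                // schema migrations from oldVersion to newVersion go here
            }
        }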

    Read the article

  • Throttling outbound API calls generated by a Rails app

    - by Sharpie
    I am not a professional web developer, but I like to wrench on websites as a hobby. Recently, I have been playing with developing a Rails app as a project to help me learn the framework. The goal of my toy app is to harvest data from another service through their API and make it available for me to query using a search function. However, the service I want to pull data from imposes a rate limit on the number of API calls that may be executed per minute. I plan on having my app run a daily update which may generate a burst of API calls that far exceeds the limit provided by the external service. I wish to respect the performance of the external site and so would like to throttle the rate at which my app executes the calls. I have done a little bit of searching and the overwhelming amount of tutorial material and pre-built libraries I have found cover throttling inbound API calls to a web app and I can find little discussion of controlling the flow of outbound calls. Being both an amateur web developer and a rails newbie, it is entirely possible that I have been executing the wrong searches in the wrong places. Therefore my questions are: Is there a nice website out there aggregating Rails tutorials that has material related to throttling outbound API requests? Are there any ruby gems or other libraries that would help me throttle the requests? I have some ideas of how I might go about writing a throttling system using a queue-based worker like DelayedJob or Resque to manage the API calls, but I would rather spend my weekends building the rest of the site if there is a good pre-built solution out there already.
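    To make it concrete, the kind of thing I've been sketching on paper is a tiny rate limiter the daily update would call before each outbound request (made-up names; 30 calls per minute picked arbitrarily):

        class Throttle
          def initialize(max_per_minute)
            @interval = 60.0 / max_per_minute
            @last_call = nil
          end

          def wait
            if @last_call
              elapsed = Time.now - @last_call
              sleep(@interval - elapsed) if elapsed < @interval
            end
            @last_call = Time.now
          end
        end

        throttle = Throttle.new(30)        # respect the external service's limit
        records_to_refresh.each do |record|
          throttle.wait
          fetch_from_api(record)           # hypothetical method that performs the call
        end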

    Read the article

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as that would require more permissions than I'm allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus.

    drop.sql:

        create or replace procedure drop_all_cdi_tables is
          cur integer;
        begin
          cur := dbms_sql.OPEN_CURSOR();
          for t in (select table_name from user_tables) loop
            execute immediate 'drop table ' || t.table_name || ' cascade constraints';
          end loop;
          dbms_sql.close_cursor(cur);

          cur := dbms_sql.OPEN_CURSOR();
          for t in (select sequence_name from user_sequences) loop
            execute immediate 'drop sequence ' || t.sequence_name;
          end loop;
          dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010
        Copyright (c) 1982, 2008, Oracle.  All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.

        PL/SQL procedure successfully completed.

        Procedure created.

        Procedure dropped.

        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        SQL Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?
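    One alternative I'm considering is skipping the named procedure entirely and doing the work in an anonymous PL/SQL block, so there is nothing left over to drop afterwards (a sketch of the same logic):

        begin
          for t in (select table_name from user_tables) loop
            execute immediate 'drop table ' || t.table_name || ' cascade constraints';
          end loop;
          for s in (select sequence_name from user_sequences) loop
            execute immediate 'drop sequence ' || s.sequence_name;
          end loop;
        end;
        /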

    Read the article

  • ibm informix spatial datablade select statement error

    - by changed
    Hi, I am using the IBM Informix spatial DataBlade module for some geo-specific data. I am trying to find points in table xmlData lying in a specified region, but I am getting this error for the select statement:

        SELECT sa.pre, sa.post FROM xmlData sa
        WHERE ST_Contains(ST_PolyFromText('polygon((2 2,6 2,6 6,2 6,2 2))',6), sa.point)

        Query: select count(*) as mycnt fromText('polygon((2 2,6 2,6 6,2 6,2 2))',6),sa.point)
        Error: -201 [Informix][Informix ODBC Driver][Informix]A syntax error has occurred.
        (SQLPrepare[-201] at /work/lwchan/workspace/OATPHPcompile/pdo_informix/pdo_informix/informix_driver.c:131)

    If anyone can help me with this. This is how the table and data are set up:

        CREATE TABLE xmlData (
            row_id  integer NOT NULL,
            x       integer,
            y       integer,
            tagname varchar(40,1),
            point   ST_POINT
        );

        EXECUTE FUNCTION SE_CreateSRID(0, 0, 250000, 250000, "use the return value in next query last column");

        INSERT INTO geometry_columns
            (f_table_catalog, f_table_schema, f_table_name, f_geometry_column, geometry_type, srid)
        VALUES
            ("mydatabase",   -- database name
             "informix",     -- user name
             "xmlData",      -- table name
             "point",        -- spatial column name
             1,              -- column type (1 = point)
             6);             -- srid; use value returned by above query

        INSERT INTO xmlData VALUES ( 1, 20,20, 'country', ST_PointFromText('point (20 20)',6) );
        INSERT INTO xmlData VALUES ( 1, 12,13, 'sunday',  ST_PointFromText('point (12 13)',6) );
        INSERT INTO xmlData VALUES ( 1, 21,22, 'monday',  ST_PointFromText('point (21 22)',6) );

        SELECT sa.pre, sa.post FROM xmlData sa
        WHERE ST_Contains(ST_PolyFromText('polygon((1 1,30 1,30 30,1 30,1 1))', 6), sa.point);

    I am using the following query as a reference ("ibm link"):

        SELECT name, type, zone
        FROM sensitive_areas
        WHERE SE_EnvelopesIntersect(zone, ST_PolyFromText('polygon((20000 20000,60000 20000,60000 60000,20000 60000,20000 20000))', 5));

    Read the article

  • ASP.NET MVC intermittent slow response

    - by arehman
    Problem: In our production environment, the system occasionally delays the page response of an ASP.NET MVC application by up to 30 seconds or so, even though the same page renders in 2-3 seconds most of the time. This happens randomly with any arbitrary page, and with both GET and POST requests. For example, the log files indicate the system took 15 seconds to complete a request for a jQuery script file, or 10 seconds for another small CSS file.

    Similar problems: Random Slow Downs

    Production environment:

    - Windows Server 2008 - Standard (32-bit) - app pool running in integrated mode
    - ASP.NET MVC 1.0

    What we have tried / observed:

    - Moved the application to a stand-alone web server, but it didn't help.
    - We never noticed the same issue on the server for any ASP.NET application.
    - App pool settings are fine. No abrupt recycles/shutdowns.
    - No CPU spikes or memory problems.
    - No delays due to SQL queries or the like.
    - It seems as if something is causing a delay along the HTTP pipeline, or the worker process is seeing the request late.

    Looking for other suggestions. -- Thanks

    Read the article

  • Thin permissions in etc folder (Ubuntu)

    - by Apollo
    I am working on a RoR server setup that uses Thin and Nginx. It works fine, but only if I manually create the folder /etc/thin and set its permissions to 777 in order to use the command below:

        thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production

    If I don't set it to 777, I get this error:

        me@UbuntuRails:/etc$ thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production
        /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `initialize': Permission denied - /etc/thin/testapp.yml (Errno::EACCES)
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `open'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `config'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:187:in `run_command'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:152:in `run!'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/bin/thin:6
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `load'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `eval'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14

    I don't like setting this folder to 777; it sounds like a rubbish workaround. I run everything from an admin user account, not root. RVM runs from my admin user and gem only works under my admin user as well. If I sudo the command, nothing happens because root doesn't "know" thin. What is the correct way to handle this? Thanks!
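    The alternative I'm leaning toward (a sketch; the user/group name is a placeholder for my admin account) is to keep normal permissions but make my deploy user own /etc/thin, instead of opening it up to everyone:

        sudo mkdir -p /etc/thin
        sudo chown adminuser:adminuser /etc/thin
        sudo chmod 755 /etc/thin
        # then, as adminuser:
        thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production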

    Read the article
