Search Results

Search found 5413 results on 217 pages for 'git pull'.

Page 189 of 217

  • PHP RegEx: How to Strip Whitespace Between Two Strings

    - by roydukkey
    I have been trying to write a regex that will remove whitespace following a semicolon (';') when it occurs between an open and a close curly brace ('{','}'). I've gotten somewhere but haven't been able to pull it off. Here's what I've got:

        <?php
        $output = '@import url("/home/style/nav.css"); body{color:#777; background:#222 url("/home/style/nav.css") top center no-repeat; line-height:23px; font-family:Arial,Times,serif; font-size:13px}';
        $output = preg_replace("#({.*;) \s* (.*[^;]})#x", "$1$2", $output);
        ?>

    The $output should be as follows. Also, notice that the first semicolon in the string is still followed by whitespace, as it should be, since it sits outside the braces.

        <?php
        $output = '@import url("/home/style/nav.css"); body{color:#777;background:#222 url("/home/style/nav.css") top center no-repeat;line-height:23px;font-family:Arial,Times,serif;font-size:13px}';
        ?>

    Thanks in advance to anyone willing to give it a shot.
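
    One way to get there, sketched in Python rather than PHP since the question is really about the regex strategy (the CSS string below is a shortened version of the example above): match each brace-delimited block first, then collapse the whitespace after semicolons only inside that match. In PHP, the same two patterns would plug into preg_replace_callback.

        import re

        css = ('@import url("/home/style/nav.css"); '
               'body{color:#777; background:#222; line-height:23px; font-size:13px}')

        # Match each {...} block, then strip whitespace that follows a ';' inside it.
        cleaned = re.sub(r'\{[^}]*\}',
                         lambda block: re.sub(r';\s+', ';', block.group(0)),
                         css)

        print(cleaned)  # the '; ' after the @import keeps its space; those inside {} do not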

    Read the article

  • How to suppress jqgrid from initially loading data?

    - by Steve Johnston
    I have a page that has a couple of jqGrids on it, along with a few other fields. I want to make one AJAX call myself that pulls back a JSON object that has the data that should be used to fill the entire page. So, I would like to make the call myself, populate the "other fields" and then pull a couple of collections off of the main JSON object that was returned and populate each of the jqGrids with those collections "manually". I have this much working, but I can't get jqGrid to stop attempting to make an AJAX request itself. Shouldn't there be a way to tell jqGrid to NOT attempt an AJAX call when it is initialized? I found a similar question asked here: http://stackoverflow.com/questions/1850159/how-to-suppress-jqgrid-from-initially-loading-data But I don't have the option that solved it for the poster. It seems pretty logical to me that some people may want to use this plugin without having the table attempt to get its own data upon initialization. Am I missing an option somewhere in the documentation (wiki - options)? Thanks.

    Read the article

  • How to convert Unicode strings (\u00e2, etc) into NSString for display?

    - by karlbecker_com
    I am trying to support arbitrary Unicode from a variety of international users. They have already put a bunch of data into sqlite databases on their iPhones, and now I want to capture that data into a database, then send it back to their devices. Right now I am using a PHP page that sends data back and forth to a MySQL database on the internet. The data is saved in the MySQL database properly, but when it's sent back it comes out as escaped Unicode text, such as Frank\u00e2\u0080\u0099s iPad instead of just Frank's iPad, where the apostrophe should really be a curly apostrophe. The answer posted to another question indicates that there are no built-in Cocoa methods to convert the "\u00e2\u0080\u0099" portion of the Unicode string from the webserver to an NSString object. Is this correct? That seems really surprising (and scarily disappointing), since Cocoa definitely allows input in many different Unicode characters, and I need to support any arbitrary language that I have never heard of, and all of the possible characters. I save them to and from the local sqlite database just fine now, but once I send the data to a web server, then perhaps pull down different data, I want to ensure the data pulled from the web server is correctly formatted.
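
    What the feed shows is most likely double encoding rather than a missing Cocoa API: \u00e2\u0080\u0099 is the three-byte UTF-8 sequence for the curly apostrophe (U+2019) read back as three separate Latin-1 characters and then escaped by the server. A JSON parser will turn the \uXXXX escapes back into characters for you; the leftover mojibake is the part to fix, ideally on the PHP side by emitting proper UTF-8 (e.g. via json_encode on UTF-8 input). Purely to illustrate the round trip, here is the repair sketched in Python (an illustration of the encoding logic, not the iPhone code):

        # U+2019 (curly apostrophe) is E2 80 99 in UTF-8; read as Latin-1 those
        # bytes become the three characters \u00e2 \u0080 \u0099 seen in the feed.
        s = "Frank\u00e2\u0080\u0099s iPad"
        fixed = s.encode("latin-1").decode("utf-8")
        print(fixed)  # Frank's iPad, with a real curly apostrophe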

    Read the article

  • Capturing Drupal7 DOM content before page load for comparison

    - by ehime
    We have an MU (multisite) installation of Drupal 7 here at work, and are trying to temporarily hold back the swarm of bots we receive until we get a chance to load our content. I wrote a quick and dirty script to send 503 headers if we find a certain criterion via XPath (this can also be done as a strpos/preg_match if the DOM is not well formed). In order to get the ball rolling, though, I need to figure out how to either

    A) hijack the Drupal 7 bootstrap and pull all content through the filter below, or
    B) ob_flush the content through the filter before it is delivered.

    The issue I am having is figuring out exactly where I can catch the content. I thought that index.php in Drupal 7 would be the suspect, but I'm a little confused as to where or how I should capture the contents. Here's the script, and hopefully someone can point me in the right direction.

        //error_reporting(-1);

        /* start query */
        $dom = new DOMDocument;
        $dom->preserveWhiteSpace = false;
        $dom->Load($_SERVER['PHP_SELF']);
        $xpath = new DOMXPath($dom);

        // if this exists we aren't ready to be read by bots
        $query = $xpath->query(".//*[@id='block-views-about-this-site-block']/div/div/div");
        // or $query = 'klat-badge';  // if this is a string, not DOM
        /* end query */

        if (strpos($query) !== false) {
            // require banlist
            require('botlist.php');
            $str = strtolower('/'.implode('|', array_unique($list)).'/i');

            if (preg_match($str, strtolower($_SERVER['HTTP_USER_AGENT']))) {
                // so tell bots we're broken
                header('HTTP/1.1 503 Service Temporarily Unavailable');
                header('Status: 503 Service Temporarily Unavailable');
                exit;
            }
        }

    Read the article

  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed but, without the proxy sending a message to the Silverlight application, it executes the receive completed handler with an empty buffer ('\0's). Is there something I'm doing wrong? It is causing a major memory leak.

        this._rawBuffer = new Byte[this.BUFFER_SIZE];
        SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs();
        receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length);
        receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete);
        this._client.ReceiveAsync(receiveArgs);

        if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive)
        {
            // Read the current bytes from the stream buffer
            int bytesRecieved = this._client.ReceiveBufferSize;

            // If there are bytes to process else the connection is lost
            if (bytesRecieved > 0)
            {
                try
                {
                    //Find out what we just received
                    string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0));

                    //Take out any trailing empty characters from the message
                    messagePart = messagePart.Replace('\0'.ToString(), "");

                    //Concatenate our current message with any leftovers from previous receipts
                    string fullMessage = _theRest + messagePart;
                    int seperator;

                    //While the index of the seperator (LINE_END defined & initiated as private member)
                    while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0)
                    {
                        //Pull out the first message available (up to the seperator index)
                        string message = fullMessage.Substring(0, seperator);

                        //Queue up our new message
                        _messageQueue.Enqueue(message);

                        //Take out our line end character
                        fullMessage = fullMessage.Remove(0, seperator + 1);
                    }

                    //Save whatever was NOT a full message to the private variable used to store the rest
                    _theRest = fullMessage;

                    //Empty the queue of messages if there are any
                    while (this._messageQueue.Count > 0) { ... }
                }
                catch (Exception e)
                {
                    throw e;
                }

                // Wait for a new message
                if (this._isClosing != true)
                    Receive();
            }
        }

    Thanks in advance.

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create feature branches of the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up from the Trunk as needed to stay integrated as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile. My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete the validation, regression testing and configuration work before pushing to the Trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push-back on the Trunk validation. The argument is that the developers can merge the code and do not need the QA validation steps because they already completed that work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution, instead of a build and regression testing, is to have the developer diff the feature branch and the newly merged Trunk; that process, in their mind, would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches? Thanks for any input. LoneCM

    Read the article

  • Choose a XML node in SQL Server based on max value of a child element

    - by Jay
    I am trying to select, from a SQL Server 2005 XML datatype, some values based on the max date located in a child node. I have multiple rows with XML similar to the following stored in a field in SQL Server:

        <user>
          <name>Joe</name>
          <token>
            <id>ABC123</id>
            <endDate>2013-06-16 18:48:50.111</endDate>
          </token>
          <token>
            <id>XYX456</id>
            <endDate>2014-01-01 18:48:50.111</endDate>
          </token>
        </user>

    I want to perform a select from this XML column that determines the max date within the token elements and returns a row similar to the result below for each record:

        Joe    XYX456    2014-01-01 18:48:50.111

    I have tried to find a max function for XPath that would allow me to select the correct token element, but I couldn't find one that would work. I also tried to use the SQL MAX function, but I wasn't able to get it working with that method either. If I only have a single token it of course works fine, but when I have more than one I get a NULL, most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element, but I haven't found a way to do that. Here is an example of the one that works when I only have a single token:

        SELECT
            XMLCOL.query('user/name').value('.', 'NVARCHAR(20)') as name,
            XMLCOL.query('user/token/id').value('.', 'NVARCHAR(20)') as id,
            XMLCOL.query('user/token/endDate').value('xs:datetime(.)', 'DATETIME') as endDate
        FROM MYTABLE

    Read the article

  • Postback Removing Styling from Page

    - by Roy
    Hi, currently I've created an ASP.NET page that has a dropdown control with AutoPostBack set to true. I've also added color backgrounds for individual list items. Whenever an item is selected in the dropdown control, the styling is completely removed from all of the list items. How can I prevent this from happening? I need the postback to pull data based on the dropdown item that is selected. Here is my code.

    aspx file:

        <asp:DropDownList ID="EmpDropDown" AutoPostBack="True"
            OnSelectedIndexChanged="EmpDropDown_SelectedIndexChanged" runat="server">
        </asp:DropDownList>
        <asp:TextBox ID="MessageTextBox" TextMode="MultiLine" Width="550" Height="100px" runat="server"></asp:TextBox>

    aspx.cs code-behind:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                GetEmpList();
            }
        }

        protected void EmpDropDown_SelectedIndexChanged(object sender, EventArgs e)
        {
            GetEmpDetails();
        }

        private void GetEmpList()
        {
            SqlDataReader dr = ToolsLayer.GetEmpList();
            int currentIndex = 0;
            while (dr.Read())
            {
                EmpDropDown.Items.Add(new ListItem(dr["Title"].ToString(), dr["EmpKey"].ToString()));
                if (dr["Status"].ToString() == "disabled")
                {
                    EmpDropDown.Items[currentIndex].Attributes.Add("style", "background-color:red;");
                }
                currentIndex++;
            }
            dr.Close();
        }

        private void GetEmpDetails()
        {
            SqlDataReader dr = ToolsLayer.GetEmpDetails(EmpDropDown.SelectedValue);
            while (dr.Read())
            {
                MessageTextBox.Text = dr["Message"].ToString();
            }
            dr.Close();
        }

    Thank you.

    Read the article

  • Organizing Eager Queries in an ObjectContext

    - by Nix
    I am messing around with Entity Framework 3.5 SP1 and I am trying to find a cleaner way to do the below. I have an EF model and I am adding some eagerly loaded entities, and I want them all to reside in an "Eager" property on the context. We originally were just changing the entity set name, but it seems a lot cleaner to use a property and keep the entity set name intact. Example:

        Context
          - EntityType
          - AnotherType
          - Eager (all of these would have .Includes to pull in all associated tables)
              - EntityType
              - AnotherType

    Currently I am using composition, but I feel like there is an easier way to do what I want.

        namespace Entities
        {
            public partial class TestObjectContext
            {
                public EagerExtensions Eager { get; set; }

                public TestObjectContext()
                {
                    Eager = new EagerExtensions(this);
                }
            }

            public partial class EagerExtensions
            {
                TestObjectContext context;

                public EagerExtensions(TestObjectContext _context)
                {
                    context = _context;
                }

                public IQueryable<TestEntity> TestEntity
                {
                    get
                    {
                        return context.TestEntity
                            .Include("TestEntityType")
                            .Include("Test.Attached.AttachedType")
                            .AsQueryable();
                    }
                }
            }
        }

        public class Tester
        {
            public void ShowHowIWantIt()
            {
                TestObjectContext context = new TestObjectContext();
                var query = from a in context.Eager.TestEntity
                            select a;
            }
        }

    Read the article

  • How can a Win32 app plugin load its DLLs from its own directory?

    - by Jean-Denis Muys
    My code is a plugin for a specific application, written in C++ using Visual Studio 8. It uses two DLLs from an external provider. Unfortunately, my plugin fails to start because the DLLs are not found (I put them in the same directory as the plugin itself). When I manually move or copy the DLLs to the host application directory, the plugin loads fine. This moving was deemed unacceptably cumbersome for the end user, and I am looking for a way for my plugin to load its DLLs transparently. What can I do? Relevant details:

    - The host application's plugins are located in a directory mandated by the host application. That directory is not in the DLL search path and I don't control it.
    - The plugin is itself packaged as a subdirectory of the plugin directory, holding the plugin code itself, but also any resources associated with the plugin (e.g. images, configuration files…). I control what's inside that subdirectory, called a "bundle", but not where it's located.
    - The common plugin installation idiom for that application is for the end user to copy the plugin bundle to the plugin directory.
    - This plugin is a port from the Macintosh version of the plugin. On the Mac there is no issue, because each binary contains its own dynamic library search path, which I set as I needed to for my plugin binary. Setting that on the Mac simply involves a project setting in the Xcode IDE. This is why I would hope for something similar in Visual Studio, but I could not find anything relevant. Moreover, Visual Studio's help was anything but, and neither was Google.

    A possible workaround would be for my code to explicitly tell Windows where to find the DLLs, but I don't know how, and in any case, since my code is not even started, it hasn't got the opportunity to do so. As a Mac developer, I realize that I may be asking for something very elementary. If such is the case, I apologize, but I have run out of hair to pull out.

    Read the article

  • Simplest PHP Routing framework .. ?

    - by David
    I'm looking for the simplest implementation of a routing framework in PHP, in a typical PHP environment (running on Apache, or maybe nginx). It's the implementation itself I'm mostly interested in, and how you'd accomplish it. I'm thinking it should handle URLs with the minimal rewriting possible (is it really a good idea to have the same entry point for all dynamic requests?), and it should not mess with the query string, so I should still be able to fetch GET params with $_GET['var'] as you'd usually do. So far I have only come across .htaccess solutions that put everything through an index.php, which is sort of okay. Not sure if there are other ways of doing it. How would you "attach" which URLs map to which controllers, and the relation between them? I've seen different styles. One huge array, with regular expressions and other stuff, to contain the mapping. The one I think I like best is where each controller declares what map it has, and thereby you won't have one huge "global" map, but a lot of small ones, each neatly separated. So you'd have something like:

        class Root {
            public $map = array(
                'startpage' => 'ControllerStartPage'
            );
        }

        class ControllerStartPage {
            public $map = array(
                'welcome' => 'WelcomeControllerPage'
            );
        }

        // Etc ...

    Where:

        'http://myapp/'                        // maps to the Root class
        'http://myapp/startpage'               // maps to the ControllerStartPage class
        'http://myapp/startpage/welcome'       // maps to the WelcomeControllerPage class
        'http://myapp/startpage/?hello=world'  // should of course have $_GET['hello'] == 'world'

    What do you think? Do you use anything yourself, or have any ideas? I'm not interested in huge frameworks already solving this problem, but in the smallest possible implementation you could think of. I'm having a hard time coming up with a solution satisfying enough for my own taste. There must be something pleasing out there that handles a sane bootstrapping process of a PHP application without trying to pull a big magic hat over your head and force you to use "their way", or the highway! ;)
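
    For what it's worth, the per-controller map idea can be prototyped in a handful of lines. Here is a rough sketch of the dispatch walk it implies, written in Python rather than PHP (the class names and the missing-route behaviour are assumptions for illustration; the query string is deliberately ignored so $_GET-style access stays untouched):

        class WelcomeControllerPage:
            map = {}

        class ControllerStartPage:
            map = {'welcome': WelcomeControllerPage}

        class Root:
            map = {'startpage': ControllerStartPage}

        def dispatch(path):
            # Walk the URL segments, letting each controller's own map name the next one.
            controller = Root
            for segment in filter(None, path.split('?')[0].split('/')):
                controller = controller.map.get(segment)
                if controller is None:
                    raise LookupError('no route for ' + segment)  # i.e. a 404
            return controller()

        print(dispatch('/startpage/welcome'))  # -> a WelcomeControllerPage instance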

    Read the article

  • OAuth problem (App Engine)

    - by portoalet
    Hi, I am trying to pull a user's documents data from Google Docs using OAuth, but I cannot understand how to do it:

    - What is the purpose of oauth_verifier?
    - How do I get the access token secret?
    - If I try to use DocsService below, I get a "server error".
    - Is there a clear tutorial for this? I cannot find any at the moment.

        String oauth_verifier = req.getParameter("oauth_verifier");
        String oauth_token = req.getParameter("oauth_token");
        String oauthtokensecret = req.getParameter("oauth_token_secret");

        GoogleOAuthParameters oauthparam = new GoogleOAuthParameters();
        oauthparam.setOAuthConsumerKey("consumer key");
        oauthparam.setOAuthConsumerSecret("secret");
        oauthparam.setOAuthToken(oauth_token);
        oauthparam.setOAuthTokenSecret(oauthtokensecret);
        oauthparam.setOAuthVerifier(oauth_verifier);

        OAuthHmacSha1Signer signer = new OAuthHmacSha1Signer();
        GoogleOAuthHelper oauthhelper = new GoogleOAuthHelper(signer);

        String accesstoken = "";
        String accesstokensecret = "";
        try {
            oauthhelper.getUnauthorizedRequestToken(oauthparam);
            accesstoken = oauthhelper.getAccessToken(oauthparam);
            accesstokensecret = oauthparam.getOAuthTokenSecret();
            // DocsService client = new DocsService("yourCompany-YourAppName-v1");
            ...

    Read the article

  • Dynamic Method Creation

    - by TJMonk15
    So, I have been trying to research this all morning and have had no luck. I am trying to find a way to dynamically create a method/delegate/lambda that returns a new instance of a certain class (not known until runtime) that inherits from a certain base class. I can guarantee the following about the unknown/dynamic class:

    - It will always inherit from one known class (Row).
    - It will have at least 2 constructors (one accepting a long, and one accepting an IDataRecord).

    I plan on doing the following:

    1. Finding all classes that have a certain attribute on them
    2. Creating a delegate/method/lambda/whatever that creates a new instance of the class
    3. Storing the delegate/whatever along with some properties in a struct/class
    4. Inserting the struct into a hashtable
    5. When needed, pulling the info out of the hashtable and calling the delegate/whatever to get a new instance of the class and returning it/adding it to a list/etc.

    I need help only with #2 above! I have no idea where to start. I really just need some reference material to get me started, or some keywords to throw into Google. This is for a compact, simple-to-use ORM for our office here. I understand the above is not simple, but once working, it should make maintaining the code incredibly simple. Please let me know if you need any more info! And thanks in advance! :)

    Read the article

  • TFS: How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one in C7 on RB, and ones in C9 and C10 on trunk. So the history for my changed file looks like this:

        RB:    C5 -> C7
        Trunk: C3 -> C9 -> C10

    When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1 (OriginalFile) | %3 (BaseFile) | %2 (ModifiedFile), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a client.

    Read the article

  • Cross compiling from MinGW on Fedora 12 to Windows - console window?

    - by elcuco
    After reading this article http://lukast.mediablog.sk/log/?p=155 I decided to use MinGW on Linux to compile Windows applications. This means I can compile, test, debug and release directly from Linux. I hacked together this build script, which will cross-compile the application and even package it in a ZIP file. Note that I am using out-of-source builds for QMake (did you even know this is supported? wow...). Also note that the script will pull in the needed DLLs automagically. Here is the script for you all internets to use and abuse:

        #! /bin/sh
        set -x
        set -e

        VERSION=0.1
        PRO_FILE=blabla.pro
        BUILD_DIR=mingw_build
        DIST_DIR=blabla-$VERSION-win32

        # clean up old shite
        rm -fr $BUILD_DIR
        mkdir $BUILD_DIR
        cd $BUILD_DIR

        # start building
        QMAKESPEC=fedora-win32-cross qmake-qt4 QT_LIBINFIX=4 config=\"release\ quiet\" ../$PRO_FILE
        #qmake-qt4 -spec fedora-win32-cross
        make

        DLLS=`i686-pc-mingw32-objdump -p release/*.exe | grep dll | awk '{print $3}'`
        for i in $DLLS mingwm10.dll ; do
            f=/usr/i686-pc-mingw32/sys-root/mingw/bin/$i
            if [ ! -f $f ]; then continue; fi
            cp -av $f release
        done

        mkdir -p $DIST_DIR
        mv release/*.exe $DIST_DIR
        mv release/*.dll $DIST_DIR
        zip -r ../$DIST_DIR.zip $DIST_DIR

    The compiled binary works on the Windows 7 machine I tested. Now to the questions:

    1. When I execute the application on Windows, the theme is not the Windows 7 theme. I assume I am missing a style module; I am not really sure yet.
    2. The application gets a console window for some reason.

    The second point (the console window) is critical. How can I remove this background window? Please note that the extra config lines are not working for me; what am I missing there?

    Read the article

  • Getting registry information using Python

    - by Willy
    I am trying to pull registry info from many servers and put them all into one txt file. I got the code working fine in a .bat file. I hear that there is a way simpler way to do this in Python. I am intrigued and delighted to hear this. Can anyone help finish my code?

    My working bat file:

        echo rfsqlcl01app >> foo.txt
        reg query "\\rfsqlcl01app\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGSQL01 >> foo.txt
        reg query "\\GLADGSQL01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGWEB01 >> foo.txt
        reg query "\\GLADGWEB01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo PAPERVISION >> foo.txt

    My Python code structure:

        >>> server_list = open('server_test.txt', 'r')
        >>> for line in server_list:
                print r'reg query \\%s\blah\blah\blah' % line.strip()

        reg query \\foo\blah\blah\blah
        reg query \\moo\blah\blah\blah
        reg query \\boo\blah\blah\blah
        >>> server_list.close()
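
    A minimal way to finish that loop, as a sketch: assume server_test.txt holds one host name per line, reuse the same McShield key as the .bat file, and shell out to reg query with subprocess, writing each result to foo.txt. (subprocess.run with capture_output needs Python 3.7+; on older Pythons, subprocess.Popen plays the same role.)

        import subprocess

        KEY = (r"\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD"
               r"\Shared Components\On Access Scanner\McShield\Configuration\Default")

        with open("server_test.txt") as server_list, open("foo.txt", "w") as out:
            for line in server_list:
                server = line.strip()
                if not server:
                    continue
                out.write(server + "\n")
                # Same command the .bat file runs, with its output captured
                # instead of shell-redirected.
                result = subprocess.run(["reg", "query", r"\\" + server + KEY],
                                        capture_output=True, text=True)
                out.write(result.stdout)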

    Read the article

  • Paperclip failing to upload on specific scaffold, yet works on others

    - by Saifis
    I know there are tons of questions about Paperclip, but I failed to find the answer to my problem. I know it's probably just something simple, but I'm running out of hair to pull out. I have Paperclip working on other parts of my project with no problem; however, a certain scaffold fails to upload: all the attributes of the uploaded file are nil. Here is the relevant information.

    Model:

        has_attached_file :foo,
            :styles => { :thumb => "140x140>" },
            :url => "/data/:id/:style/:basename.:extension",
            :path => ":rails_root/public/data/:id/:style/:basename.:extension"

    View:

        <% form_for(@bar, :html => { :multipart => true }) do |f| %>
          <%= f.error_messages %>
          ----------
          <li><%= f.label :top %>
              <%= f.file_field :foo %></li>
          ----------
          <ul><%= f.submit "Save" %></ul>
        <% end %>

    Also, comparing the logs to the parts that work, the :foo attribute seems to be passing different values than in the ones that work. In the logs, when the Paperclip upload works, it looks like this:

        "image"=>#<File:/var/folders/M5/M5HEb+WhFxmqNDGH5s-pNE+++TI/-Tmp-/RackMultipart20100512-1302-5e2e6e-0>

    When it does not, it seems to pass the file name directly:

        "foo"=>"foo_image.png"

    I am developing locally on Mac OS X using local Rails and Ruby libs.

    Read the article

  • Configuring an offscreen framebuffer fails the completeness test

    - by randallmeadows
    I'm trying to create an offscreen framebuffer into which I can do some OpenGL drawing, and then pull the bits out manually. I'm following the instructions here, but in step 4, status is 0 instead of GL_FRAMEBUFFER_COMPLETE_OES. If I insert a call to glGetError() after every gl call, it returns 0 (GL_NO_ERROR) every time. But the values of variables do not change during the calls. E.g., with

        GLuint framebuffer;
        glGenFramebuffersOES(1, &framebuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    the value of framebuffer does not get altered at all (even when I change it to some arbitrary value and re-execute). It's almost like the gl calls are not actually being made. I'm linking against the OpenGLES framework, and get no compile, link, or run-time errors (or warnings). I'm at a loss as to what to do to fix this. I've tried continuing on with my drawing, but I do not see the results I expect, and at this point I can't tell whether it's because of the above error or the conversion to a UIImage.

    Read the article

  • How to post a request using node.js

    - by Mr JSON
    I am trying to post some JSON to a URL. I saw various other questions about this on Stack Overflow, but none of them seemed to be clear or work. This is how far I got; I modified the example in the API docs:

        var http = require('http');
        var google = http.createClient(80, 'server');
        var request = google.request('POST', '/get_stuff',
            {'host': 'server', 'content-type': 'application/json'});

        request.write(JSON.stringify(some_json), encoding='utf8'); // possibly need to escape as well?
        request.end();

        request.on('response', function (response) {
          console.log('STATUS: ' + response.statusCode);
          console.log('HEADERS: ' + JSON.stringify(response.headers));
          response.setEncoding('utf8');
          response.on('data', function (chunk) {
            console.log('BODY: ' + chunk);
          });
        });

    When I post this to the server I get an error telling me that it's not of the JSON format or that it's not UTF-8, which they should be. I tried to pull the request URL but it is null. I am just starting with node.js, so please be nice.

    Read the article

  • Google app engine: Poor Performance with JDO + Datastore

    - by Bosh
    I have a simple data model that includes

    - USERS: store basic information (key, name, phone # etc)
    - RELATIONS: describe, e.g., a friendship between two users (supplying a relationship_type + two user keys)

    I'm getting very poor performance, for instance, if I try to print the first names of all of a user's friends. Say the user has 500 friends: I can fetch the list of friend user_ids very easily in a single query. But then, to pull out first names, I have to do 500 back-and-forth trips to the Datastore, each of which seems to take on the order of 30 ms. If this were SQL, I'd just do a JOIN and get the answer out fast. I understand there are rudimentary facilities for performing joins across un-owned relations in a relaxed implementation of JDO (as described at http://gae-java-persistence.blogspot.com) but they sound experimental and non-standard (e.g. my code won't work in any other JDO implementation). Is this really my best bet? Otherwise, how do people extract satisfactory performance from JDO/Datastore in this kind of (very common) situation? -Bosh
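
    Not a JDO answer as such, but the usual cure for the 500 round trips is a single batch get by key: the low-level Java datastore API has a get() overload that accepts a collection of keys, and JDO has getObjectsById() for the same purpose. Purely to show the shape of the call, here is the idea sketched with the App Engine Python API (the friend_keys list and the first_name property are assumptions for illustration):

        from google.appengine.ext import db

        # friend_keys: the 500 friend keys already fetched in the single RELATIONS query.
        friends = db.get(friend_keys)  # one batch round trip instead of 500
        first_names = [f.first_name for f in friends if f is not None]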

    Read the article

  • Parsing a JSON feed from YQL using jQuery

    - by Keith
    I am using YQL's query.multi to grab multiple feeds so I can parse a single JSON feed with jQuery and reduce the number of connections I'm making. In order to parse a single feed, I need to be able to check the type of result (photo, item, entry, etc) so I can pull out items in specific ways. Because of the way the items are nested within the JSON feed, I'm not sure the best way to loop through the results and check the type and then loop through the items to display them. Here is a YQL (http://developer.yahoo.com/yql/console/) query.multi example and you can see three different result types (entry, photo, and item) and then the items nested within them: select * from query.multi where queries= "select * from twitter.user.timeline where id='twitter'; select * from flickr.photos.search where has_geo='true' and text='san francisco'; select * from delicious.feeds.popular" or here is the JSON feed itself: http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20query.multi%20where%20queries%3D%22select%20*%20from%20flickr.photos.search%20where%20user_id%3D'23433895%40N00'%3Bselect%20*%20from%20delicious.feeds%20where%20username%3D'keith.muth'%3Bselect%20*%20from%20twitter.user.timeline%20where%20id%3D'keithmuth'%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=

    Read the article

  • Using jQuery with MySQL and PHP

    - by JPro
    I am new to using jQuery with MySQL and PHP. I am using the following code to pull the data, but no data or error is displayed.

    jQuery:

        <html>
        <head>
        <script>
        function doAjaxPost() {
          // get the form values
          var field_a = $("#field_a").val();
          $("#loadthisimage").show();
          $.ajax({
            type: "POST",
            url: "serverscript.php",
            data: "ID=" + field_a,
            success: function(resp) {
              $("#resposnse").html(resp);
              $("#loadthisimage").hide();
            },
            error: function(e) {
              alert('Error: ' + e);
            }
          });
        }
        </script>
        </head>
        <body>
        <select id="field_a">
          <option value="data_1">data_1</option>
          <option value="data_2">data_2</option>
        </select>
        <input type="button" value="Ajax Request" onClick="doAjaxPost()">
        <a href="#" onClick="doAjaxPost()">Here</a>
        </form>
        <div id="resposnse">
          <img src="ajax-loader.gif" style="display:none" id="loadthisimage">
        </div>
        </body>

    And now serverscript.php:

        <?php
        if (isset($_POST['ID'])) {
            $nm = $_POST['ID'];
            echo $nm;

            // insert your code here for the display.
            mysql_connect("localhost", "root", "pop") or die(mysql_error());
            mysql_select_db("JPro") or die(mysql_error());

            $result1 = mysql_query("select Name from results where ID = \"$nm\" ") or die(mysql_error());

            // store the record of the "example" table into $row
            while ($row1 = mysql_fetch_array($result1)) {
                $tc = $row1['Name'];
                echo $tc;
            }
        }
        ?>

    Read the article

  • C#: Forcing a clean run in a long-running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL db table. Once it has done its bit, it then starts again, selecting another 2 columns. I would pull the whole lot in one go, but that presents a whole other set of challenges. My problem is that the table contains a large amount of data (some 3 million rows or so), which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column then one of the other columns and running each value in the column through a validation pipeline where the results are stored in another database. My problem is that when the reader hits the end of handling one column, I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through. Without a full garbage collect I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out, but unfortunately that isn't happening for some reason.

    Read the article

  • Grabbing a substring while scraping with Python 2.6

    - by Diego
    Hey, can someone help with the following? I'm trying to scrape a site that has the following information. I need to pull just the number after the </strong> tag.

        [<li><strong>ISBN-13:</strong> 9780375853401</li>, <li><strong>Pub. Date: </strong> 05/11/2010</li>]
        [<li><strong>UPC:</strong> 490355000372</li>, <li><strong>Catalog No:</strong> 15024/25</li>, <li><strong>Label:</strong> CAMERATA</li>]

    Here's a piece of the code I've been using to grab the above data with mechanize and BeautifulSoup. I'm stuck here, as it won't let me use the find() function on a list:

        br_results = mechanize.urlopen(br_results)
        html = br_results.read()
        soup = BeautifulSoup(html)
        local_links = soup.findAll("a", {"class" : "down-arrow csa"})
        upc_code = soup.findAll("ul", {"class" : "bc-meta3"})
        for upc in upc_code:
            upc_text = upc.contents.contents
            print upc_text
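
    Since findAll() returns a list of tags, one approach is to walk each <li> inside each matched <ul> and read the text node that follows its <strong>. A minimal sketch against the bc-meta3 markup shown above (BeautifulSoup 3 style, to match the Python 2.6 code in the question):

        for ul in soup.findAll("ul", {"class": "bc-meta3"}):
            for li in ul.findAll("li"):
                strong = li.find("strong")
                # The value sits in the text node right after </strong>.
                value = strong.nextSibling.strip()
                print strong.string, value   # e.g. ISBN-13: 9780375853401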

    Read the article

  • How can I create Wikipedia's footnote highlighting solely with jQuery

    - by andrew.bachman
    I would like to duplicate Wikipedia's footnote highlighting, where clicking an in-text citation highlights the corresponding footnote, solely with jQuery and CSS classes. I found a webpage that describes how to do so with CSS3, plus a JavaScript solution for IE. I would like, however, to do it solely with jQuery, as the site I'm working on already has a bunch of jQuery elements. I've come up with a list of steps for the process:

    1. In-text citation clicked.
    2. Replace the highlight class with the standard footnote class on the <p> tag of any footnotes that are already highlighted.
    3. Add the highlight class to the appropriate footnote.
    4. Use jQuery to scroll down the page to the appropriate footnote.

    I've come up with some jQuery so far, but I'm extremely new to it, relying heavily on tutorials and plugins to this point. Here is what I have, with some plain English for the parts I haven't figured out yet:

        $('.inlineCite').click(function() {
          $('.footnoteHighlight').removeClass('footnoteHighlight').addClass('footnote');
          // add highlight to id of highlightScroll
          // scroll to footnote with matching id
        });

    If I had a method to pass a part of the selector into the function and turn it into a variable, I believe I could pull it off. Most likely one of you gurus will whip something up that puts anything I have done to shame. Any help will be greatly appreciated, so thank you in advance. Cheers.

    Read the article
