Search Results

Search found 4962 results on 199 pages for 'parse'.

Page 113 of 199

  • Fetching custom Authorization header from incoming PHP request

    - by jpatokal
    So I'm trying to parse an incoming request in PHP which has the following header set: Authorization: Custom Username. Simple question: how on earth do I get my hands on it? If it were Authorization: Basic, I could get the username from $_SERVER["PHP_AUTH_USER"]. If it were X-Custom-Authorization: Username, I could get the username from $_SERVER["HTTP_X_CUSTOM_AUTHORIZATION"]. But neither of these is set for a custom Authorization scheme: var_dump($_SERVER) reveals no mention of the header (in particular, AUTH_TYPE is missing), and PHP 5 functions like get_headers() only work on responses to outgoing requests. I'm running PHP 5 on Apache with an out-of-the-box Ubuntu install.
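
    A minimal sketch of one way around this, assuming the Apache SAPI: apache_request_headers() returns the raw request headers, including an Authorization header that never makes it into $_SERVER. Splitting on "Custom Username" below is an assumption based on the header format quoted above.

        <?php
        // Raw headers straight from Apache, bypassing $_SERVER entirely.
        $headers = function_exists('apache_request_headers')
            ? apache_request_headers()
            : array();

        if (isset($headers['Authorization'])) {
            // e.g. "Custom Username" -> scheme "Custom", credential "Username"
            list($scheme, $username) = explode(' ', $headers['Authorization'], 2);
        }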

    Read the article

  • How do I handle a missing mandatory argument in Ruby OptionParser?

    - by Rob Jones
    In OptionParser I can make an option mandatory, but if I leave out that value it will take the name of any following option as the value, screwing up the rest of the command-line parsing. Here is a test case that echoes the values of the options:

        $ ./test_case.rb --input foo --output bar
        output bar
        input foo

    Now leave out the value for the first option:

        $ ./test_case.rb --input --output bar
        input --output

    Is there some way to prevent it taking another option name as a value? Thanks! Here is the test case code:

        #!/usr/bin/env ruby
        require 'optparse'

        files = Hash.new
        option_parser = OptionParser.new do |opts|
          opts.on('-i', '--input FILENAME', 'Input filename - required') do |filename|
            files[:input] = filename
          end
          opts.on('-o', '--output FILENAME', 'Output filename - required') do |filename|
            files[:output] = filename
          end
        end

        begin
          option_parser.parse!(ARGV)
        rescue OptionParser::ParseError
          $stderr.print "Error: #{$!}\n"  # interpolated: "Error: " + $! would raise TypeError
          exit
        end

        files.keys.each do |key|
          print "#{key} #{files[key]}\n"
        end
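
    One possible guard, sketched below (the check itself is an assumption, not part of the original post): after parsing, reject any mandatory value that is missing or looks like another switch, so "--input --output bar" fails fast instead of silently mis-parsing.

        # Run after option_parser.parse!(ARGV) in the test case above.
        [:input, :output].each do |name|
          value = files[name]
          if value.nil? || value.start_with?('-')
            $stderr.puts "Error: missing argument for --#{name}"
            exit 1
          end
        end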

    Read the article

  • How to get the changes on a branch in git

    - by Greg Hewgill
    What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is:

        git log $(git merge-base HEAD branch)..branch

    The documentation for git-diff indicates that "git diff A...B" is equivalent to "git diff $(git-merge-base A B) B". On the other hand, the documentation for git-rev-parse indicates that "r1...r2" is defined as "r1 r2 --not $(git merge-base --all r1 r2)". Why are these different? Note that "git diff HEAD...branch" gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this:

                  x---y---z---branch
                 /
        ---a---b---c---d---e---HEAD

    I would like to get a log containing commits x, y, z. "git diff HEAD...branch" gives these commits. However, "git log HEAD...branch" gives x, y, z, c, d, e.
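
    For reference, a side-by-side of the range notations as I read the docs, using the picture above: the two-dot form of git log is the one that matches the three-dot form of git diff.

        git log HEAD..branch     # two dots: reachable from branch, not from HEAD -> x, y, z
        git log HEAD...branch    # three dots: symmetric difference -> x, y, z, c, d, e
        git diff HEAD...branch   # three dots: diff from the merge base b -> changes in x, y, z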

    Read the article

  • Writing an app with Perl and Ruby?

    - by Jeff Erickson
    I am working on a project that is mostly Ruby on Rails. However, I need to generate and parse Excel files in this project (I know, I know...), so I've been using Perl's Spreadsheet::WriteExcel and Spreadsheet::ParseExcel, which work well. However, what is the best way to combine this use of Perl with the larger Ruby on Rails app? Is calling the Perl script with backticks the kosher way to go about this? It feels a little hacky to me, but if that is the only (or best) way, then that's what I'll do. I wanted to reach out and see if anyone else has some suggestions or advice. Thank you!
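
    A small sketch of a middle ground (the script path and arguments are hypothetical): shelling out with Open3 from Ruby's standard library instead of backticks, so stderr and the exit status are captured rather than silently lost.

        require 'open3'

        # Invoke the Perl converter as a child process.
        stdout, stderr, status = Open3.capture3('perl', 'script/make_excel.pl', input_path)
        raise "Excel export failed: #{stderr}" unless status.success?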

    Read the article

  • How to get a non-XML output using JDOM XSLTransformer?

    - by Neil McF
    Hello, I have an XML file which I'd like to transform into a non-XML (text) file based on an XSLT file. The code in both seems correct, and it works when testing manually, but I'm having a problem doing this programmatically. I'm using JDOM's XSLTransformer class to apply the XSLT to the XML, and it returns the result as a JDOM Document. The problem here is that I can't seem to access anything in the Document, as it is not a proper XML file, and I get a "java.lang.IllegalStateException: Root element not set" error. Is there a better way within Java to obtain a non-XML file as a result of XSLT?
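
    A sketch of one alternative (the file names are hypothetical): going through plain JAXP instead of JDOM's XSLTransformer. With <xsl:output method="text"/> in the stylesheet, a StreamResult captures the text output directly, so no result Document is ever built and there is no root element to miss.

        import java.io.File;
        import java.io.StringWriter;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class TextTransform {
            public static void main(String[] args) throws Exception {
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(new File("report.xslt")));
                StringWriter out = new StringWriter();
                t.transform(new StreamSource(new File("input.xml")),
                        new StreamResult(out));
                System.out.println(out);   // the non-XML (text) result
            }
        }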

    Read the article

  • LINQ to XML not able to add new elements

    - by Fore
    We save our XML in a "text" field in the database. So first I check whether any XML exists; if not, I create a new XDocument and fill it with the necessary XML, else I just add the new element. Code looks like this:

        XDocument doc = null;
        if (item.xmlString == null || item.xmlString == "")
        {
            doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"),
                new XElement("DataTalk",
                    new XAttribute(XNamespace.Xmlns + "xsi", "http://www.w3.org/2001/XMLSchema-instance"),
                    new XAttribute(XNamespace.Xmlns + "xsd", "http://www.w3.org/2001/XMLSchema"),
                    new XElement("Posts",
                        new XElement("TalkPost"))));
        }
        else
        {
            doc = XDocument.Parse(item.xmlString);
        }

    This is working all right to create the structure, but then the problem appears when I want to add a new TalkPost. I get an error saying "incorrectly structured document" from the following code when adding new elements:

        doc.Add(new XElement("TalkPost",
            new XElement("PostType", newDialog.PostType),
            new XElement("User", newDialog.User),
            new XElement("Customer", newDialog.Customer),
            new XElement("PostedDate", newDialog.PostDate),
            new XElement("Message", newDialog.Message)));
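
    The likely cause, sketched below: the XDocument already has its one allowed root element ("DataTalk"), so calling Add on the document itself tries to create a second root, which is what LINQ to XML reports as an incorrectly structured document. Adding to the Posts element targets the intended parent instead.

        // Sketch of the probable fix: append under Posts, not under the document.
        doc.Root.Element("Posts").Add(new XElement("TalkPost",
            new XElement("PostType", newDialog.PostType),
            new XElement("User", newDialog.User),
            new XElement("Customer", newDialog.Customer),
            new XElement("PostedDate", newDialog.PostDate),
            new XElement("Message", newDialog.Message)));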

    Read the article

  • writing a fast parser in python

    - by panzi
    I've written a hands-on, recursive, pure-Python parser for a file format (ARFF) we use in one lecture. Now running my exercise submission is awfully slow. Turns out by far the most time is spent in my parser. It's consuming a lot of CPU time; the HD is not the bottleneck. I wonder what performant ways there are to write a parser in Python? I'd rather not rewrite it in C. I tried to use Jython, but that decreased performance a lot! The files I parse are partially huge (> 150 MB) with very long lines. My current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea. After all, the submission deadline has not yet ended. But then, the focus in this exercise is not the parser. You can choose whatever language you want to use, and there already is a parser for Java.
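
    One direction that often helps, sketched under the assumption that the format is ARFF as stated: the data section of an ARFF file is essentially CSV, so after consuming the @-header lines, the C-implemented csv module can take over, avoiding per-character work in Python on those very long lines.

        import csv

        def parse_arff(path):
            """Sketch, not the poster's parser: header via str methods, data via csv."""
            attributes, data = [], []
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith('%'):
                        continue                      # blank line or comment
                    if line.lower().startswith('@attribute'):
                        attributes.append(line.split(None, 2)[1])
                    elif line.lower().startswith('@data'):
                        data.extend(csv.reader(f))    # rest of the file is CSV rows
                        break
            return attributes, data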

    Read the article

  • does PHP 5.3 change the way file_get_contents works?

    - by ceejayoz
    I'm having an odd issue with PHP's file_get_contents. In the past, file_get_contents on a remote file returned the text of that file regardless of the HTTP status code. If I hit an API and it sends back JSON error information with a status of 500, file_get_contents gives me that JSON (with no indication that an error code was encountered). I've just set up an Ubuntu 10.04 server, which is the first Ubuntu to ship PHP 5.3. Instead of giving me the JSON, PHP throws a warning when a 500 error is present. As a result, I can't parse the JSON and give a nice error message. It's nice that PHP is noticing there's an error in the remote file, but I need the JSON even (especially!) if there's a 500 error. There doesn't appear to be any way to switch this off. Has anyone encountered this? Any tips?
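
    A sketch of the usual workaround (the URL is a placeholder): the HTTP stream wrapper has an ignore_errors context option that makes file_get_contents return the response body even on a 4xx/5xx status, which restores the "give me the JSON anyway" behaviour.

        <?php
        $context = stream_context_create(array(
            'http' => array('ignore_errors' => true),
        ));
        $body = file_get_contents('http://api.example.com/endpoint', false, $context);
        $data = json_decode($body, true);   // the 500 body is now available to parse
        // $http_response_header still exposes the status line if needed.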

    Read the article

  • Entity Framework connection metadata extraction

    - by James
    Hi, I am using the Entity Framework POCO adapter, and since there are limitations to what Microsoft gives access to with regards to the metadata, I am manually extracting the information I need out of the XML. The only problem is I want to get the ssdl, msl, and csdl file names to load without having to directly check for the connection string node in app.config. In short, where in the ObjectContext/EntityConnection can I get access to these file names? Worst-case scenario, I need to get the connection name from the EntityConnection object, then load this from app.config, parse the string, and extract the file names myself. (But I obviously don't want to do that.) Thanks
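
    A possible shortcut, sketched below: EntityConnectionStringBuilder (System.Data.EntityClient) parses an EF connection string for you, and its Metadata property holds the csdl/ssdl/msl paths as a '|'-separated list, so no hand-parsing of app.config is needed.

        using System.Data.EntityClient;

        var builder = new EntityConnectionStringBuilder(entityConnection.ConnectionString);
        string[] metadataPaths = builder.Metadata.Split('|');
        // e.g. "res://*/Model.csdl", "res://*/Model.ssdl", "res://*/Model.msl"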

    Read the article

  • IsDouble check for string in VB.NET?

    - by James123
    I will get data in a DataTable, and I am going to iterate over it in a foreach. The DataTable will hold all types of data. Now I need to determine, for each item (string) in the DataTable, whether it is a double. How do I check IsDouble for a string? Ex: I have the string "21342.2121", which I need to convert to a Double. But sometimes the data will be "TextString", so I can't use Double.Parse(). How do I handle this?
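
    A sketch of the standard pattern in VB.NET (itemText stands for the current string from the DataTable): Double.TryParse attempts the conversion and reports success instead of throwing, so text such as "TextString" is handled without an exception.

        Dim value As Double
        If Double.TryParse(itemText, value) Then
            ' itemText held a valid double, e.g. "21342.2121" -> 21342.2121
        Else
            ' itemText was non-numeric text such as "TextString"; skip or log it
        End If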

    Read the article

  • Android - Using Camera Intent but not updating correctly?

    - by Tyler
    Hello - I am using an intent to capture a picture:

        Intent i = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        i.putExtra(android.provider.MediaStore.EXTRA_OUTPUT,
                Uri.fromFile(new File(Environment.getExternalStorageDirectory(), "test.jpg")));
        startActivityForResult(i, 2);

    And then once taken I do the following:

        sendBroadcast(new Intent(Intent.ACTION_MEDIA_MOUNTED,
                Uri.parse("file://" + Environment.getExternalStorageDirectory())));
        launchGallery();

    While the above seems to work the first time without issue, whenever I run through a second time (so test.jpg already exists) the image actually saves correctly to /sdcard/, but I am finding that the thumbnail does not update, and when the gallery loads it shows the previous test.jpg image! I was under the impression that sendBroadcast should update the thumbnails, but it doesn't appear to. Is there some other way to go about this and ensure that when I call my launchGallery() method the most recent image I just took appears? Thanks!
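
    A sketch of a narrower alternative to broadcasting ACTION_MEDIA_MOUNTED: MediaScannerConnection.scanFile re-scans just the one file, which gives the media provider a chance to refresh its cached thumbnail for test.jpg before the gallery opens.

        String path = new File(Environment.getExternalStorageDirectory(), "test.jpg")
                .getAbsolutePath();
        MediaScannerConnection.scanFile(this, new String[]{ path }, null,
                new MediaScannerConnection.OnScanCompletedListener() {
                    public void onScanCompleted(String scannedPath, Uri uri) {
                        // Scan finished; safe to show the gallery now.
                        runOnUiThread(new Runnable() {
                            public void run() { launchGallery(); }
                        });
                    }
                });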

    Read the article

  • Automating WebTrends analysis

    - by tridium
    Every week I access server logs processed by WebTrends (for about 7 profiles) and copy ad clickthrough and visitor information into Excel spreadsheets. A lot of it is just accessing certain sections, finding the right title, and then copying the unique visitor information. I tried using WebTrends' built-in query tool, but that is really poorly done (it only offers a drag-and-drop system, nothing text-based), and it caps both the number of parameters and the length of the queries. As far as I know, the tools in WebTrends are not suitable for my purpose of automating the entire web-metrics gathering process. I've gotten access to the raw server logs, but it seems redundant to parse those given that they are already being processed by WebTrends. To me it seems very scriptable, but how would I go about doing that? Is screen-scraping an option?

    Read the article

  • Creating a "less"-like console pager interface for pysqlite3 database

    - by Eric
    I would like to add some interactive capability to a Python CLI application I've written that stores data in a SQLite3 database. Currently, my app reads in a certain type of file, parses and analyzes it, puts the analysis data into the db, and spits the formatted records to stdout (which I generally pipe to a file). There are on the order of a million records in this file. Ideally, I would like to eliminate that text-file situation altogether and just loop after that "parse and analyze" part, displaying a screen's worth of records and allowing the user to page through them and enter some commands that will edit the records. The backend part I know how to do. Can anyone suggest a good starting point for creating that pager frontend, either directly in the console (like the pager "less"), through ncurses, or some other system?
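
    A minimal curses sketch of the pager loop (fetch_page is a hypothetical callback returning a list of formatted record strings for a given offset and count): draw one screenful, page with PgUp/PgDn, quit with "q", much like a stripped-down "less".

        import curses

        def pager(stdscr, fetch_page):
            offset = 0
            while True:
                height, width = stdscr.getmaxyx()
                stdscr.erase()
                for i, record in enumerate(fetch_page(offset, height - 1)):
                    stdscr.addnstr(i, 0, record, width - 1)
                stdscr.addnstr(height - 1, 0, ":", width - 1)   # status/command line
                key = stdscr.getch()
                if key == ord('q'):
                    break
                elif key == curses.KEY_NPAGE:
                    offset += height - 1
                elif key == curses.KEY_PPAGE:
                    offset = max(0, offset - (height - 1))

        # curses.wrapper(pager, fetch_page) handles terminal setup and teardown.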

    Read the article

  • Memory Issues When DOM Parsing A Large XML File on Android Devices

    - by tonyc
    Hey awesome SO users, I have an Android application that parses an XML file for users and displays the results in a much more mobile-friendly format. The app works great for most users, but some users have lots and lots of data, and the app crashes on them because it runs out of memory. Is there any way to have a DOM-style XML parser quit parsing data after a certain amount? I only need the first 30 or so elements, so stopping there would make the application much more efficient. I'd like to use a SAX or pull parser instead, but the XML I'm parsing is not valid, and I have no control over it. Unless anyone has some good SAX solutions that let me parse messy, invalid XML, I think DOM is the only way to go. Thanks for reading!
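
    For what it's worth, a sketch of the early-exit trick that only streaming parsers allow (and which assumes the feed can be made parseable, e.g. by a tidying pass first): throwing from a SAX handler aborts the parse, so only the first 30 elements are ever touched. The element name "user" and the limit are assumptions.

        class LimitedHandler extends org.xml.sax.helpers.DefaultHandler {
            private int seen = 0;

            @Override
            public void startElement(String uri, String localName, String qName,
                                     org.xml.sax.Attributes attributes)
                    throws org.xml.sax.SAXException {
                if ("user".equals(localName) && ++seen > 30) {
                    throw new org.xml.sax.SAXException("limit reached"); // stops parsing
                }
                // ... collect the fields needed for the first 30 users ...
            }
        }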

    Read the article

  • Question on dynamic URL parsing

    - by jerebear
    I see many, many sites that have URLs for individual pages such as

        http://www.mysite.com/articles/this-is-article-1
        http://www.mysite.com/galleries/575

    And they don't redirect, and they don't run slowly... I know how to parse URLs; that's easy enough. But in my mind, that seems slow and cumbersome on a dynamic site. As well, if the pages are all statically built (hence the custom URL), then that means all components of the page are static as well... (which would be bad). I'd love to hear some ideas about how this is typically accomplished.
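
    The common pattern, sketched here for Apache + PHP as one concrete assumption: a rewrite rule funnels every request that isn't a real file to a front controller, which splits the path and looks the slug up in the database, so the pages stay fully dynamic despite the static-looking URLs.

        # .htaccess: anything that is not an existing file goes to index.php
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?path=$1 [QSA,L]

        <?php
        // index.php: "/articles/this-is-article-1" -> ["articles", "this-is-article-1"]
        $path = isset($_GET['path']) ? $_GET['path'] : '';
        $segments = explode('/', trim($path, '/'));
        // dispatch on $segments[0]; look up the record by slug $segments[1]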

    Read the article

  • pthread_create from non-static member function

    - by Stanislav Palatnik
    This is somewhat similar to this: http://stackoverflow.com/questions/1151582/pthread-function-from-a-class. But the function that's getting called in the end references the this pointer, so it cannot be made static.

        void * Server::processRequest()
        {
            std::string tmp_request, outRequest;
            tmp_request = this->readData();
            outRequest = this->parse(tmp_request);
            this->writeReply(outRequest);
        }

        void * LaunchMemberFunction(void * obj)
        {
            return ((Server *)obj)->processRequest();
        }

    and then the pthread_create:

        pthread_create(&handler[tcount], &attr, (void*)LaunchMemberFunction, (void*)&SServer);

    errors:

        SS_Twitter.cpp:819: error: invalid conversion from 'void*' to 'void* (*)(void*)'
        SS_Twitter.cpp:819: error: initializing argument 3 of 'int pthread_create(pthread_t*, const pthread_attr_t*, void* (*)(void*), void*)'
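
    A sketch of the likely fix: the (void*) cast is what breaks things, since pthread_create's third parameter must be a void* (*)(void*) function pointer, and LaunchMemberFunction already has exactly that type. (processRequest should also return a value, e.g. NULL, since it is declared void*.)

        // Pass the function itself; no cast needed or wanted.
        pthread_create(&handler[tcount], &attr, LaunchMemberFunction, (void*)&SServer);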

    Read the article

  • Generate Info (wrapper) Class from stored procedure

    - by Adem
    Hello everybody, I am in a crucial project, and I am trying to speed up the development phase by using CodeSmith to generate the business classes, DAL, and info classes for the tables of my project. There are about 50 tables with parent-child and many-to-many relationships, and for retrieving data I have to code several inner joins in stored procedures. I have to combine fields from many tables, and this makes working with the info classes difficult. Is there any way to generate an info class from a stored procedure? Or, to be more exact, is there a way to parse the result set of a stored procedure and generate an info class with properties for every column in that result set? Please, if anyone can give me some advice, tell me how to achieve this. Best Regards
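
    One way to get at a stored procedure's result-set shape without running it for real, sketched in C# (the procedure name is hypothetical): executing with CommandBehavior.SchemaOnly and reading GetSchemaTable yields one row per output column, with ColumnName and DataType ready to feed into a class generator.

        using System.Data;
        using System.Data.SqlClient;

        using (var cmd = new SqlCommand("dbo.GetOrderSummary", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            using (var reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
            {
                DataTable schema = reader.GetSchemaTable();
                // Each row describes one column: ColumnName, DataType, AllowDBNull, ...
            }
        }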

    Read the article

  • NHibernate, the Parallel Framework, and SQL Server

    - by andy
    Hey guys, we have a loop that: 1. loops over several thousand XML files (altogether we're parsing millions of "user" nodes); 2. in each iteration, parses a "user" XML and does custom deserialization; 3. finally, in each iteration, sends our object to NHibernate for saving. We use:

        .SaveOrUpdateAndFlush(user);

    This is a lengthy process, and we thought it would be a perfect candidate for testing out the .NET 4.0 parallel libraries. So we wrapped the loop in a:

        Parallel.ForEach();

    After doing this, we start getting "random" Timeout exceptions from SQL Server and finally, after leaving it running all night, unhandled OutOfMemory exceptions. I haven't done deep debugging on this yet, but what do you guys think? Is this simply a limitation of SQL Server, or could it be our NHibernate setup, or what? cheers andy
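
    One thing worth checking, sketched below under the assumption of a shared ISessionFactory (ParseUsers is a hypothetical helper): NHibernate's ISession is not thread-safe, so sharing one session, and its ever-growing first-level cache, across Parallel.ForEach threads can produce exactly this mix of timeouts and memory growth. A session-per-iteration shape keeps each unit of work small and thread-confined.

        Parallel.ForEach(xmlFiles, file =>
        {
            foreach (var user in ParseUsers(file))
            {
                using (var session = sessionFactory.OpenSession())
                using (var tx = session.BeginTransaction())
                {
                    session.SaveOrUpdate(user);
                    tx.Commit();    // flushes and lets the session be discarded
                }
            }
        });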

    Read the article

  • Converting to Byte Array after reading a BLOB from SQL in C#

    - by Soham
    Hi All, I need to read a BLOB and store it in a byte[] before going forward with deserializing. Consider:

        // Reading the database
        DataAdapterInstance.Fill(DataSet);
        DataTable dt = DataSet.Tables[0];
        foreach (DataRow row in dt.Rows)
        {
            byte[] BinDate = Byte.Parse(row["Date"].ToString()); // convert to byte[]
        }

    I need help with this C# statement, as I am not able to convert an object into a byte[]. Note, the "Date" field in the table is a BLOB and not of type Date. Help appreciated; Soham
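
    A sketch of the usual pattern: a BLOB column comes back from the data provider as a byte[] boxed in an object, so a cast (plus a DBNull check) does the job. No string round-trip is needed, and Byte.Parse would not help anyway, since it returns a single byte, not an array.

        foreach (DataRow row in dt.Rows)
        {
            if (row["Date"] != DBNull.Value)
            {
                byte[] binDate = (byte[])row["Date"];   // unbox the blob directly
                // ... deserialize from binDate ...
            }
        }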

    Read the article

  • A tool to find and fix incomplete source code documentation

    - by Pekka
    I have several finished, older PHP projects with a lot of includes that I would like to document in javadoc/phpDocumentor style. While working through each file manually and being forced to do a code review alongside the documenting would be the best thing, I am, simply out of time constraints, interested in tools to help me automate the task as much as possible. The tool I am thinking about would ideally have the following features:

        - Parse a PHP project tree and tell me where there are undocumented files, classes, and functions/methods (i.e. elements missing the appropriate docblock comment)
        - Provide a way to add the missing docblocks reasonably easily, by creating the empty structures and, ideally, opening the file in an editor (internal or external, I don't care) so I can put in the description
        - Optional: automatic recognition of parameter types, return values, and such, but that's not really required

    The language in question is PHP, though I could imagine that a C/Java tool might be able to handle PHP files after some tweaking. Thanks for your great input!

    Read the article

  • How to fetch message body and attachments in XML format using php/linux from Lotus Domino server?

    - by too
    Does anybody have some information about accessing a Lotus Domino server to fetch entire mail contents via HTTP(S) requests from a PHP/Linux server? The article by Andrei Kouvchinnikov describes well how to fetch the message list in Notes mail folders; after obtaining a session id during login, one can, for example, select the top 100 messages by calling:

        https://your.server.domain/mail_db/mailbox.nsf/($Inbox)?ReadViewEntries&Start=1&Count=100

    And this works perfectly. The problem arises when I am trying to get the message contents (0A1DA5EEB7B65277C12576F50055D811 is an example message unique id):

        https://your.server.domain/mail_db/mailbox.nsf/($Inbox)/0A1DA5EEB7B65277C12576F50055D811/?OpenDocument

    Such a request in IE shows a frameset with data that is hard to parse; in less common browsers like Opera, it reports an unsupported browser. Ideally, it would be possible to fetch Notes message contents and all attachments by requesting them in the URL. Does anybody have information on what that request would be? A link to a Lotus web-calls reference would be even more beneficial.

    Read the article

  • Guice + Quartz + iBatis

    - by DroidIn.net
    I'm trying to wire together Guice (Java), the Quartz scheduler, and iBatis (iBaGuice) to do the following:

        1. Start a command-line utility/scanner using main()
        2. Periodically scan a directory (provided as an argument) for files containing formatted output (XML or YAML)
        3. When a file is detected, parse it and write the result to the database

    The problems: I used this example to wire Guice and Quartz. However, I'm missing some important details, which I'm asking about in the comments, but the post is somewhat dated, so I'm quoting it here also: it's not obvious how to set up the scheduler. Where and how would I wire the Trigger (I can use Trigger#makeMinutelyTrigger)? I really have just one type of job I will be executing. I understand that the details in JobFactory#newJob come from the TriggerFiredBundle parameter, but where/how do I wire that? And where/how do I create or wire the concrete Job?
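
    A sketch of the two pieces that usually complete this wiring (GuiceJobFactory and SchedulerModule are names I made up, and the Quartz 1.x JobFactory signature is assumed): the factory lets Guice build the concrete Job named in the TriggerFiredBundle, and the module exposes a singleton Scheduler with that factory installed.

        import com.google.inject.*;
        import org.quartz.*;
        import org.quartz.impl.StdSchedulerFactory;
        import org.quartz.spi.JobFactory;
        import org.quartz.spi.TriggerFiredBundle;

        public class GuiceJobFactory implements JobFactory {
            private final Injector injector;

            @Inject
            public GuiceJobFactory(Injector injector) { this.injector = injector; }

            public Job newJob(TriggerFiredBundle bundle) throws SchedulerException {
                // The bundle carries the JobDetail; ask Guice for an injected instance.
                return (Job) injector.getInstance(bundle.getJobDetail().getJobClass());
            }
        }

        public class SchedulerModule extends AbstractModule {
            @Override protected void configure() { }

            @Provides @Singleton
            Scheduler provideScheduler(GuiceJobFactory jobFactory) throws SchedulerException {
                Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
                scheduler.setJobFactory(jobFactory);   // jobs now come from Guice
                return scheduler;
            }
        }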

    Read the article

  • How to use Netbeans platform syntax highlight with JEditorPane?

    - by Volta
    There are many tutorials online giving very complex or non-working examples on this. It seems that people recommend others to use the syntax highlighters offered by NetBeans, but I am totally puzzled on how to do so! I have checked many, many sites on this, and the best I can find is: http://www.antonioshome.net/kitchen/netbeans/nbms-standalone.php However, I am still not able to use this example (as it is aimed at people who don't want to use the NetBeans platform but just a portion of it), and I am still not sure if I can use syntax highlighting in a simple plug-and-play way. For example, NetBeans supports several language highlighters by default; can I just use those highlighters in a JEditorPane to parse Ruby/Python/Java, or do I need to write my own parser :-| ? I will really appreciate a small, simple example of how to plug syntax highlighting into a standalone application using the NetBeans platform.

    Read the article

  • How to get video file details eg. duration in Android?

    - by spirytus
    I'm struggling to get specific video file details, such as duration, from files recorded earlier. All I can currently do is get a cursor over all the files, then loop through them one by one:

        Cursor cursor = MediaStore.Video.query(getContext().getContentResolver(),
                MediaStore.Video.Media.EXTERNAL_CONTENT_URI,
                new String[]{ MediaStore.Video.VideoColumns.DURATION,
                              MediaStore.Video.VideoColumns.DATE_TAKEN,
                              MediaStore.Video.VideoColumns.RESOLUTION,
                              MediaStore.Video.VideoColumns.DISPLAY_NAME });
        if (cursor.moveToFirst())
            while (!cursor.isLast()) {
                if (cursor.getString(3).equals(fight.filename)) { // was ==, which compares references
                    // do something here
                }
                cursor.moveToNext();
            }

    I need, however, to access the details of specific files, so I tried to create a URI, but no luck, as the returned cursor is always null. Where did I go wrong?

        Uri uri = Uri.parse(Environment.DIRECTORY_DCIM + "/FightAll_BJJ_Scoring/" + fight.filename);
        Cursor cursor = MediaStore.Video.query(getContext().getContentResolver(), uri,
                new String[]{ MediaStore.Video.VideoColumns.DURATION,
                              MediaStore.Video.VideoColumns.DATE_TAKEN,
                              MediaStore.Video.VideoColumns.RESOLUTION,
                              MediaStore.Video.VideoColumns.DISPLAY_NAME });
        // cursor is always null here
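
    A sketch of one way to query a single file (the DCIM path is an assumption): MediaStore is keyed by content:// URIs, not raw file paths, so instead of hand-building a Uri, filter the EXTERNAL_CONTENT_URI query on the DATA column, which holds the absolute file path.

        String path = Environment.getExternalStorageDirectory()
                + "/DCIM/FightAll_BJJ_Scoring/" + fight.filename;
        Cursor c = getContext().getContentResolver().query(
                MediaStore.Video.Media.EXTERNAL_CONTENT_URI,
                new String[]{ MediaStore.Video.VideoColumns.DURATION },
                MediaStore.Video.Media.DATA + "=?",     // match on file path
                new String[]{ path },
                null);
        if (c != null && c.moveToFirst()) {
            long durationMs = c.getLong(0);
        }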

    Read the article

  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that a MySQL database that has a table with tens of millions of records will get a big INSERT INTO statement when the following is done to back up the database (is it one INSERT statement that handles all the records?):

        mysqldump some_db > some_db.sql

    So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse the line so hard when restoring the DB?
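
    For what it's worth, the relevant mysqldump switches, sketched here: --extended-insert (on by default) packs many rows into one INSERT, and --skip-extended-insert writes one INSERT per row. The latter gives shorter statements to parse, though reloading row by row is usually slower overall, not faster.

        # One multi-row INSERT per batch (default; generally the fastest to reload):
        mysqldump --opt some_db > some_db.sql

        # One INSERT per row (shorter statements, but typically a slower restore):
        mysqldump --skip-extended-insert some_db > some_db.sql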

    Read the article
