Search Results

Search found 14531 results on 582 pages for 'proxy pass'.


  • What's the boost way to create a functor that binds out an argument

    - by Mordachai
    I have need for a function pointer that takes two arguments and returns a string. I would like to pass an adapter that wraps a function that takes one argument, and returns the string (i.e. discard one of the arguments). I can trivially build my own adapter, that takes the 2 arguments, calls the wrapped function passing just the one argument through. But I'd much rather have a simple way to create an adapter on the fly, if there is an easy way to do so in C++/boost? Here's some details to make this a bit more concrete: typedef boost::function<CString (int,int)> TooltipTextFn; class MyCtrl { public: MyCtrl(TooltipTextFn callback = boost::bind(&MyCtrl::GetCellText, this, _1, _2)) : m_callback(callback) { } // QUESTION: how to trivially wrapper GetRowText to conform to TooltipTextFn by just discarding _2 ?! void UseRowText() { m_callback = boost::bind(&MyCtrl::GetRowText, this, _1, ??); } private: CString GetCellText(int row, int column); CString GetRowText(int row); TooltipTextFn m_callback; } Obviously, I can supply a member that adapts GetRowText to take two arguments and only passes the first to GetRowText() itself. But is there already a boost binder / adapter that lets me do that?
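
    A minimal sketch of one possible answer, not taken from the original thread: Boost.Bind's generated function objects accept extra call arguments and silently discard any that no placeholder refers to, so binding only _1 should already satisfy the two-argument signature.

        // Sketch: the functor from boost::bind can be stored in
        // boost::function<CString (int, int)>; the second call argument
        // is simply ignored because no placeholder refers to it.
        void UseRowText()
        {
            m_callback = boost::bind(&MyCtrl::GetRowText, this, _1);
        }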

    Read the article

  • Application's results affected by another running application

    - by Jamie Keeling
    This is a follow-on from my previous question, although this is about something else. I've been having a problem where, for some reason, the message I pass from one process to another only displays its first letter, in this case "M". My application is based on an MSDN sample, so to make sure I hadn't missed something I created a separate solution and added the MSDN sample (without any of my changes), and unsurprisingly it works fine. Now for the weird bit: when I have the MSDN sample running (as in debugging) alongside my own application, the text prints out fine without any problems. The second I run mine on its own, without the original MSDN sample being open, it fails and only shows an "M". I've looked in the debugger and don't see anything suspicious (the screenshot is slightly dated; I've since fixed the data type inconsistency). Can anyone suggest what might cause this? I've never encountered anything like it before. For the source code it's easier to follow the link at the top of the question, there's no point in posting it twice. Thank you for any help.

    Read the article

  • Are there pitfalls to using static class/event as an application message bus

    - by Doug Clutter
    I have a static generic class that helps me move events around with very little overhead: public static class MessageBus<T> where T : EventArgs { public static event EventHandler<T> MessageReceived; public static void SendMessage(object sender, T message) { if (MessageReceived != null) MessageReceived(sender, message); } } To create a system-wide message bus, I simply need to define an EventArgs class to pass around any arbitrary bits of information: class MyEventArgs : EventArgs { public string Message { get; set; } } Anywhere I'm interested in this event, I just wire up a handler: MessageBus<MyEventArgs>.MessageReceived += (s,e) => DoSomething(); Likewise, triggering the event is just as easy: MessageBus<MyEventArgs>.SendMessage(this, new MyEventArgs() {Message="hi mom"}); Using MessageBus and a custom EventArgs class lets me have an application wide message sink for a specific type of message. This comes in handy when you have several forms that, for example, display customer information and maybe a couple forms that update that information. None of the forms know about each other and none of them need to be wired to a static "super class". I have a couple questions: fxCop complains about using static methods with generics, but this is exactly what I'm after here. I want there to be exactly one MessageBus for each type of message handled. Using a static with a generic saves me from writing all the code that would maintain the list of MessageBus objects. Are the listening objects being kept "alive" via the MessageReceived event? For instance, perhaps I have this code in a Form.Load event: MessageBus<CustomerChangedEventArgs>.MessageReceived += (s,e) => DoReload(); When the Form is Closed, is the Form being retained in memory because MessageReceived has a reference to its DoReload method? Should I be removing the reference when the form closes: MessageBus<CustomerChangedEventArgs>.MessageReceived -= (s,e) => DoReload();
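
    A hedged sketch of the usual answer to the lifetime questions (event and handler names follow the question, the rest is hypothetical): the static event does keep its subscribers alive, and unsubscribing with a newly written lambda removes nothing, so the delegate has to be stored and detached explicitly.

        // Sketch: keep the exact delegate instance that was subscribed so it
        // can be removed later; "-= (s,e) => DoReload()" creates a different
        // delegate and leaves the form reachable from the static event.
        private EventHandler<CustomerChangedEventArgs> reloadHandler;

        private void Form_Load(object sender, EventArgs e)
        {
            reloadHandler = (s, args) => DoReload();
            MessageBus<CustomerChangedEventArgs>.MessageReceived += reloadHandler;
        }

        private void Form_FormClosed(object sender, FormClosedEventArgs e)
        {
            MessageBus<CustomerChangedEventArgs>.MessageReceived -= reloadHandler;
        }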

    Read the article

  • Ordering by formula fields in NHibernate

    - by Darin Dimitrov
    Suppose that I have the following mapping with a formula property: <class name="Planet" table="planets"> <id name="Id" column="id"> <generator class="native" /> </id> <!-- somefunc() is a native SQL function --> <property name="Distance" formula="somefunc()" /> </class> I would like to get all planets and order them by the Distance calculated property: var planets = session .CreateCriteria<Planet>() .AddOrder(Order.Asc("Distance")) .List<Planet>(); This is translated to the following query: SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY somefunc() Desired query: SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY formula0 If I set a projection with an alias it works fine: var planets = session .CreateCriteria<Planet>() .SetProjection(Projections.Alias(Projections.Property("Distance"), "dist")) .AddOrder(Order.Asc("dist")) .List<Planet>(); SQL: SELECT somefunc() as formula0 FROM planets ORDER BY formula0 but it populates only the Distance property in the result and I really like to avoid projecting manually over all the other properties of my object (there could be many other properties). Is this achievable with NHibernate? As a bonus I would like to pass parameters to the native somefunc() SQL function. Anything producing the desired SQL is acceptable (replacing the formula field with subselects, etc...), the important thing is to have the calculated Distance property in the resulting object and order by this distance inside SQL.
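
    One possible workaround, sketched here untested (property names assumed to match the mapping): project the columns explicitly, order by the projection alias, and rehydrate Planet instances with the AliasToBean transformer, so the ORDER BY uses the alias rather than repeating the formula.

        // Untested sketch: order by the aliased formula column and map the
        // projection back onto Planet with a result transformer.
        var planets = session
            .CreateCriteria<Planet>()
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Property("Id"), "Id")
                .Add(Projections.Property("Distance"), "Distance"))
            .AddOrder(Order.Asc("Distance"))
            .SetResultTransformer(NHibernate.Transform.Transformers.AliasToBean<Planet>())
            .List<Planet>();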

    Read the article

  • Trouble with unions in a C program

    - by Jordan S
    I am working on a C program that uses a union. The union definition is in the FILE_A header file and looks like this... // FILE_A.h**************************************************** xdata union { long position; char bytes[4]; }CurrentPosition; If I set the value of CurrentPosition.position in FILE_A.c and then call a function in FILE_B.c that uses the union, the data in the union is back to zero. This is demonstrated below. // FILE_A.c**************************************************** int main(void) { CurrentPosition.position = 12345; SomeFunctionInFileB(); } // FILE_B.c**************************************************** void SomeFunctionInFileB(void) { // After the following lines execute I see all zeros in the flash memory. WriteByteToFlash(CurrentPosition.bytes[0]); WriteByteToFlash(CurrentPosition.bytes[1]); WriteByteToFlash(CurrentPosition.bytes[2]); WriteByteToFlash(CurrentPosition.bytes[3]); } Now, if I pass a long to SomeFunctionInFileB(long temp) and then store it into CurrentPosition.bytes within that function, and finally call WriteBytesToFlash(CurrentPosition.bytes[n])... it works just fine. It appears as though the CurrentPosition union is not global. So I tried changing the union definition in the header file to include the extern keyword like this... extern xdata union { long position; char bytes[4]; }CurrentPosition; and then putting this in the source (.c) file... xdata union { long position; char bytes[4]; }CurrentPosition; but this causes a compile error that says: C:\SiLabs\Optec Programs\AgosRot\MotionControl.c:76: error 91: extern definition for 'CurrentPosition' mismatches with declaration. C:\SiLabs\Optec Programs\AgosRot\/MotionControl.h:48: error 177: previously defined here So what am I doing wrong? How do I make the union global?
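
    A hedged sketch of the usual fix (the type name is invented): give the union a tag or typedef, declare the shared object extern in the header, and define it exactly once in a single .c file so both translation units refer to the same instance. As written, the anonymous-union declaration in the header creates a separate CurrentPosition object in every file that includes it.

        /* shared header (e.g. MotionControl.h): declaration only */
        typedef union {
            long position;
            char bytes[4];
        } CurrentPosition_t;

        extern xdata CurrentPosition_t CurrentPosition;

        /* exactly one .c file: the single definition */
        xdata CurrentPosition_t CurrentPosition;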

    Read the article

  • Swipe left/right to push next/previous details view

    - by Youssef
    I am working on an iOS application using a storyboard. I have a simple table view with rows. Each row has a "key" string to identify it. When the user clicks on a row I push the next view, "DetailsViewController", and pass the "key" as a parameter in the prepareForSegue method. The data in the second view will change based on the "key" passed. In the "DetailsViewController" I want to add a swipe gesture: when the user swipes right, I want to push the next details view. It will be as if he clicked on the next row in the table view. So to rephrase, the user has two ways of switching between items: he can click the item in the table view and the details view will show, then go back and select another item; or he can click on a row and, in the details view, swipe left and right to see the previous/next item. I added the UISwipeGestureRecognizer: UISwipeGestureRecognizer *recognizer; recognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipe:)]; [recognizer setDirection:UISwipeGestureRecognizerDirectionRight]; [[self view] addGestureRecognizer:recognizer]; - (void) handleSwipe:(UISwipeGestureRecognizer *) sender { //I want to go to the next item in the tableview } I hope I was clear. Thank you.

    Read the article

  • Design of std::ifstream class

    - by Nawaz
    Those of us who have seen the beauty of STL try to use it as much as possible, and also encourage others to use it wherever we see them using raw pointers and arrays. Scott Meyers has written a whole book on STL, with the title Effective STL. Yet what happened to the developers of ifstream that they preferred char* over std::string? I wonder why the first parameter of ifstream::open() is of type const char*, instead of const std::string &. Please have a look at its signature: void open(const char * filename, ios_base::openmode mode = ios_base::in ); Why this? Why not this: void open(const string & filename, ios_base::openmode mode = ios_base::in ); Is this a serious mistake in the design? Or is this design deliberate? What could be the reason? I don't see any reason why they would have preferred char* over std::string. Note we could still pass a char* to the latter function that takes std::string. That's not a problem! By the way, I'm aware that ifstream is a typedef, so no comment on my title. :P It looks shorter, which is why I used it. The actual class template is: template<class _Elem,class _Traits> class basic_ifstream;

    Read the article

  • Handling responses in libcurl

    - by sfactor
    I have code to send a form POST with login credentials to a web page. It looks like this: CURL *curl; CURLcode res; struct curl_httppost *formpost=NULL; struct curl_httppost *lastptr=NULL; struct curl_slist *headerlist=NULL; static const char buf[] = "Expect:"; curl_global_init(CURL_GLOBAL_ALL); /* Fill in the username */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "user", CURLFORM_COPYCONTENTS, "username", CURLFORM_END); /* Fill in the password */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "pass", CURLFORM_COPYCONTENTS, "password", CURLFORM_END); /* Fill in the submit field too, even if this is rarely needed */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "submit", CURLFORM_COPYCONTENTS, "send", CURLFORM_END); curl = curl_easy_init(); /* initialize custom header list (stating that Expect: 100-continue is not wanted) */ headerlist = curl_slist_append(headerlist, buf); if(curl) { /* what URL that receives this POST */ curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8000/index.html"); /* only disable 100-continue header if explicitly requested */ curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost); res = curl_easy_perform(curl); /* always cleanup */ curl_easy_cleanup(curl); /* then cleanup the formpost chain */ curl_formfree(formpost); /* free slist */ curl_slist_free_all (headerlist); } Now what I need to know is how to handle the response I get back from the server. I need to store the response in a string and then work on it.
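
    One common way to capture the body, sketched on the assumption that collecting it into a std::string is acceptable: register a write callback with CURLOPT_WRITEFUNCTION and pass the string through CURLOPT_WRITEDATA before calling curl_easy_perform().

        /* Sketch: libcurl invokes this for each chunk of the response body. */
        static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
        {
            std::string *response = static_cast<std::string *>(userdata);
            response->append(ptr, size * nmemb);
            return size * nmemb;   /* returning less tells libcurl to abort */
        }

        /* before curl_easy_perform(curl): */
        std::string response;
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
        /* after curl_easy_perform() returns CURLE_OK, 'response' holds the page */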

    Read the article

  • How to create a "Shell IDList Array" to support drag-and-drop of virtual files from C# to Windows Ex

    - by JustABill
    I started trying to implement drag-and-drop of virtual files (from a C# 4/WPF app) with this codeplex tutorial. After spending some time trying to figure out a DV_E_FORMATETC error, I figured out I need to support the "Shell IDList Array" data format. But there seems to be next to zero documentation on what this format actually does. After some searching, I found this page on advanced data transfer which said that a Shell IDList Array was a pointer to a CIDA structure. This CIDA structure then contains the number of PIDLs, and a list of offsets to them. So what the hell is a PIDL? After some more searching, this page sort of implies it's a pointer to an ITEMIDLIST, which itself contains a single member that is a list of SHITEMIDs. My next idea was to try dragging a file from another application with virtual files. I just got a MemoryStream back for this format. At least I know what class to provide for the thing, but that doesn't help at all for explaining what to put in it. So now that that's explained, I still have no idea how to create one of these things so that it's valid. There's two real questions here: What is a valid "abID" member for a SHITEMID? These virtual files only exist with my program; will the thing they are dragged to pass the "abID" back later when it executes GetData? Do they have to be system-unique? Why are there two levels of lists; a list of PIDLs and each PIDL has a list of SHITEMIDs? I'm assuming one of them is one for each file, but what's the other one for? Multiple IDs for the same file? Any help or even links that explain what I should be doing would be greatly appreciated.

    Read the article

  • XSLT - Catching parameters

    - by Brian Roisentul
    The situation is that I have two XSLT files: one is called from my ASP.NET code, and it imports the second. What I'd like to accomplish is to pass a parameter to the first one so the second XSLT (the one imported by the first) can read it. My C# code looks like this: var oArgs = new XsltArgumentList(); oArgs.AddParam("fbLikeFeatureName", "", "Facebook_Like_Button"); ltlContentBody.Text = xmlUtil.TransformXML(oXmlDoc, Server.MapPath(eSpaceId + "/styles/ExploringXSLT/ExploreContentObjects.xslt"), true); I'm receiving the param in the first XSLT this way: <xsl:param name="fbLikeFeatureName" /> Then I pass it to the second XSLT like this (having previously imported that file): <xsl:call-template name="Articles"> <xsl:with-param name="fbLikeFeatureName"></xsl:with-param> </xsl:call-template> Finally, I'm reading the param in the second XSLT file as follows: <xsl:value-of select="$fbLikeButtonName"/> What am I doing wrong? I'm kind of new to XSLT.
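
    A hedged guess at the two likely culprits, sketched below: the xsl:with-param never selects the outer $fbLikeFeatureName, and the second stylesheet reads $fbLikeButtonName, which is a different name from the one declared (the XsltArgumentList also has to reach the Transform call for the top-level param to receive a value at all).

        <!-- first stylesheet: actually forward the value -->
        <xsl:call-template name="Articles">
          <xsl:with-param name="fbLikeFeatureName" select="$fbLikeFeatureName"/>
        </xsl:call-template>

        <!-- second (imported) stylesheet: declare and read the same name -->
        <xsl:template name="Articles">
          <xsl:param name="fbLikeFeatureName"/>
          <xsl:value-of select="$fbLikeFeatureName"/>
        </xsl:template>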

    Read the article

  • Passing filtering functions to Where() in LINQ-to-SQL

    - by Daniel
    I'm trying to write a set of filtering functions that can be chained together to progressively filter a data set. What's tricky about this is that I want to be able to define the filters in a different context from that in which they'll be used. I've gotten as far as being able to pass a very basic function to the Where() clause in a LINQ statement: filters file: Func<item, bool> returnTrue = (i) => true; repository file: public IQueryable<item> getItems() { return DataContext.Items.Where(returnTrue); } This works. However, as soon as I try to use more complicated logic, the trouble begins: filters file: Func<item, bool> isAssignedToUser = (i) => i.assignedUserId == userId; repository file: public IQueryable<item> getItemsAssignedToUser(int userId) { return DataContext.Items.Where(isAssignedToUser); } This won't even build because userId isn't in the same scope as isAssignedToUser(). I've also tried declaring a function that takes the userId as a parameter: Func<item, int, bool> isAssignedToUser = (i, userId) => i.assignedUserId == userId; The problem with this is that it doesn't fit the function signature that Where() is expecting: Func<item, bool> There must be a way to do this, but I'm at a loss for how. I don't feel like I'm explaining this very well, but hopefully you get the gist. Thanks, Daniel
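
    One approach that keeps the filter translatable to SQL, sketched with hypothetical type names: expose each filter as a factory that closes over its parameters and returns an Expression<Func<...>>, which the IQueryable overload of Where accepts.

        // Sketch: a factory captures userId and returns an expression tree, so
        // Queryable.Where (not Enumerable.Where) receives it and LINQ-to-SQL
        // can translate it into the generated query.
        public static class ItemFilters
        {
            public static Expression<Func<Item, bool>> IsAssignedToUser(int userId)
            {
                return i => i.assignedUserId == userId;
            }
        }

        public IQueryable<Item> GetItemsAssignedToUser(int userId)
        {
            return DataContext.Items.Where(ItemFilters.IsAssignedToUser(userId));
        }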

    Read the article

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's the right name for it). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cut-off frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1. My first attempt was to directly map the values returned by the linear function to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it seemed as if the sweep ran much faster when moving through the low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it's something to do with human hearing perceiving changes in frequency in a non-linear manner. My next attempt was to move the filter's cut-off frequency in a logarithmic way. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
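
    A sketch of the usual answer, assuming the sweep runs between fixed f_min and f_max bounds: make each equal step of the control value multiply the cut-off by the same ratio (an exponential mapping), which is what equal perceived steps in pitch correspond to.

        #include <math.h>

        /* Sketch: map the 0..1 control value x onto the frequency range so that
           equal steps in x give equal frequency ratios, not equal differences. */
        double cutoff_hz(double x, double f_min, double f_max)
        {
            return f_min * pow(f_max / f_min, x);   /* x=0 -> f_min, x=1 -> f_max */
        }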

    Read the article

  • Help with Hashmaps in Java

    - by Crystal
    I'm not sure how I use get() to get my information. Looking at my book, they pass the key to get(). I thought that get() returns the object associated with that key looking at the documentation. But I must be doing something wrong here.... Any thoughts? import java.util.*; public class OrganizeThis { /** Add a person to the organizer @param p A person object */ public void add(Person p) { staff.put(p, p.getEmail()); System.out.println("Person " + p + "added"); } /** * Find the person stored in the organizer with the email address. * Note, each person will have a unique email address. * * @param email The person email address you are looking for. * */ public Person findByEmail(String email) { Person aPerson = staff.get(email); return aPerson; } private Map<Person, String> staff = new HashMap<Person, String>(); public static void main(String[] args) { OrganizeThis testObj = new OrganizeThis(); Person person1 = new Person("J", "W", "111-222-3333", "[email protected]"); testObj.add(person1); System.out.println(testObj.findByEmail("[email protected]")); } }
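
    A hedged sketch of the likely fix: get() looks up by whatever was used as the key in put(), so to find people by email the map has to go from email to Person rather than from Person to email.

        // Sketch: key the map by email so findByEmail can use get() directly.
        private Map<String, Person> staff = new HashMap<String, Person>();

        public void add(Person p) {
            staff.put(p.getEmail(), p);
            System.out.println("Person " + p + " added");
        }

        public Person findByEmail(String email) {
            return staff.get(email);
        }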

    Read the article

  • Windows FTP batch script to read & download from an external user list

    - by Will Sims
    I have several old, unused batch files that I'm redoing. I have a batch file for an old network architecture from several years ago; the main thing I'd like it to do now is read a list of files. I'll explain the setup: the server updates a complete list [CurrentMediaStores.txt] twice a day. The laptops can be set to download this list through their start.bat, which also runs the add-ins and updates I apply to my PCs, to give my batches and myself a break from slavish folder assignments, add a little more flexibility, and cut down on admin work. The batch files now call on a list the user makes by simply copying a line from the CMS.txt file and pasting it into their [Grab_List.txt]. My problem is that I have that branch ::'d off right now, along with the code that detects whether the LAN is connected or not and switches to an FTP connection. I'd like the FTP batch to call/use the Grab_List as well, but I just can't/don't know how to pass it in and do the FOR loop with an FTP session to loop through however many files are in the user's request list. Any help would be greatly appreciated.
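
    One way this is often handled in plain batch, sketched with placeholder host and credentials: build an ftp command script from Grab_List.txt with a FOR /F loop, then hand it to ftp -s.

        @echo off
        rem Sketch: generate an ftp script from the user's Grab_List.txt.
        rem Host, user and password below are placeholders.
        set "FTPSCRIPT=%TEMP%\grab.ftp"
        > "%FTPSCRIPT%" echo open ftp.example.com
        >>"%FTPSCRIPT%" echo user myuser mypassword
        >>"%FTPSCRIPT%" echo binary
        for /f "usebackq delims=" %%F in ("Grab_List.txt") do >>"%FTPSCRIPT%" echo get "%%F"
        >>"%FTPSCRIPT%" echo bye
        ftp -i -n -s:"%FTPSCRIPT%"
        del "%FTPSCRIPT%"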

    Read the article

  • django + south + python: strange behavior when using a text string received as a parameter in a func

    - by carlosescri
    Hello, this is my first question. I'm trying to execute a SQL query in django (south migration): from django.db import connection # ... class Migration(SchemaMigration): # ... def transform_id_to_pk(self, table): try: db.delete_primary_key(table) except: pass finally: cursor = connection.cursor() # This does not work cursor.execute('SELECT MAX("id") FROM "%s"', [table]) # I don't know if this works. try: minvalue = cursor.fetchone()[0] except: minvalue = 1 seq_name = table + '_id_seq' db.execute('CREATE SEQUENCE "%s" START WITH %s OWNED BY "%s"."id"', [seq_name, minvalue, table]) db.execute('ALTER TABLE "%s" ALTER COLUMN id SET DEFAULT nextval("%s")', [table, seq_name + '::regclass']) db.create_primary_key(table, ['id']) # ... I use this function like this: self.transform_id_to_pk('my_table_name') So it should: Find the biggest existent ID or 0 (it crashes) Create a sequence name Create the sequence Update the ID field to use sequence Update the ID as PK But it crashes and the error says: File "../apps/accounting/migrations/0003_setup_tables.py", line 45, in forwards self.delegation_table_setup(orm) File "../apps/accounting/migrations/0003_setup_tables.py", line 478, in delegation_table_setup self.transform_id_to_pk('accounting_delegation') File "../apps/accounting/migrations/0003_setup_tables.py", line 20, in transform_id_to_pk cursor.execute(u'SELECT MAX("id") FROM "%s"', [table.encode('utf-8')]) File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) psycopg2.ProgrammingError: relation "E'accounting_delegation'" does not exist LINE 1: SELECT MAX("id") FROM "E'accounting_delegation'" ^ I have shortened the file paths for convenience. What does that "E'accounting_delegation'" mean? How could I get rid of it? Thank you! Carlos.
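
    A hedged reading of the error: psycopg2's parameter binding quotes every parameter as a value (hence E'accounting_delegation'), and identifiers such as table or sequence names cannot be supplied that way, so they have to be interpolated into the SQL text, keeping real parameter binding only for values.

        # Sketch: the table/sequence names come from the migration itself, so
        # plain string interpolation is acceptable here; %s binding stays for values.
        cursor.execute('SELECT MAX("id") FROM "%s"' % table)
        row = cursor.fetchone()
        minvalue = (row[0] or 0) + 1

        seq_name = table + '_id_seq'
        db.execute('CREATE SEQUENCE "%s" START WITH %s OWNED BY "%s"."id"'
                   % (seq_name, minvalue, table))
        db.execute('ALTER TABLE "%s" ALTER COLUMN id SET DEFAULT nextval(\'%s\')'
                   % (table, seq_name))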

    Read the article

  • Sending and receiving IM messages via controller in Rails

    - by Grnbeagle
    Hi, I need a way to handle XMPP communication in my Rails app. My requirements are: Keep an instance of XMPP client running and logged in as one specific user (my bot user) Trigger an event from a controller to send a message and wait for a reply. The message is sent to another machine equipped with a bot so that the reply is supposed to be returned quickly. I installed xmpp4r and backgrounDrb similar to what's described here, but backgrounDrb seems to have evolved and I couldn't get it to wait for a reply. If it has to happen asynchronously, I am willing to use a server-push technology to notify the browser when the reply arrives. To give you a better idea, here are snippets of my code: (In controller) class ServicesController < ApplicationController layout 'simple' def index render :text => "index" end def show @my_service = Service.find(params[:id]) worker = MiddleMan.worker(:jabber_agent_worker) worker.send_request(:arg => {:jid => "someuser@someserver", :cmd => "help"}) render :text => "testing" end end (In worker script) require 'xmpp4r' require 'logger' class JabberAgentWorker < BackgrounDRb::MetaWorker set_worker_name :jabber_agent_worker def create(args = nil) jid = Jabber::JID.new('myagent@myserver') @client = Jabber::Client.new(jid) @client.connect @client.auth('pass') @client.send(Jabber::Presence.new.set_show(:chat).set_status('BackgrounDRb')) @client.add_message_callback do |message| logger.info("**** messaged received: #{message}") # never reaches here end end def send_request(args = nil) to_jid = Jabber::JID.new(args[:jid]) message = Jabber::Message::new(to_jid, args[:cmd]).set_type(:normal).set_id('1') @client.send(message) end end If anyone can tell me any of the following, I'd much appreciate it: issue with my backgrounDrb usage other background process alternatives appropriate for XMPP interactions other ways of achieving this Thanks in advance.

    Read the article

  • TransactionScope question - how can I keep the DTC from getting involved in this?

    - by larryq
    (I know the circumstances surrounding the DTC and promoting a transaction can be a bit mysterious to those of us not in the know, but let me show you how my company is doing things, and if you can tell me why the DTC is getting involved, and if possible, what I can do to avoid it, I'd be grateful.) I have code running on an ASP.Net webserver. We have one database, SQL 2008. Our data access code looks something like this-- We have a data access layer that uses a wrapper object for SQLConnections and SQLCommands. Typical use looks like this: void method1() { objDataObject = new DataAccessLayer(); objDataObject.Connection = SomeConnectionMethod(); SqlCommand objCommand = DataAccessUtils.CreateCommand(SomeStoredProc); //create some SqlParameters, add them to the objCommand, etc.... objDataObject.BeginTransaction(IsolationLevel.ReadCommitted); objDataObject.ExecuteNonQuery(objCommand); objDataObject.CommitTransaction(); objDataObject.CloseConnection(); } So indeed, a very thin wrapper around SqlClient, SqlConnection etc. I want to run several stored procs in a transaction, and the wrapper class doesn't allow me access to the SqlTransaction so I can't pass it from one component to the next. This led me to use a TransactionScope: using (TransactionScope tx1 = new TransactionScope(TransactionScope.RequiresNew)) { method1(); method2(); method3(); tx1.Complete(); } When I do this, the DTC gets involved, and unfortunately our webservers don't have "allow remote clients" enabled in the MSDTC settings-- getting IT to allow that will be a fight. I'd love to avoid DTC becoming involved but can I do it? Can I leave out the transactional calls in methods1-3() and just let the TransactionScope figure it all out?
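
    A hedged sketch of the usual way to keep things local (the method signatures are adjusted hypothetically): escalation to the DTC is typically triggered when an additional connection enlists in the same TransactionScope, so sharing one open SqlConnection across the three calls keeps it a lightweight SQL transaction.

        // Sketch: one connection, opened inside the scope, passed to each method
        // so only a single resource manager ever enlists.
        using (var tx1 = new TransactionScope(TransactionScopeOption.RequiresNew))
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();   // enlists in the ambient transaction exactly once

            method1(connection);
            method2(connection);
            method3(connection);

            tx1.Complete();
        }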

    Read the article

  • c# delegate and abstract class

    - by BeraCim
    Hi all: I currently have 2 concrete methods in 2 abstract classes. One class contains the current method, while the other contains the legacy method. E.g. // Class #1 public abstract class ClassCurrent<T> : BaseClass<T> where T : BaseNode, new() { public List<T> GetAllRootNodes(int i) { //some code } } // Class #2 public abstract class MyClassLegacy<T> : BaseClass<T> where T : BaseNode, new() { public List<T> GetAllLeafNodes(int j) { //some code } } I want the corresponding method to run in their relative scenarios in the app. I'm planning to write a delegate to handle this. The idea is that I can just call the delegate and write logic in it to handle which method to call depending on which class/project it is called from (at least thats what I think delegates are for and how they are used). However, I have some questions on that topic (after some googling): 1) Is it possible to have a delegate that knows the 2 (or more) methods that reside in different classes? 2) Is it possible to make a delegate that spawns off abstract classes (like from the above code)? (My guess is a no, since delegates create concrete implementation of the passed-in classes) 3) I tried to write a delegate for the above code. But I'm being technically challenged: public delegate List GetAllNodesDelegate(int k); GetAllNodesDelegate del = new GetAllNodesDelegate(ClassCurrent.GetAllRootNodes); I got the following error: An object reference is required for the non-static field, method, property ClassCurrent<BaseNode>.GetAllRootNodes(int) I might have misunderstood something... but if I have to manually declare a delegate at the calling class, AND to pass in the function manually as above, then I'm starting to question whether delegate is a good way to handle my problem. Thanks.

    Read the article

  • How to get the output of an XslCompiledTransform into an XmlReader?

    - by Graham Clark
    I have an XslCompiledTransform object, and I want the output in an XmlReader object, as I need to pass it through a second stylesheet. I'm getting a bit confused - I can successfully transform some XML and read it using either a StreamReader or an XmlDocument, but when I try an XmlReader, I get nothing. In the example below, stylesheet is my XslCompiledTransform object. The first two Console.WriteLine calls output the correct transformed XML, but the third call gives no XML. I'm guessing it might be that the XmlTextReader is expecting text, so maybe I need to wrap this in a StreamReader..? What am I doing wrong? MemoryStream transformed = new MemoryStream(); stylesheet.Transform(input, args, transformed); transformed.Position = 0; StreamReader s = new StreamReader(transformed); Console.WriteLine("s = " + s.ReadToEnd()); // writes XML transformed.Position = 0; XmlDocument doc = new XmlDocument(); doc.Load(transformed); Console.WriteLine("doc = " + doc.OuterXml); // writes XML transformed.Position = 0; XmlReader reader = new XmlTextReader(transformed); Console.WriteLine("reader = " + reader.ReadOuterXml()); // no XML written
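
    A hedged sketch of what appears to be missing: a freshly created XmlReader is positioned before the first node, so ReadOuterXml() returns an empty string until the reader has been advanced; rewinding the stream and calling MoveToContent() (or Read()) first should produce the document element.

        // Sketch: rewind, create the reader, advance to the root element,
        // then ask for its outer XML.
        transformed.Position = 0;
        using (XmlReader reader = XmlReader.Create(transformed))
        {
            reader.MoveToContent();   // moves the reader onto the document element
            Console.WriteLine("reader = " + reader.ReadOuterXml());
        }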

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat-file parser that can perform different data-scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating. lines_from_file.each do |line| line.match(/some regex/).each do |sub_str| # Process substring into useful format # EG1: Simple gsub() call # EG2: Custom function call to do complex scrubbing # and matching, emitting results to array # EG3: Loop to match opening/closing/nested brackets # or other delimiters and emit results to array end # Add processed lines to a buffer as SQL insert statement @buffer << PREPARED INSERT STATEMENT # Flush buffer when "buffer size limit reached" or "end of file" if sql_buffer_full || last_line_reached @dbc.insert(SQL INSERTS FROM BUFFER) @buffer = nil end end I am familiar with Proc/Lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest. Over to you. Thanks in advance :D
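
    A rough sketch of one DRY-er shape (method and argument names invented): the per-line matching and per-match scrubbing arrive as callables, while the buffering and bulk flushing live in a single reusable method.

        # Sketch: matcher and scrubber are procs/lambdas supplied by the caller;
        # buffering and the bulk insert are written once.
        def bulk_import(lines, dbc, matcher, scrubber, buffer_limit = 500)
          buffer = []
          lines.each do |line|
            matcher.call(line).each do |sub_str|
              buffer << scrubber.call(sub_str)   # each result is a prepared INSERT
            end
            if buffer.size >= buffer_limit
              dbc.insert(buffer.join(";\n"))
              buffer.clear
            end
          end
          dbc.insert(buffer.join(";\n")) unless buffer.empty?
        end

    A call might then look like bulk_import(lines_from_file, @dbc, lambda { |l| l.scan(/some regex/) }, lambda { |s| build_insert(s) }), with build_insert standing in for whatever scrubbing a particular caller needs.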

    Read the article

  • XSLT question: how to transform XML when the XSLT is in a file but the object is in memory?

    - by JL
    I have a function that takes 2 parameters : 1 = XML file, 2 = XSLT file, then performs a transformation and returns the resulting HTML. Here is the function: /// <summary> /// Will apply an XSLT style to any XML file and return the rendered HTML. /// </summary> /// <param name="xmlFileName"> /// The file name of the XML document. /// </param> /// <param name="xslFileName"> /// The file name of the XSL document. /// </param> /// <returns> /// The rendered HTML. /// </returns> public string TransformXml(string xmlFileName, string xslFileName) { var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None }; var xd = new XmlDocument(); xd.Load(xtr); var xslt = new System.Xml.Xsl.XslCompiledTransform(); xslt.Load(xslFileName); var stm = new MemoryStream(); xslt.Transform(xd, null, stm); stm.Position = 1; var sr = new StreamReader(stm); xtr.Close(); return sr.ReadToEnd(); } I want to change the function not to accept a file name, but rather a strongly typed object de-serialized (now in the form of a variable). Is this possible? So keep the xslt coming from a file, but the xml input should be a the serialized xml of the object I pass, and I want to do this without file system IO.
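
    A hedged sketch, assuming the object is serializable with XmlSerializer: serialize it into a MemoryStream, wrap that in an XmlReader, and hand the reader to XslCompiledTransform, so no XML touches the file system. (As an aside, the stm.Position = 1 in the original looks as though it was meant to be 0; as written it drops the first byte of the output.)

        // Sketch: transform an in-memory object without writing any files.
        public string TransformObject<T>(T source, string xslFileName)
        {
            var xslt = new System.Xml.Xsl.XslCompiledTransform();
            xslt.Load(xslFileName);

            using (var xmlStream = new MemoryStream())
            using (var output = new MemoryStream())
            {
                new XmlSerializer(typeof(T)).Serialize(xmlStream, source);
                xmlStream.Position = 0;

                using (var reader = XmlReader.Create(xmlStream))
                {
                    xslt.Transform(reader, null, output);
                }

                output.Position = 0;
                return new StreamReader(output).ReadToEnd();
            }
        }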

    Read the article

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this is 1) reduce timeout to a low value or 2) query connections for when they will next update and generate an adequate timeout value. However if you refer to 'Select Law' in 'man 2 select_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here: #!/usr/bin/python import time import socket import asyncore # in seconds UPDATE_PERIOD = 4.0 class Channel(asyncore.dispatcher): def __init__(self, sock, sck_map): asyncore.dispatcher.__init__(self, sock=sock, map=sck_map) self.last_update = 0.0 # should update immediately self.send_buf = '' self.recv_buf = '' def writable(self): return len(self.send_buf) > 0 def handle_write(self): nbytes = self.send(self.send_buf) self.send_buf = self.send_buf[nbytes:] def handle_read(self): print 'read' print 'recv:', self.recv(4096) def handle_close(self): print 'close' self.close() # added for variable timeout def update(self): if time.time() >= self.next_update(): self.send_buf += 'hello %f\n'%(time.time()) self.last_update = time.time() def next_update(self): return self.last_update + UPDATE_PERIOD class Server(asyncore.dispatcher): def __init__(self, port, sck_map): asyncore.dispatcher.__init__(self, map=sck_map) self.port = port self.sck_map = sck_map self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind( ("", port)) self.listen(16) print "listening on port", self.port def handle_accept(self): (conn, addr) = self.accept() Channel(sock=conn, sck_map=self.sck_map) # added for variable timeout def update(self): pass def next_update(self): return None sck_map = {} server = Server(9090, sck_map) while True: next_update = time.time() + 30.0 for c in sck_map.values(): c.update() # <-- fill write buffers n = c.next_update() #print 'n:',n if n is not None: next_update = min(next_update, n) _timeout = max(0.1, next_update - time.time()) asyncore.loop(timeout=_timeout, count=1, map=sck_map)

    Read the article

  • Improving field get and set performance with ASM

    - by ng
    I would like to avoid reflection in an open source project I am developing. Here I have classes like the following. public class PurchaseOrder { @Property private Customer customer; @Property private String name; } I scan for the @Property annotation to determine what I can set and get from the PurchaseOrder reflectively. There are many such classes all using java.lang.reflect.Field.get() and java.lang.reflect.Field.set(). Ideally I would like to generate for each property an invoker like the following. public interface PropertyAccessor<S, V> { public void set(S source, V value); public V get(S source); } Now when I scan the class I can create a static inner class of PurchaseOrder like so. static class customer_Field implements PropertyAccessor<PurchaseOrder, Customer> { public void set(PurchaseOrder order, Customer customer) { order.customer = customer; } public Customer get(PurchaseOrder order) { return order.customer; } } With these I totally avoid the cost of reflection. I can now set and get from my instances with native performance. Can anyone tell me how I would do this. A code example would be great. I have searched the net for a good example but can find nothing like this. The ASM examples are pretty poor also. The key here is that I have an interface that I can pass around. So I can have various implementations, perhaps one with Java Reflection as a default, one with ASM, and maybe one with Javassist? Any help would be greatly appreciated.

    Read the article

  • converting a timestring to a duration

    - by radman
    Hi, at the moment I am trying to read in a formatted time string and create a duration from it. I am currently using the boost date_time time_duration class to read and store the value. boost date_time provides a function, time_duration duration_from_string(std::string), that creates a time_duration from a time string and accepts appropriately formatted strings ("[-]h[h][:mm][:ss][.fff]"). This works fine if you supply a correctly formatted time string. However, if you submit something invalid like "ham_sandwich" or "100", you are still returned a time_duration, but it is not valid; specifically, if you try to pass it to a standard output stream an assertion occurs. My question is: does anyone know how to test the validity of a boost time_duration? Failing that, can you suggest another method of reading a time string and getting a duration from it? Note: I have tried the obvious testing methods that time_duration provides (is_not_a_date_time(), is_special(), etc.) and they don't pick up that there is an issue. Using boost 1.38.0.
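
    A hedged sketch of one workaround (the regex only approximates the documented "[-]h[h][:mm][:ss][.fff]" shape and may need tightening): validate the string before handing it to duration_from_string, since the objects returned for garbage input are not flagged by is_not_a_date_time() or is_special().

        #include <string>
        #include <boost/regex.hpp>
        #include <boost/date_time/posix_time/posix_time.hpp>

        // Sketch: pre-validate the text, then parse.
        bool parse_duration(const std::string& s, boost::posix_time::time_duration& out)
        {
            static const boost::regex fmt("-?\\d{1,2}(:\\d{2})?(:\\d{2})?(\\.\\d+)?");
            if (!boost::regex_match(s, fmt))
                return false;                 // "ham_sandwich" and "100" stop here
            out = boost::posix_time::duration_from_string(s);
            return true;
        }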

    Read the article

  • How to receive Email in JEE application

    - by Hank
    Obviously it's not so difficult to send out emails from a JEE application via JavaMail. What I am interested in is the best pattern to receive emails (notification bounces, mostly)? I am not interested in IMAP/POP3-based approaches (polling the inbox) - my application shall react to inbound emails. One approach I could think of would be Keep existing MTA (postfix on linux in my case) - ops team already knows how to configure / operate it For every mail that arrives, spawn a Java app that receives the data and sends it off via JMS. I could do this via an entry in /etc/aliases like myuser: "|/path/to/javahelper" with javahelper calling the Java app, passing STDIN along. MDB (part of JEE application) receives JMS message, parses it, detects bounce message and acts accordingly. Another approach could be Open a listening network socket on port 25 on the JEE application container. Associate a SessionBean with the socket. Bean is part of JEE application and can parse/detect bounces/handle the messages directly. Keep existing MTA as inbound relay, do all its security/spam filtering, but forward emails to myuser (that pass the filter) to the JEE application container, port 25. The first approach I have done before (albeit in a different language/setup). From a performance and (perceived) cleanliness point of view, I think the second approach is better, but it would require me to provide a proper SMTP transport implementation. Also, I don't know if it's at all possible to connect a network socket with a bean... What is your recommendation? Do you have details about the second approach?

    Read the article
