Search Results

Search found 962 results on 39 pages for 'consumer'.

Page 33/39

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application: Silverlight for the client, WCF and ASP.NET/C# on the server, and SQL Server for storage. The data model requires some flexibility per user (for example, the ability to add custom properties and define validation rules for them), and a hybrid EAV/CR persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF, and similarly to allow the client to bind to the resulting data (DataGrid is a key UI element). Admission: I don't yet know enough about WCF to understand whether it supports ExpandoObject directly, but I suspect it will.

    Options: I started off looking at WCF RIA Services, but quickly discovered they're heavily dependent on both static type data and compile-time code generation. Neither of these appeals. The options I'm considering are:

    1. Use WCF RIA Services and pass the data over the network directly in EAV form (i.e. a Dictionary), handling the binding issue purely on the client side (like this).
    2. Use a dynamic language (probably IronPython) on both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and to transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment).
    3. Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations of that approach might be yet.

    I'm interested in comments on these approaches as well as alternative approaches that might solve the problem.

    Other notes: The Silverlight client will not be the only consumer of the service, which makes me slightly uncomfortable with option 1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, assume we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
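    For what it's worth, DataContractSerializer handles generic Dictionary<TKey, TValue> natively, which is part of what makes option 1 feasible on the wire. A minimal sketch of that shape (type and member names are hypothetical, and values are flattened to strings for simplicity):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Hypothetical contract illustrating option 1: the entity crosses the
        // wire in raw EAV form and the client handles binding itself.
        [DataContract]
        public class EavEntity
        {
            [DataMember]
            public int Id { get; set; }

            // DataContractSerializer supports Dictionary<TKey, TValue> out of
            // the box, so name/value pairs can ship without static types.
            [DataMember]
            public Dictionary<string, string> Attributes { get; set; }
        }

        [ServiceContract]
        public interface IEntityService
        {
            [OperationContract]
            EavEntity GetEntity(int id);
        }

    On the Silverlight side, the dictionary would then need to be projected into whatever binding-friendly shape the DataGrid requires, which is exactly the client-side work option 1 accepts.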

    Read the article

  • Why won't VS2010 RC use my existing types when I add a service reference?

    - by Johan Driessen
    I have a huge problem getting service references in VS2010 RC to use existing assemblies. Even though I have a class library with all the data contracts (classes marked with DataContract and properties with DataMember) that is shared between the service project and the consuming project (which is a class library), when I add a service reference the data contracts are regenerated within the service reference instead of reusing the existing types. When I was using VS2010 beta 2 this worked fine, and I have existing service references using the very same data contracts. But if I add a new service reference, or even update an old one, it won't use the existing types anymore. I have made a mini test solution, with one service, one data contract type and one console app as a consumer (all in the same solution), and there it seems to work, but that's no great comfort to me. Is there any way to see why it can't use the existing types?

    Edit, to clarify: it works to generate the proxy classes with svcutil.exe, pointing to the data contracts dll, like this:

        svcutil.exe http://localhost/MyService.svc /reference:[Path To DataContracts]\DataContracts.dll /n:*,MyProject.MyServiceReference /ct:System.Collections.Generic.List`1

    The question is: what possible reason could there be for Visual Studio to generate its own data contracts instead of using the existing ones, even though the "reuse" checkbox is checked and the data contracts assembly is referenced?

    Read the article

  • core dump during std::_List_node_base::unhook()

    - by Ron
    I have a program where std::list is used. The program uses threads which act on the std::list as producers and consumers. When a message is dealt with by the consumer, it is removed from the list using pop_front(). But during pop_front() there is a core dump. The gdb trace is below. Could you help me get some insight into this issue?

        (gdb) bt full
        #0  0xf7531d7b in std::_List_node_base::unhook () from /usr/lib/libstdc++.so.6
            No symbol table info available.
        #1  0x0805c600 in std::list<myMsg>::_M_erase (this=0x806b08c, __position={_M_node = 0x8075308}) at /opt/target/usr/include/c++/4.2.0/bits/stl_list.h:1169
            __n = (class std::_List_node<myMsg> *) 0x0
        #2  0x0805c6af in std::list<myMsg>::pop_front (this=0x806b08c) at /opt/target/usr/include/c++/4.2.0/bits/stl_list.h:750
            No locals.
        #3  0x0805afb6 in Base::run () at ../../src/Base.cc:342
            nSentBytes = 130
            tmpnm = {_vptr.myMsg = 0x80652c0, m_msg = 0x8075140 "{0130,MSG_TYPE=ND_FUNCTION,ORG_PNAME=P01vm01Ax,FUNCTION=LOG,PARAM_CNT=3,DATETIME=06/12/2010 02:59:26.187,LOGNAME=N,ENTRY=Debug 0 }", m_from = 0x8096ee0 "P01vm01Ax", m_to = 0x0, static m_logged = false, static m_pLogMutex = {__data = {__lock = 0, __count = 0, __owner = 0, __kind = 0, __nusers = 0, {__spins = 0, __list = {__next = 0x0}}}, __size = '\0' , __align = 0}}
            newMsg = {_vptr.myMsg = 0x80652c0, m_msg = 0x0, m_from = 0x0, m_to = 0x0, static m_logged = false, static m_pLogMutex = {__data = {__lock = 0, __count = 0, __owner = 0, __kind = 0, __nusers = 0, {__spins = 0, __list = {__next = 0x0}}}, __size = '\0' , __align = 0}}
            strBuffer = "{0440,MSG_TYPE=NG_FUNCTION,ORG_PNAME=mach01./opt/abc/VAvsk/abc/comp/DML/gendrs.pl.17560,DST_PNAME=P01vm01Ax,FUNCTION=DRS_REPLICATE,CAUSE_DML_ERROR=N,CORRUPT_DATA=N,CORRUPT_HEADER=N,DEBUG=Y,EXTENDED_RU"...
            fds = {{fd = 5, events = 1, revents = 0}}
            retval = 0
            iWaitTime = 0
        #4  0x0805b277 in startRun () at ../../src/Base.cc:454
            No locals.
        #5  0xf7effe7b in start_thread () from /lib/libpthread.so.0
            No symbol table info available.
        #6  0xf744d82e in clone () from /lib/libc.so.6
            No symbol table info available.

    Read the article

  • Transfer data between C++ classes efficiently

    - by David
    Hi, I need some help... I have 3 classes: a Manager, which holds 2 pointers, one to class A and another to class B. A does not know about B and vice versa. A does some calculations and at the end puts 3 floats into the clipboard. Next, B pulls the 3 floats from the clipboard and does its own calculations. This loop is managed by the Manager and repeats many times (iteration after iteration).

    My problem: now class A produces a vector of floats which class B needs. This vector can have more than 1000 values, and I don't want to use the clipboard to transfer it to B, as that will become a time consumer, even a bottleneck, since this behavior repeats step by step. A simple solution is for B to know A (set a pointer to A). Another is to transfer a pointer to the vector via the Manager. But I'm looking for something different, more object-oriented, that won't break the existing separation between A and B. Any ideas? Many thanks, David

    Read the article

  • How to implement an ID field on a POCO representing an Identity field in MS SQL?

    - by Dr. Zim
    If I have a Domain Model that has an ID that maps to a SQL identity column, what does the POCO look like that contains that field?

    Candidate 1: Allows anyone to set and get the ID. I don't think we want anyone setting the ID except the Repository, from the SQL table.

        public class Thing
        {
            public int ID { get; set; }
        }

    Candidate 2: Allows someone to set the ID upon creation, but we won't know the ID until after we create the object (a factory creates a blank Thing object where ID = 0 until we persist it). How would we set the ID after persisting?

        public class Thing
        {
            public Thing() : this(0) { }
            public Thing(int id) { this._ID = id; }

            private int _ID;
            public int ID { get { return this._ID; } }
        }

    Candidate 3: Methods to set the ID? Somehow we would need to allow the Repository to set the ID without allowing the consumer to change it. Any ideas? Is this barking up the wrong tree? Do we send the object to the Repository, save it, throw it away, then create a new object from the loaded version and return that as a new object?
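    One hedged sketch of what candidate 3 could look like (names illustrative): give the property an internal setter, so only the repository's assembly (or assemblies granted InternalsVisibleTo) can assign the identity once the database generates it.

        // One possible shape for candidate 3: the ID can be read by any
        // consumer but assigned only from the repository's assembly.
        public class Thing
        {
            public int ID { get; internal set; }
        }

        public class ThingRepository
        {
            public void Save(Thing thing)
            {
                // INSERT the row, then read the generated identity back,
                // e.g. via SELECT SCOPE_IDENTITY() in the same batch.
                int newId = 42;   // stand-in for the database-generated value
                thing.ID = newId; // compiles here because the setter is internal
            }
        }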

    Read the article

  • Tips on designing a .NET API for future use with F#

    - by Drew Noakes
    I'm in the process of designing a .NET API to allow developers to create RoboCup agents for the 3D simulated soccer league. I'm pretty happy with how the API works with C# code; however, I would like to use this project to improve my F# skills (which are currently based on reading rather than practice). So I would like to ask what kinds of things I should consider when designing an API that is to be consumed by both C# and F# code. Some points:

    - I make fairly heavy use of matrix and vector math. These are currently immutable classes/structs.
    - The API currently defines a few interfaces which the consumer implements (e.g. IAgent), using instances of their implementations (e.g. MyAgent) to construct other API classes (e.g. new Client(myAgent)).
    - The API fires events.
    - The API exposes a few delegate types.
    - The API includes several enums.

    I'd like to release a version of the API as soon as possible, and don't want to make major changes to it later if I realise it's too difficult to work with from F#. Any advice is appreciated.
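    For concreteness, a rough sketch of the surface described above, with all members hypothetical: immutable value types and interface-based plug-in points tend to translate well to F#, which favors immutability and can implement interfaces via object expressions, and standard .NET events are consumable there as first-class values (client.BallMoved.Add(fun args -> ...)).

        using System;

        // Immutable vector type: F# code can use it without fighting mutation.
        public struct Vector3
        {
            public readonly double X, Y, Z;
            public Vector3(double x, double y, double z) { X = x; Y = y; Z = z; }

            public static Vector3 operator +(Vector3 a, Vector3 b)
            {
                return new Vector3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
            }
        }

        // Plug-in point: an F# consumer can implement this with an object expression.
        public interface IAgent
        {
            void Think(Vector3 ballPosition); // hypothetical callback
        }

        public class Client
        {
            private readonly IAgent _agent;
            public Client(IAgent agent) { _agent = agent; }

            // Standard event pattern; F# sees this as an IEvent it can Add to.
            public event EventHandler BallMoved;

            protected void OnBallMoved()
            {
                EventHandler handler = BallMoved;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }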

    Read the article

  • OAuth problem (App Engine)

    - by portoalet
    Hi, I am trying to pull a user's documents data from Google Docs using OAuth, but I cannot understand how to do it:

    - What's the purpose of oauth_verifier?
    - How do I get the access token secret?
    - If I try to use DocsService below, then I get a "server error".
    - Is there a clear tutorial for this? I cannot find any atm.

        String oauth_verifier = req.getParameter("oauth_verifier");
        String oauth_token = req.getParameter("oauth_token");
        String oauthtokensecret = req.getParameter("oauth_token_secret");

        GoogleOAuthParameters oauthparam = new GoogleOAuthParameters();
        oauthparam.setOAuthConsumerKey("consumer key");
        oauthparam.setOAuthConsumerSecret("secret");
        oauthparam.setOAuthToken(oauth_token);
        oauthparam.setOAuthTokenSecret(oauthtokensecret);
        oauthparam.setOAuthVerifier(oauth_verifier);

        OAuthHmacSha1Signer signer = new OAuthHmacSha1Signer();
        GoogleOAuthHelper oauthhelper = new GoogleOAuthHelper(signer);
        String accesstoken = "";
        String accesstokensecret = "";
        try {
            oauthhelper.getUnauthorizedRequestToken(oauthparam);
            accesstoken = oauthhelper.getAccessToken(oauthparam);
            accesstokensecret = oauthparam.getOAuthTokenSecret();
            // DocsService client = new DocsService("yourCompany-YourAppName-v1");
            ...

    Read the article

  • Anyone know exactly which JMS messages will be redelivered in CLIENT_ACKNOWLEDGE mode if the client crashes?

    - by user360612
    The spec says "Acknowledging a consumed message automatically acknowledges the receipt of all messages that have been delivered by its session" - but what I need to know is what it means by 'delivered'. For example, if I call consumer.receive() 6 times, and then call .acknowledge on the 3rd message - is it (a) just the first 3 messages that are ack'd, or (b) all 6? I'm really hoping it's option (a), i.e. messages after the one you called acknowledge on WILL be redelivered; otherwise it's hard to see how you could prevent message loss in the event of my receiver process crashing before I've had a chance to persist and acknowledge the messages. But the spec is worded such that it's not clear. I get the impression the authors of the JMS spec considered broker failure, but didn't spend too long thinking about how to protect against client failure :o( Anyway, I've been able to test with SonicMQ and found that it implements (a), i.e. messages 'received' later than the message you call .ack on DO get redelivered in the event of a crash, but I'd love to know how other people read the standard, and if anyone knows how any other providers have implemented CLIENT_ACKNOWLEDGE (i.e. what the 'de facto' standard is). Thanks, Ben

    Read the article

  • Is nested synchronized block necessary?

    - by Dan
    I am writing a multithreaded program, and I have a method that has nested synchronized blocks. I was wondering if I need the inner sync or if just the outer sync is good enough.

        public class Tester {
            private BlockingQueue<Ticket> q = new LinkedBlockingQueue<>();
            private ArrayList<Long> list = new ArrayList<>();

            public void acceptTicket(Ticket p) {
                try {
                    synchronized (q) {
                        q.put(p);
                        synchronized (list) {
                            if (list.size() < 5) {
                                list.add(p.getSize());
                            } else {
                                list.remove(0);
                                list.add(p.getSize());
                            }
                        }
                    }
                } catch (InterruptedException ex) {
                    Logger.getLogger(Consumer.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    EDIT: This isn't a complete class, as I am still working on it. But essentially I am trying to emulate a ticket machine. The ticket machine maintains a list of tickets in the BlockingQueue q. Whenever a client adds a ticket to the machine, the machine also keeps track of the price of the last 5 tickets (the ArrayList list).

    Read the article

  • What's the best way to resolve this scope problem?

    - by Peter Stewart
    I'm writing a program in Python that uses genetic techniques to optimize expressions. Constructing and evaluating the expression tree is the main time consumer, as it can happen billions of times per run. So I thought I'd learn enough C++ to write it and then incorporate it into Python using Cython or ctypes. I've done some searching on Stack Overflow and learned a lot. This code compiles, but leaves the pointers dangling. I tried 'this_node = new Node(...)'; it didn't seem to work, and I'm not at all sure how I'd delete all the references, as there would be hundreds. I'd like to use variables that stay in scope, but maybe that's not the C++ way. What is the C++ way?

        class Node
        {
        public:
            char *cargo;
            int depth;
            Node *left;
            Node *right;
        };

        Node make_tree(int depth)
        {
            depth--;
            if (depth <= 0) {
                Node tthis_node("value", depth, NULL, NULL);
                return tthis_node;
            } else {
                Node this_node("operator", depth, &make_tree(depth), &make_tree(depth));
                return this_node;
            }
        }

    Read the article

  • How can I make keyword order more relevant in my search?

    - by Atomiton
    In my database, I have a keywords field that stores a comma-delimited list of keywords. For example, a Shrek doll might have the following keywords:

        ogre, green, plush, hero, boys' toys

    A "Beanie Baby" doll (that happens to be an ogre) might have:

        beanie baby, kids toys, beanbag toys, soft, infant, ogre

    (That's a completely contrived example.) What I'd like to do is: if the consumer searches for "ogre", I'd like the Shrek doll to come up higher in the search results. My content administrator feels that if the keyword is earlier in the list, it should get a higher ranking. (This makes sense to me, and it makes it easy for me to let them control the search result relevance.) Here's a simplified query:

        SELECT p.ProductID AS ContentID
             , p.ProductName AS Title
             , p.ProductCode AS Subtitle
             , 100 AS Rank
             , p.ProductKeywords AS Keywords
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )

    I'm thinking something along the lines of replacing the Rank with:

        , 200 - INDEXOF(@SearchTerm) AS Rank

    This "should" rank the keyword results by their relevance. I know INDEXOF isn't a SQL command... but it's something LIKE that I would like to accomplish. Am I approaching this the right way? Is it possible to do something like this? Does this make sense?

    Read the article

  • asyncore callbacks launching threads... ok to do?

    - by sbartell
    I'm unfamiliar with asyncore, and have very limited knowledge of asynchronous programming except for a few intro Twisted tutorials. I am most familiar with threads and use them in all my apps. One particular app uses a CouchDB database as its interface. This involves longpolling the db looking for changes and updates. The module I use for CouchDB is couchdbkit. It uses an asyncore loop to watch for these changes and send them to a callback. So, I figure from this callback is where I launch my worker threads. It seems a bit crude to mix asynchronous and threaded programming. I really like couchdbkit, but would rather not introduce issues into my program. So, my question is: is it safe to fire threads from an async callback? Here's some code...

        def dispatch(change):
            global jobs, db_url  # jobs is my queue
            db = Database(db_url)
            # change is an id to the document that changed;
            # I need to get the actual document (work order)
            work_order = db.get(change['id'])
            worker = Worker(work_order, db)
            # fire the thread
            jobs.append(worker)
            worker.start()
            return

        main()
        ...
        consumer.wait(cb=dispatch, since=update_seq, timeout=10000)  # wait constrains the async loop

    Read the article

  • Java getInputStream 400 errors

    - by Bill Szerdy
    When I contact a web service using a Java HttpUrlConnection, it just returns a 400 Bad Request (IOException). How do I get the XML information that the server is returning? It does not appear to be in the connection's getErrorStream, nor is it in any of the exception information. When I run the following PHP code against the same web service:

        <?php
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "https://www.myclientaddress.com/here/");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, "username=ted&password=scheckler&type=consumer&id=123456789&zip=12345");
        $result = curl_exec($ch);
        echo $result;
        ?>

    it returns the following:

        <?xml version="1.0" encoding="utf-8"?>
        <response>
            <status>failure</status>
            <errors>
                <error name="RES_ZIP">Zip code is not valid.</error>
                <error name="ZIP">Invalid zip code for residence address.</error>
            </errors>
        </response>

    so I know the information exists.

    Read the article

  • Which pdf elements could cause crashes?

    - by Felixyz
    This is a very general question, but it's based on a specific problem. I've created a pdf reader app for the iPad and it works fine, except for certain pdf pages which always crash the app. We now found out that the very same pages cause Safari to crash as well, so as I had started to suspect, the problem is somewhere in Apple's pdf rendering code. From what I have been able to see, the crashing pages cause the rendering libraries to start allocating memory like mad until the app is killed. I have nothing else to help me pinpoint what triggers this process. It doesn't necessarily happen with the largest documents, or the ones with the most shapes. In fact, we haven't found any parameter that helps us predict which pages will crash and which won't. Now we just discovered that running the pages through a consumer program that lets you merge docs gets rid of the problem, but I haven't been able to detect which attribute or element is the key. Changing documents by hand is also not an option for us in the long run; we need to run an automated process on our server. I'm hoping someone with deeper knowledge about the pdf file format will be able to point me in a reasonable direction to look for document features that could cause this kind of behavior. All I've found so far is something about JBIG2 images, and I don't think we have any of those.

    Read the article

  • identify accounts using zend-openid [HELP]

    - by tawfekov
    Hi! I have successfully implemented simple OpenID authentication with Gmail, Yahoo, myOpenID, AOL and many others, exactly as Stack Overflow does, but I have a problem with identifying accounts. For example:

        'openid_ns' => string 'http://specs.openid.net/auth/2.0' (length=32)
        'openid_mode' => string 'id_res' (length=6)
        'openid_op_endpoint' => string 'https://www.google.com/accounts/o8/ud' (length=37)
        'openid_response_nonce' => string '2010-05-02T06:35:40Z9YIASispxMvKpw' (length=34)
        'openid_return_to' => string 'http://dev.local/download/index/gettingback' (length=43)
        'openid_assoc_handle' => string 'AOQobUf_2jrRKQ4YZoqBvq5-O9sfkkng9UFxoE9SEBlgWqJv7Sq_FYgD7rEXiLLXhnjCghvf' (length=72)
        'openid_signed' => string 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle' (length=69)
        'openid_sig' => string 'knieEHpvOc/Rxu1bkUdjeE9YbbFX3lqZDOQFxLFdau0=' (length=44)
        'openid_identity' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)
        'openid_claimed_id' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)

    I thought that openid_claimed_id or openid_identity was identical for every account, but after some testing it turns out the claimed id is changeable by the users themselves in Yahoo's case, and randomly changing in Gmail's case. FYI, I am using Zend OpenID with a bit of modification on the consumer class, and I can't use SREG (Simple Registration Extension) because it currently uses the OpenID v1 specification :( I haven't read the OpenID specification, so if anybody knows, please help me understand how to identify OpenID accounts, i.e. how to know whether an account has signed in before or not.

    Read the article

  • Dealing with SQLException with Spring, Hibernate & Postgres

    - by mad
    Hi, I'm working on a project using HibernateDaoSupport for my DAOs, with Spring, spring-ws, Hibernate and Postgres, which will be used in a national application (meaning a lot of users). Currently, every exception from Hibernate is automatically transformed into a specific Spring DataAccessException. I have a table with a keyword column in the database and a unique constraint on the keywords: no duplicate keywords are allowed. I have found two ways to deal with that in the insert DAO:

    1. Check for the duplicate manually (with a select) prior to doing the insert. This means that the Spring transaction will need the SERIALIZABLE isolation level. The obvious drawback is that we now have 2 queries for a simple insert. Advantage: independent of the database.

    2. Let the insert go through, catch the SQLException, and convert it into a user-friendly message and error code for the final consumer of our web services. Spring has developed a way to translate specific exceptions into customized exceptions; see http://www.oracle.com/technology/pub/articles/marx_spring.html. In my case I would have a ConstraintViolationException. Ideally I would like to write a custom SQLExceptionTranslator to map the duplicate-word constraint in the database to a DuplicateWordException. But I can have many unique constraints on the same table, so I have to inspect the message of the SQLException to find the name of the constraint declared in the create table ("uq_duplicate-constraint", for example). Now I have a strong dependency on the database.

    Thanks in advance for your answers, and excuse my poor English (it is not my mother tongue).

    Read the article


  • BlackBerry Deployment Strategies

    - by cagreen
    I'm new to large-scale BB app deployment and I'm looking for some clarification on the various methods of deployment. Please bear with me, as I'm sure there is more to it than my naive view would lead me to believe. My app is very targeted to corporate users and requires a subscription to some additional services before it can be used. In other words, it's not targeted at the consumer market, so I'm not worried about people being unable to find it online. What do I need to be aware of when looking at deployment strategies? Any gotchas? From my understanding, my choices are:

    - App World: small upfront vendor fee; users can easily search for and find my app; billing handled by RIM; 4 licensing models (static, single, pool, dynamic), though I'm not sure I've seen enough info on the pool and dynamic models to fully appreciate how they might help me.
    - Download from my website: billing is handled by me. Can I enforce the number of licenses that are in use within an organization? Is this easier or harder for a user?
    - What else am I missing?

    Read the article

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP MVC order application in the Amazon (AWS) cloud, with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume. My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own C# application to process the incoming messages/events:

        MVC (@ Amazon) -> Event/POCO -> SQS -> QueueReader (@ my datacenter) -> DB

    Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN like I need. How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?
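    For the hand-rolled option, a minimal sketch of what the two SQS legs could look like with the AWS SDK for .NET (v3-style async API; the queue URL and message shape are illustrative, and NServiceBus is not involved here):

        using System.Threading.Tasks;
        using Amazon.SQS;
        using Amazon.SQS.Model;

        // The cloud-side MVC app serializes the order command and enqueues it;
        // a worker at the datacenter polls, persists, then deletes the message.
        public class OrderQueue
        {
            private static readonly IAmazonSQS Sqs = new AmazonSQSClient();
            private const string QueueUrl =
                "https://sqs.us-east-1.amazonaws.com/123456789012/orders"; // illustrative

            public static Task EnqueueAsync(string orderJson)
            {
                return Sqs.SendMessageAsync(new SendMessageRequest
                {
                    QueueUrl = QueueUrl,
                    MessageBody = orderJson
                });
            }

            public static async Task PollOnceAsync()
            {
                var response = await Sqs.ReceiveMessageAsync(new ReceiveMessageRequest
                {
                    QueueUrl = QueueUrl,
                    MaxNumberOfMessages = 10,
                    WaitTimeSeconds = 20 // long polling
                });

                foreach (var message in response.Messages)
                {
                    // Persist to the local database here; delete only on success.
                    await Sqs.DeleteMessageAsync(new DeleteMessageRequest
                    {
                        QueueUrl = QueueUrl,
                        ReceiptHandle = message.ReceiptHandle
                    });
                }
            }
        }

    Deleting the message only after a successful local write is what gives the store-and-forward behavior: if the datacenter side crashes mid-write, SQS redelivers the message after the visibility timeout.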

    Read the article

  • Rescuing a failed WCF call

    - by illdev
    Hello, I am happily using Castle's WcfFacility. From Monorail I know the handy concept of rescues - consumer-friendly results that often, but not necessarily, contain data about what went wrong. I am creating a Silverlight application right now, doing quite a few WCF service calls. All these requests return an implementation of:

        public class ServiceResponse
        {
            private string _messageToUser = string.Empty;
            private ActionResult _result = ActionResult.Success;

            public ActionResult Result // Success, Failure, Timeout
            {
                get { return _result; }
                set { _result = value; }
            }

            public string MessageToUser
            {
                get { return _messageToUser; }
                set { _messageToUser = value; }
            }
        }

        public abstract class ServiceResponse<TResponseData> : ServiceResponse
        {
            public TResponseData Data { get; set; }
        }

    If the service has trouble responding the right way, I would want the thrown exception to be intercepted and converted to the expected implementation. Based on the thrown exception, I would want to pass on a nice message. Here is what one of the service methods looks like:

        [Transaction(TransactionMode.Requires)]
        public virtual SaveResponse InsertOrUpdate(WarehouseDto dto)
        {
            var w = dto.Id > 0 ? _dao.GetById(dto.Id) : new Warehouse();
            w.Name = dto.Name;
            _dao.SaveOrUpdate(w);
            return new SaveResponse { Data = new InsertData { Id = w.Id } };
        }

    I need the thrown exception for the transaction to be rolled back, so I cannot actually catch it and return something else. Any ideas where I could hook in?
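    One hook WCF itself offers for this kind of rescue is IErrorHandler: it runs after the exception has propagated (so the transaction still rolls back) and lets you replace the fault that goes on the wire. A hedged sketch, reusing the ServiceResponse shape above with a simplified fault action:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        // Converts any unhandled exception into a FaultException carrying a
        // friendly ServiceResponse payload. By the time this runs, the thrown
        // exception has already done its transaction-rollback job.
        public class RescueErrorHandler : IErrorHandler
        {
            public bool HandleError(Exception error)
            {
                // Log here; returning true marks the exception as handled.
                return true;
            }

            public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
            {
                var response = new ServiceResponse
                {
                    Result = ActionResult.Failure,
                    MessageToUser = "The operation could not be completed."
                };
                var faultException = new FaultException<ServiceResponse>(response);
                MessageFault messageFault = faultException.CreateMessageFault();
                // The action URI below is illustrative.
                fault = Message.CreateMessage(version, messageFault,
                    "http://tempuri.org/ServiceResponseFault");
            }
        }

    The handler gets wired up through a service (or endpoint) behavior that adds it to each ChannelDispatcher's ErrorHandlers collection, which also keeps the rescue logic out of the individual service methods.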

    Read the article

  • Is dynamic form generation possible using PHP global variables?

    - by J M 4
    I am currently a fairly new programmer, but I am trying to build a registration page for a medical insurance idea we have, which captures individual information and subsequent pieces of information about that individual's sub-parts. In this case, it is a fight promoter enrolling his 15+ boxers for fight testing services. Right now, I have the site fully laid out to accept 7 fighters' worth of information. This is collected during the manager's enrollment. However, each fighter's information is passed and stored in session superglobals such as $_SESSION['F1Firstname'] and $_SESSION['F3SSN3']. The issue I am running into is this: I want to create a drop-down menu selector for the manager to add information for up to 20-30 fighters. Right now I use PHP to state:

        if ($_SESSION['Num_Fighters'] > 6) { ... }

    ... and then display the table form fields to collect consumer data. If I have to build hidden elements for 30 fighters AND provide JavaScript/PHP validation (yes, I am doing both), then I fear the file size of the document will be unnecessarily large for the manager who only wants to enroll 2 fighters. Can anybody help?

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++, which doesn't have it.

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
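    To make approach 3 concrete, a small C# sketch (Token and TokenKind are illustrative): yield return turns the lexer into a lazily evaluated token stream, with the compiler generating the state machine that C++ would force you to write by hand.

        using System.Collections.Generic;

        enum TokenKind { Number, Plus, EndOfInput }

        sealed class Token
        {
            public readonly TokenKind Kind;
            public readonly string Text;
            public Token(TokenKind kind, string text) { Kind = kind; Text = text; }
        }

        static class Lexer
        {
            // Approach 3: the parser pulls tokens one at a time, so nothing is
            // buffered beyond the single token in flight.
            public static IEnumerable<Token> Tokenize(string input)
            {
                int i = 0;
                while (i < input.Length)
                {
                    char c = input[i];
                    if (char.IsWhiteSpace(c)) { i++; }
                    else if (char.IsDigit(c))
                    {
                        int start = i;
                        while (i < input.Length && char.IsDigit(input[i])) i++;
                        yield return new Token(TokenKind.Number, input.Substring(start, i - start));
                    }
                    else if (c == '+') { i++; yield return new Token(TokenKind.Plus, "+"); }
                    else { throw new System.ArgumentException("Unexpected character: " + c); }
                }
                yield return new Token(TokenKind.EndOfInput, "");
            }
        }

    A recursive descent parser then just holds the IEnumerator<Token> from Tokenize(...).GetEnumerator() and calls MoveNext() whenever it wants the next token.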

    Read the article

  • Paginating requests to an API

    - by user332912
    I'm consuming (via urllib/urllib2) an API that returns XML results. The API always returns the total_hit_count for my query, but only allows me to retrieve results in batches of, say, 100 or 1000. The API stipulates that I need to specify a start_pos and end_pos for offsetting this, in order to walk through the results. Say the urllib request looks like "http://someservice?query='test'&start_pos=X&end_pos=Y". If I send an initial 'taster' query with the lowest data transfer, such as http://someservice?query='test'&start_pos=1&end_pos=1, in order to get back a result of, for conjecture, total_hits = 1234, I'd like to work out an approach to most cleanly request those 1234 results in batches of, again say, 100 or 1000 or... This is what I came up with so far, and it seems to work, but I'd like to know if you would have done things differently or if I could improve upon this:

        hits_per_page = 100  # or 1000 or 200 or whatever, adjustable
        total_hits = 1234    # retrieved with BSoup from 'taster query'
        base_url = "http://someservice?query='test'"

        startdoc_positions = [n for n in range(1, total_hits, hits_per_page)]
        enddoc_positions = [startdoc_position + hits_per_page - 1
                            for startdoc_position in startdoc_positions]

        for start, end in zip(startdoc_positions, enddoc_positions):
            if end > total_hits:
                end = total_hits
            print "url to request is:\n ",
            print "%s&start_pos=%s&end_pos=%s" % (base_url, start, end)

    P.S. I'm a long-time consumer of Stack Overflow, especially the Python questions, but this is my first question posted. You guys are just brilliant.

    Read the article

  • How can I refactor this to work without breaking the pattern horribly?

    - by SnOrfus
    I've got a base class object that is used for filtering. It's a template-method object that looks something like this:

        public class Filter
        {
            public void Process(User u, GeoRegion r, int countNeeded)
            {
                List<account> selected = this.Select(u, r, countNeeded);           // 1
                List<account> filtered = this.Filter(selected, u, r, countNeeded); // 2
                if (filtered.Count > 0) { /* do businessy stuff */ }               // 3
                if (filtered.Count < countNeeded)
                    this.SendToSuccessor(u, r, countNeeded - filtered.Count);      // 4
            }
        }

    Select(...) and Filter(...) are protected abstract methods implemented by the derived classes. Select(...) finds objects based on x criteria, and Filter(...) filters those selected further. If the remaining filtered collection has more than 1 object in it, we do some business stuff with it (unimportant to the problem here). SendToSuccessor(...) is called if there weren't enough objects found after filtering (it's a composite where the next class in succession is also derived from Filter but has different filtering criteria).

    All has been OK, but now I'm building another set of filters, which I was going to subclass from this. The filters I'm building, however, would require different params, and I don't want to just implement those methods and not use the params, or add the ones I need to the param list and have them unused in the existing filters. They still perform the same logical process, though. I also don't want to complicate the consumer code for this, which looks like:

        Filter f = new Filter1();
        Filter f2 = new Filter2();
        Filter f3 = new Filter3();
        f.Successor = f2;
        f2.Successor = f3;
        /* and so on, adding filters as successors to previous ones */

        foreach (User u in users)
        {
            foreach (GeoRegion r in regions)
            {
                f.Process(u, r, ##);
            }
        }

    How should I go about it?
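    One common way out, offered as a sketch rather than the answer: collapse the varying parameters into a context object, so Process keeps a single fixed signature and each filter family reads only the fields it cares about (FilterContext and its members are hypothetical, with Account standing in for the question's account type):

        using System;
        using System.Collections.Generic;

        public class User { }
        public class GeoRegion { }
        public class Account { }

        // Hypothetical parameter object; new filter families get new fields
        // here without disturbing the template method's signature.
        public class FilterContext
        {
            public User User;
            public GeoRegion Region;
            public int CountNeeded;
            public DateTime? EffectiveDate; // e.g. a field only the new filters use
        }

        public abstract class Filter
        {
            public Filter Successor;

            public void Process(FilterContext context)
            {
                List<Account> selected = Select(context);
                List<Account> filtered = ApplyFilter(selected, context);
                if (filtered.Count > 0) { /* do businessy stuff */ }
                if (filtered.Count < context.CountNeeded && Successor != null)
                {
                    context.CountNeeded -= filtered.Count;
                    Successor.Process(context);
                }
            }

            protected abstract List<Account> Select(FilterContext context);
            protected abstract List<Account> ApplyFilter(List<Account> selected, FilterContext context);
        }

    The consumer loop then builds one FilterContext per (user, region) pair, and the chain-wiring code stays exactly as it is.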

    Read the article

  • Are these saml request-response good enough?

    - by Ashwin
    I have set up single sign-on (SSO) for my services. All the services confirm the identity of the user with the identity provider (IdP); in my case I am also the IdP. In my SAML request, I have included the following:

    1. the level for which authentication is required
    2. the consumer URL
    3. the destination service URL
    4. the issuer

    Then I encrypt this message with the SP's (service provider's) private key and then with the IdP's public key, and send the request. The IdP, on receiving the request, first decrypts it with its own private key and then with the SP's public key. In the SAML response, I include:

    1. the destination URL
    2. the issuer
    3. the status of the response

    Is this good enough? Please give your suggestions.

    Read the article
