Search Results

Search found 7324 results on 293 pages for 'operations research'.


  • Providing custom database functionality to custom asp.net membership provider

    - by IrfanRaza
Hello friends, I am creating a custom membership provider for my ASP.NET application. I have also created a separate class, DBConnect, that provides database functionality such as executing SQL statements, executing stored procedures, and executing a stored procedure or query and returning a SqlDataReader. I create an instance of DBConnect in Session_Start of Global.asax and store it in the session. A static class then exposes that single instance throughout the application, so every ASP.NET page goes through one point for all database operations. I know I could write my own code to connect to and disconnect from the database and execute stored procedures inside the methods I need to override. Please look at the code below:

        public class SGI_MembershipProvider : MembershipProvider
        {
            ......
            public override bool ChangePassword(string username, string oldPassword, string newPassword)
            {
                if (!ValidateUser(username, oldPassword))
                    return false;

                ValidatePasswordEventArgs args = new ValidatePasswordEventArgs(username, newPassword, true);
                OnValidatingPassword(args);
                if (args.Cancel)
                {
                    if (args.FailureInformation != null)
                        throw args.FailureInformation;
                    else
                        throw new Exception("Change password canceled due to new password validation failure.");
                }
                .....
                // Database connectivity and code execution to change the password.
            }
            ....
        }

    MY PROBLEM - I need the database work inside all these overridden methods to go through that same single database point described above. That means passing the DBConnect instance that lives in the session into this class so I can call its methods. Could anyone suggest a solution? There might be better techniques I am not aware of, and the approach I am using might be wrong. Your suggestions are always welcome. Thanks for sharing your valuable time.
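
    A membership provider is created by ASP.NET before any session exists, so pulling the DBConnect instance out of Session is fragile. One alternative, sketched below purely as an illustration (the DBConnect constructor and ExecuteNonQuery helper shown are assumptions, not the poster's actual API), is to hand the provider a factory at application start-up and build a short-lived DBConnect inside each overridden method:

        using System;
        using System.Configuration;
        using System.Data.SqlClient;
        using System.Web.Security;

        public class SGI_MembershipProvider : MembershipProvider
        {
            // Set once in Application_Start, so no session is involved.
            public static Func<DBConnect> ConnectionFactory { get; set; }

            public override bool ChangePassword(string username, string oldPassword, string newPassword)
            {
                // ... validation exactly as in the question ...
                DBConnect db = ConnectionFactory();        // one helper per call
                db.ExecuteNonQuery("dbo.ChangePassword",   // hypothetical stored procedure
                    new SqlParameter("@user", username),
                    new SqlParameter("@password", newPassword));
                return true;
            }

            // ... remaining MembershipProvider overrides omitted for brevity ...
        }

        // In Global.asax / Application_Start:
        // SGI_MembershipProvider.ConnectionFactory = () =>
        //     new DBConnect(ConfigurationManager.ConnectionStrings["Main"].ConnectionString);

    The same factory can also back the existing static helper class, so pages keep their single entry point while the provider no longer depends on Session.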

    Read the article

  • Word forms with too many ActiveX checkboxes load slowly.

    - by Luke
Hi there, my company's software product has a feature that allows users to generate forms from Word templates. The program auto-fills some fields from the SQL database, and the user can fill in whatever other data they need. So we have a .dotx template that holds the design of the form, and the user gets a .docx file to fill out when they call it from our program. The problem is that some of our users find the forms take an exceptionally long time to open and, once open, are so slow to respond (scrolling around, etc.) that they are unusable. In my investigation so far, I've found that the problem systems are ones with lower-powered CPUs (unfortunately it happens on systems above our stated requirements), and the forms that cause trouble are the ones with a large number of ActiveX-style checkboxes. I verified that reducing the number of ActiveX checkboxes fixes the loading problems. So I have the following questions (we're using Word 2007):

    1) Is there any way to configure Word, or some other setting, so that opening a form with lots of ActiveX checkboxes is less of a strain? Any way of speeding up Word's opening?
    2) Using legacy-style checkboxes instead of the ActiveX ones makes the forms load fine, but it looks like the user has to double-click the checkbox and change Default Value to Checked. Is there a way to configure them so a single click ticks the box? The name "Legacy Forms" also worries me - does "legacy" mean a future version of Word might stop loading these checkboxes?
    3) Yes, it became clear after a little research that Word is not the right tool for forms like this. InfoPath seems to be exactly what we should have been using all along, but I wasn't involved in the decision making or the development of these forms - I've just been tasked with finding a solution.

    I'd appreciate answers to any of these, or any other ideas for solving this problem. Thanks

    Read the article

  • Doing a POST to a Service Operation in ADO.NET data services

    - by DataServices123
Is it possible to POST to a service operation defined in an ADO.NET Data Service (it is decorated with WebInvoke)? I had no problem calling the service operation as an HTTP GET. However, when I switched to doing a POST, the stack trace consistently comes back with "Parameter cannot be NULL". I am using the jQuery syntax below and sending the POST as JSON (I'm not including the $.ajax call, but the parameter names line up exactly). It would be possible to try this with plain WCF instead of ADO.NET Data Services (newly renamed WCF Data Services), but my preference is to get this approach working first. I have tried it with and without the stringify call. Most examples online only show how to POST to entities (i.e. for CRUD operations); in this case we are POSTing to a service operation.

        function PostToDataService() {
            varType = "POST";
            varUrl = "http://dev-server/MyServices/MyService.svc/SampleSO";
            varData = { CustomMessage: $("#TextBox1").val() };
            varData = JSON.stringify(varData);
            varContentType = "application/json; charset=utf-8";
            varDataType = "json";
            varProcessData = true;
            CallService();
        }
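
    For reference, the server side of a POST-able service operation in ADO.NET Data Services is just a method on the DataService class marked with [WebInvoke] and exposed in InitializeService; a minimal sketch is below (the context name and operation body are placeholders, not taken from the question). One thing worth checking, since it lines up with the "Parameter cannot be NULL" symptom: as far as I know, service-operation parameters are bound from the URL query string rather than from the POST body, even when the operation is [WebInvoke].

        using System.Data.Services;
        using System.ServiceModel.Web;

        public class MyService : DataService<MyEntities>   // MyEntities: placeholder data context
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                // Service operations must be explicitly made callable.
                config.SetServiceOperationAccessRule("SampleSO", ServiceOperationRights.All);
            }

            [WebInvoke]   // accepts POST; [WebGet] would make it GET-only
            public void SampleSO(string CustomMessage)
            {
                // Placeholder: do something with CustomMessage here.
            }
        }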

    Read the article

  • C++ Declarative Parsing Serialization

    - by Martin York
Looking at Java and C#, they manage to do some wicked processing based on special language-based annotations (forgive me if that is not the correct name). In C++ we have two problems with this: 1) there is no way to annotate a class with type information that is accessible at runtime, and 2) parsing the source to generate things is way too complex. But I was thinking this could be done with some template metaprogramming to achieve the same basic effect as annotations (still just thinking about it). Like char_traits, which is specialised for different types, an xml_traits template could be used in a declarative way. This traits class could define how a class is serialised/deserialised, by specialising the traits for the class you are trying to serialise. Example thoughts:

        template<typename T>
        struct XML_traits
        {
            typedef XML_Empty Children;
        };

        template<>
        struct XML_traits<Car>
        {
            typedef boost::mpl::vector<Body, Wheels, Engine> Children;
        };

        template<typename T>
        std::ostream& Serialize(std::ostream& out, T const& data)
        {
            // My template foo is not that strong, but something like this:
            boost::mpl::for_each<typename XML_traits<T>::Children>(/* serialise each child of data */);
            return out;
        }

        template<>
        std::ostream& Serialize<XML_Empty>(std::ostream& out, XML_Empty const&)
        {
            /* Do nothing. */
            return out;
        }

    My question is: has anybody seen any projects or documentation (not just XML) that use techniques like this (template metaprogramming) to emulate the concept of annotations from languages like Java and C#, so that code generation can effectively be automated in a declarative style? At this point in my research I am looking for more reading material and examples.

    Read the article

  • Plist data OK in iPhone simulator but disappears after installation onto device

    - by user183804
I am testing an application on a first-generation iPod running OS 3.1.3, with a table view populated from a plist. The application works well in the simulator. Initially, after installation onto the iPod for testing, the table view was blank (scrollable lines present, but empty rows). After searching this site, I found one issue was a case-sensitivity problem in the name of the plist file ("Data.plist" in Resources; "data.plist" in code). When the problem persisted, I again found on this site the advice to "Clean All Targets" prior to installation. I have now found that after performing a "Clean All Targets" operation twice, quitting Xcode in between, the problem resolves and the table view appears properly populated after installation. I am wondering whether this indicates a corrupted plist file, and whether I should fix it before making the application available on the App Store. Re-creating the plist file from scratch and then re-testing would likely take several hours of work, so I do not want to do so unless absolutely necessary. Thanks in advance for any help.

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
Hello overflow community! Does anyone know if there is a best practice for concatenating MP3 files into a new file while preserving the originals? I am working on a CentOS Linux machine, on the command line, and I will eventually call the command line from a PHP script. I have been doing research and have come up with a process that I think could work; it combines general advice from different forums, blogs, and sources like this one. Here it is:

    1. Create a temporary folder.
    2. Loop through the files, creating a converted copy of each in a "raw" format (which one, I don't know - I didn't know raw audio files existed until recently, so I could use suggestions here).
    3. Store the paths to the temporary files in the temporary folder, then loop through them to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder, including the temporary raw files.
    5. Convert the final file from raw back to MP3 and enjoy the finished result.

    I'm thinking this course of action might be best because I can't necessarily control the quality of the original source MP3s. The only other option I can think of would be a script that performs a similar process as files are added to the system, leaving only files in the "proper" format and removing the original "erroneous" ones. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path I could take? By concatenate, I mean join the files together in sequence to create a single new audio file.

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems rather pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems I can't truncate the log file of a mirrored database. So how do I stop the file from growing? Based on a web page I found, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
            WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to break the mirror before running the backup log command in order for it to work.

    My question (TL;DR): how can I shrink my transaction log file without disabling the mirror?

    Read the article

  • Generics vs inheritance (when no collection classes are involved)

    - by Ram
This is an extension of this question, and might even be a duplicate of some other question (if so, please forgive me). I see from MSDN that generics are usually used with collections:

        "The most common use for generic classes is with collections like linked lists, hash tables, stacks, queues, trees and so on where operations such as adding and removing items from the collection are performed in much the same way regardless of the type of data being stored."

    The examples I have seen also bear that statement out. Can someone give a valid use of generics in a real-life scenario which does not involve any collections? Pedantically, I was thinking about making an example which does not involve collections:

        public class Animal<T>
        {
            public void Speak()
            {
                Console.WriteLine("I am an Animal and my type is " + typeof(T).ToString());
            }
            public void Eat()
            {
                // Eat food
            }
        }

        public class Dog
        {
            public void WhoAmI()
            {
                Console.WriteLine(this.GetType().ToString());
            }
        }

    and "an Animal of type Dog" would be:

        Animal<Dog> magic = new Animal<Dog>();

    It is entirely possible to have Dog inherit from Animal (assuming a non-generic version of Animal), Dog : Animal, and therefore Dog is an Animal. Another example I was thinking of is a bank account. It could be BankAccount<Checking> and BankAccount<Savings>, but it could just as well be Checking : BankAccount and Savings : BankAccount. Are there any best practices for deciding whether to go with generics or with inheritance?
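
    One way to make the distinction concrete (my own illustration, not from the linked question): generics earn their keep whenever the type parameter has to flow through a signature unchanged, which an inheritance hierarchy cannot express. The hypothetical Cached<T> below wraps any type and hands callers back exactly the type they put in, with no casting and no collection in sight:

        using System;

        // A minimal sketch: caches a value and rebuilds it when it expires.
        // Works for Animal, BankAccount, or any other type without knowing about them.
        public class Cached<T>
        {
            private readonly Func<T> _load;
            private readonly TimeSpan _maxAge;
            private T _value;
            private DateTime _loadedAt = DateTime.MinValue;

            public Cached(Func<T> load, TimeSpan maxAge)
            {
                _load = load;
                _maxAge = maxAge;
            }

            public T Value
            {
                get
                {
                    if (DateTime.UtcNow - _loadedAt > _maxAge)
                    {
                        _value = _load();           // rebuild the cached value
                        _loadedAt = DateTime.UtcNow;
                    }
                    return _value;                  // caller gets a T, not an object
                }
            }
        }

        // Usage: the compiler knows rates.Value is a decimal[], no casts needed.
        // var rates = new Cached<decimal[]>(LoadRatesFromDb, TimeSpan.FromMinutes(5));

    A non-generic base class could hold the caching logic, but it could only hand back object; the "preserve the caller's type" part is exactly what generics add.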

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
We have a combination of requirements in terms of data access:

    - Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating LocalStorage, could do the trick).
    - Other models are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
    - Lastly, some operations should work offline, with changes synced later.

    To make it more concrete:

    - We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    - We need to sync server changes to vendor and product information.
    - Searching the full catalog requires being online.
    - When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue these actions and submit them when they have an Internet connection.

    So a few questions derive from this:

    - Is there a way to use the RESTAdapter in combination with LocalStorage?
    - Is there any Socket.IO support? (Happy to do this part manually.)
    - Is there queueing support, ideally at the Ember Data level?

    I know we will have to do a lot of this manually and pull the different Lego pieces together, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • Significance in R

    - by Gemsie
OK, this is quite hard to explain, but I'm at a complete loss as to what to do. I'm a relative newcomer to R, and although I can completely admire how powerful it is, I'm not too good at actually using it... Basically, I have some very contrived data that I need to analyse (it wasn't me who chose this, I can assure you!). I have the right- and left-hand lengths of lots of people, as well as some numeric data that represents their sociability. I would like to know whether people whose hands differ significantly in length are more or less sociable than those whose hands are the same length (leading into the research that 'symmetrical' people are more sociable, intelligent, etc.). I have got as far as loading the data into R, but I have no idea where to go from there. How on Earth do I start to separate those who are close to symmetrical from those who aren't, so I can begin the analysis?

    Update: using Sasha's great advice, I ran cor.test and got the following:

        Pearson's product-moment correlation
        data:  measurements$l.hand - measurements$r.hand and measurements$sociable
        t = 0.2148, df = 150, p-value = 0.8302
        alternative hypothesis: true correlation is not equal to 0
        95 percent confidence interval:
         -0.1420623  0.1762437
        sample estimates:
               cor
        0.01753501

    I have never used this test before, so I'm unsure how to interpret it... you wouldn't think I was on my fourth scientific degree, would you?! :(

    Read the article

  • What is the fastest way to do division in C for 8-bit MCUs?

    - by Jordan S
I am working on the firmware for a device that uses an 8-bit MCU (8051 architecture). I am using SDCC (Small Device C Compiler). I have a function that sets the speed of a stepper motor that my circuit is driving. The speed is set by loading a desired value into the reload register of a timer. I have a variable, MotorSpeed, in the range 0 to 1200, which represents pulses per second to the motor. My function to convert MotorSpeed to the correct 16-bit reload value is shown below. I know that floating-point operations are pretty slow, and I am wondering if there is a faster way of doing this...

        void SetSpeed()
        {
            float t = MotorSpeed;
            unsigned int j = 0;
            t = 1 / t;            // period in seconds
            t = t / 0.000001;     // period in microseconds, i.e. 1000000 / MotorSpeed
            j = MaxInt - t;
            TMR3RL = j;           // Set reload register for desired freq
            return;
        }

    Read the article

  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
I am currently working on a project involving an external merge sort using replacement selection and k-way merge. I have implemented the project in C++ (it runs on Linux). It is very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After running the program for a few iterations, I noticed that I/O reads block for requests larger than 4K (the typical block size); in fact, using buffer sizes greater than 4K causes performance to decrease. The output side does not seem to need buffering - Linux appears to take care of buffering output - so I issue a write(record) for each record instead of maintaining a special write buffer and flushing it in one go with write(records[]). But the performance of the application still does not seem great. How could I improve it? Should I maintain dedicated I/O threads to take care of reading blocks, or are there existing C++ classes that already provide this abstraction (something like BufferedInputStream in Java)?

    Read the article

  • Rails paginate existing array of ActiveRecord results

    - by SaoiseK
Hello, I generally use will_paginate for pagination in my app, but I have hit a stumbling block with my search feature. I'm using Thinking Sphinx for full-text search, which returns paginated results. The problem is that after I receive the results from Thinking Sphinx, I need to merge them with some other results and re-order them, so once I've finished processing them I have an Array of results that is very different from the original returned by TS. As there could be 1000+ results in this Array, pagination is a necessity. The problem is that I can't figure out how to get will_paginate to work with an existing array. I've done some research, and it seems the only solutions to this problem are from several years ago and are based on the old built-in Paginator class. The most recent one I could find that uses will_paginate is from devchix in mid-2007: http://www.devchix.com/2007/07/23/will_paginate-array/comment-page-1/ - I've given it a go but it doesn't seem to do anything for me. Are there any current methods for paginating (preferably via will_paginate) existing arrays of AR results?

    Read the article

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
Traditional version control systems display versioning information by grouping projects, folders and files in a tree view on the left and a details view on the right; you click on an item to see the revision history for that configuration item. Assuming I had all the historical versioning information for a project available from an object-oriented model perspective (e.g. classes, methods, parameters and so on), what do you think would be the most effective way to present that information in a UI, so that you could easily navigate both a snapshot view of the project and the historical versioning information? Put yourself in the position of using a tool like this every day in your job, the way you currently use SVN, SourceSafe, Perforce or any other VCS: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files, as above, very restrictive and not very effective for displaying deeply nested logical models. Assuming this is a greenfield project and not restricted to any specific technology, how do you think I should approach it? I am looking for ideas and input to add value to my research project, so feel free to make any suggestions you think are valuable. Thanks again to anyone who shares their thoughts.

    Read the article

  • Database structure for ecommerce site

    - by imanc
Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most trouble is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I would rather merge all these shop databases into one database, so that every table (products, orders, customers, etc.) has a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is growing fairly quickly and we need a solution that will scale. In five years there may be more than a million orders per year, plus a database containing five years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop structure because they believe it will allow the sites to scale, but my argument is that a single shop database probably won't get so busy over the next few years that it exceeds the capacity of one MySQL database on a "no expense spared" hardware set-up. I am wondering if anyone has advice either way? Does anyone have experience with websites or ecommerce sites whose tables contain millions of records? I know there is probably no clear answer here, but at what stage do we have too many records, or table files too large, for a fast-loading site? Also, if anyone has advice on sources of information - books, websites, etc. - where I can do further research, it would be highly appreciated! Cheers, imanc

    Read the article

  • MySQL Performance Question - Essentially about normalization efficiency

    - by freqmode
Hi there. Just a quick question about database performance; I'll outline the purpose of my site below as background. I'm creating a dictionary site that saves the words users define to a database. What I'm wondering is whether to create a separate words table for each user or to keep one large shared words table. The site will be used by entire schools, so a single words table would get very large! The database structure is as follows.

    A user table with:

        User_ID (PRIMARY KEY), Username, First, Last, Password, Email, Country, Research,
        Standings, SendInfo, Donated, JoinedOn, LastLogin, Logins, Correct, Attempts, Admin, Active

    And one word table with:

        User_ID (PRIMARY KEY), Word, Vocab, Spell, Defined, DefinedAttempted, Spelled,
        SpelledAttempted, Sentenced, SentencedAttempted

    So what I'm asking is: performance-wise, should I create a new table for each user when they join the site (each user could accumulate hundreds or thousands of words over time), or is it better to have one large table with many thousands of records and filter by User_ID? I don't think I'll perform many table joins. My gut feeling is to create a new table for each user, but I thought I'd ask for expert advice! Thanks in advance.

    Read the article

  • Why doesn't java.lang.Number implement Comparable?

    - by Julien Chastang
Does anyone know why java.lang.Number does not implement Comparable? This means that you cannot sort Numbers with Collections.sort, which seems to me a little strange.

    Post-discussion update: Thanks for all the helpful responses. I ended up doing some more research on this topic. The simplest explanation for why java.lang.Number does not implement Comparable is rooted in mutability concerns. For a bit of review, java.lang.Number is the abstract supertype of AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long and Short. On that list, AtomicInteger and AtomicLong do not implement Comparable. Digging around, I discovered that it is not good practice to implement Comparable on mutable types, because the objects can change during or after comparison, rendering the result of the comparison useless. Both AtomicLong and AtomicInteger are mutable. The API designers had the forethought not to have Number implement Comparable, because doing so would have constrained the implementation of future subtypes. Indeed, AtomicLong and AtomicInteger were added in Java 1.5, long after java.lang.Number was initially implemented. Apart from mutability, there are probably other considerations here too: a compareTo implementation in Number would have to promote all numeric values to BigDecimal, because that is the only type capable of accommodating all the Number subtypes. The implications of that promotion in terms of mathematics and performance are a bit unclear to me, but my intuition finds that solution kludgy.

    Read the article

  • WPF and LINQ/SQL - how and where to keep track of changes?

    - by Groky
I have a WPF application built using the MVVM pattern:

    - My Models come from LINQ to SQL.
    - I use the Repository pattern to abstract away the DataContext.
    - My ViewModels have a reference to a Model.
    - Setting a property on the ViewModel causes that value to be written through to the Model.

    As you can see, my data is stored in my Model, and changes are therefore tracked by my DataContext. However, in this question I read the guidance from the MSDN documentation on the DataContext class, which is what I would recommend following:

        "In general, a DataContext instance is designed to last for one 'unit of work' however your
        application defines that term. A DataContext is lightweight and is not expensive to create.
        A typical LINQ to SQL application creates DataContext instances at method scope or as a
        member of short-lived classes that represent a logical set of related database operations."

    How do you track your changes? In your DataContext? In your ViewModel? Elsewhere?
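
    For what that MSDN guidance looks like in practice, here is a minimal sketch (the repository name, Customer entity and connection string are illustrative, and Customer is assumed to be a LINQ to SQL mapped class): the repository opens a DataContext per unit of work, so change tracking lives entirely inside that one call rather than in a long-lived context or in the ViewModel:

        using System.Data.Linq;
        using System.Linq;

        public class CustomerRepository
        {
            private readonly string _connectionString;

            public CustomerRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            // One unit of work: load, mutate, submit, dispose.
            public void UpdateName(int customerId, string newName)
            {
                using (var db = new DataContext(_connectionString))
                {
                    var customers = db.GetTable<Customer>();
                    var customer = customers.Single(c => c.CustomerId == customerId);
                    customer.Name = newName;     // tracked by this context only
                    db.SubmitChanges();          // changes flushed before the context is disposed
                }
            }
        }

    With this shape, the ViewModel only needs to know whether it has unsaved edits (e.g. a simple IsDirty flag) and hands the final values to the repository when the user saves.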

    Read the article

  • What are the skills a Drupal Developer needs?

    - by hfidgen
I'm trying to write out a list of key Drupal competencies, mainly so I can confirm what I know, what I don't know, and what I don't know I don't know (thanks, D. Rumsfeld, for that quote!). I think some of these are really broad - for instance, there's quite a difference between making a functional theme and creating a theme with good SEO, load times and so on - but I'm hoping you can assume that a half-decent web developer would look after that anyway. I'm just interested to see what people here feel is also important.

    - Able to install Drupal on a server (pretty obvious).
    - Able to research and install modules to meet project requirements.
    - Able to configure all the basic modules and core settings to get a site running.
    - Able to create a custom theme from scratch which validates with good HTML/CSS and also pays attention to usability and accessibility (whilst still looking kick-ass).
    - Able to use hooks in the theme's template.php to alter forms, page layout and other core functionality.
    - Can make forms from scratch using the API, with validation and posting back to the DB/email.
    - Can use Views to create blocks or pages, use PHP snippets as arguments, etc.
    - Can create custom modules from scratch utilising core hooks and other hooks.

    Read the article

  • WCF Custom SOAP Header Issues

    - by WayneC
I'm trying to implement an endpoint behavior which injects a custom SOAP header into all messages to and from a service. I've gotten pretty close by implementing the approach from the accepted answer of this question: http://stackoverflow.com/questions/986455/wcf-wsdl-soap-header-on-all-operations/995951#995951

    After implementing that solution, my custom SOAP header does indeed show up in the WSDL; however, when I try to call the methods on my service, I get the following exception/fault:

        <ExceptionDetail xmlns="http://schemas.datacontract.org/2004/07/System.ServiceModel" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <HelpLink i:nil="true" />
          <InnerException i:nil="true" />
          <Message>Index was outside the bounds of the array.</Message>
          <StackTrace>
            at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.AddHeadersToMessage(Message message, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)
            at System.ServiceModel.Dispatcher.OperationFormatter.SerializeReply(MessageVersion messageVersion, Object[] parameters, Object result)
            at System.ServiceModel.Dispatcher.DispatchOperationRuntime.SerializeOutputs(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
          </StackTrace>
          <Type>System.IndexOutOfRangeException</Type>
        </ExceptionDetail>

    Looking in Reflector at the DataContractSerializerOperationFormatter.AddHeadersToMessage method that throws the exception leads me to believe the following snippet is causing the problem, though I'm not sure why:

        MessageHeaderDescription description = (MessageHeaderDescription) headerPart.Description;
        object parameterValue = parameters[description.Index];

    I think the last line is throwing the exception. The parameters variable comes from IDispatchFormatter.SerializeReply. What's going on?! Any help would be greatly appreciated...
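
    For comparison, a different technique that avoids touching the MessageDescription headers at all is to add the header with a message inspector. Below is a minimal sketch of the dispatch (service) side; the header name and namespace are placeholders, and this is an alternative approach rather than the one from the linked answer:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        public class CustomHeaderInspector : IDispatchMessageInspector
        {
            public object AfterReceiveRequest(ref Message request,
                IClientChannel channel, InstanceContext instanceContext)
            {
                // Read the incoming header if present.
                int index = request.Headers.FindHeader("MyHeader", "http://example.org/headers");
                if (index >= 0)
                {
                    string value = request.Headers.GetHeader<string>(index);
                    // ... use value ...
                }
                return null;
            }

            public void BeforeSendReply(ref Message reply, object correlationState)
            {
                // Stamp the outgoing reply with the custom header.
                reply.Headers.Add(MessageHeader.CreateHeader(
                    "MyHeader", "http://example.org/headers", "some value"));
            }
        }

        // The inspector is attached from an IEndpointBehavior, e.g. in ApplyDispatchBehavior:
        // endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new CustomHeaderInspector());

    The trade-off is that a message inspector does not advertise the header in the WSDL the way the MessageDescription approach does.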

    Read the article

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
I'm designing an API to go over HTTP, and I am wondering if using the HTTP POST method, but with URL query parameters only and no request body, is a good way to go. Considerations:

    - "Good web design" requires non-idempotent actions to be sent via POST, and this is a non-idempotent action.
    - It is easier to develop and debug this app when the request parameters are present in the URL.
    - The API is not intended for widespread use.
    - It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added.
    - It also seems to me that a POST with no body runs a bit counter to most developers' and HTTP frameworks' expectations.

    Are there any other pitfalls or advantages to sending parameters on a POST request via the URL query string rather than the request body?

    Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec:

        "In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered 'safe'. This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. ... Methods can also have the property of 'idempotence' in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent."
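
    As an illustration of the "bit more work" point above, a bodyless POST with query-string parameters is only a few lines in most HTTP clients; a rough sketch in C# (the URL is a placeholder, not a real endpoint) looks like this:

        using System;
        using System.Net;

        class BodylessPostExample
        {
            static void Main()
            {
                // Parameters travel in the query string; the body stays empty.
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://example.org/api/widgets/42/activate?mode=fast");
                request.Method = "POST";
                request.ContentLength = 0;   // explicit, as noted in the list above

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine((int)response.StatusCode);
                }
            }
        }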

    Read the article

  • Still Confused About Identifying vs. Non-Identifying Relationships

    - by Jason
So, I've been reading up on identifying vs. non-identifying relationships in my database design, and a number of the answers on SO seem contradictory to me. Here are the two questions I am looking at:

    1. What's the Difference Between Identifying and Non-Identifying Relationships
    2. Trouble Deciding on Identifying or Non-Identifying Relationship

    Looking at the top answers from each question, I appear to get two different ideas of what an identifying relationship is. The first question's response says that an identifying relationship "describes a situation in which the existence of a row in the child table depends on a row in the parent table." The example given is, "An author can write many books (1-to-n relationship), but a book cannot exist without an author." That makes sense to me. However, when I read the response to question two, I get confused, as it says, "if a child identifies its parent, it is an identifying relationship." That answer then goes on to give examples such as an SSN (which is identifying of a Person), whereas an address is not (because many people can live at one address). To me, this sounds more like a decision between a primary-key and a non-primary-key attribute. My own gut feeling (and additional research on other sites) suggests that the first question and its response are correct, but I wanted to verify before continuing, as I don't want to learn something wrong while I'm working to understand database design. Thanks in advance.

    Read the article

  • How do I build a filtered_streambuf based on basic_streambuf?

    - by swestrup
I have a project that requires me to insert a filter into a stream so that outgoing data will be modified according to the filter. After some research, it seems that what I want is to create a filtered_streambuf, something like this:

        template <class StreamBuf>
        class filtered_streambuf : public StreamBuf
        {
            ...
        };

    and then insert a filtered_streambuf<> into whichever stream I need to filter. My problem is that I don't know which invariants I need to maintain while filtering a stream, in order to ensure that derived classes can work as expected. In particular, I may find I have filtered_streambufs built over other filtered_streambufs, and all the various stream inserters, extractors and manipulators should still work as expected. The trouble is that I just can't work out what the minimal interface is that I need to supply in order to guarantee that an iostream will have what it needs to work correctly. In particular, do I need to fake the movement of the protected pointer variables, or not? Do I need a fake data buffer, or not? Can I just override the public functions, rewriting them in terms of the base streambuf, or is that too simplistic?

    Read the article

  • Analyzing an IronPython Scope

    - by Vercinegetorix
I'm trying to write C# code with an embedded IronPython script, and then analyze the contents of the script, i.e. list all variables, functions, classes and their members/methods. There's an easy way to start, assuming I've got a scope defined and code already executed in it:

        dynamic variables = pyScope.GetVariables();
        foreach (string v in variables)
        {
            dynamic dynamicV = pyScope.GetVariable(v);
            // Seems to return everything: variables, functions, classes, instances of classes.
        }

    But how do I figure out what the type of a variable is? For the following Python 'objects', dynamicV.GetType() returns different values:

    - x = 5 -- System.Int32
    - y = "asdf" -- System.String
    - def func(): ... -- IronPython.Runtime.PythonFunction
    - z = class() -- IronPython.Runtime.Types.OldInstance; how can I identify what the actual Python class is?
    - class NewClass -- throws an error; GetType() is unavailable.

    This is almost what I'm looking for. I could catch the exception thrown when GetType() is unavailable and assume the object is a class declaration, but that seems unclean. Is there a better approach? To discover the members/methods of a class it looks like I can use:

        ObjectOperations op = pyEngine.Operations;
        object instance = op.Call("className");
        foreach (string j in op.GetMemberNames("className"))
        {
            object member = op.GetMember(instance, j);
            Console.WriteLine(member.GetType());
            // Once again, GetType() provides some info about the type of the member,
            // but sometimes returns null.
        }

    Also, how do I get the parameters of a method? Thanks!

    Read the article

  • Why do GLSL's arithmetic functions yield such different results on the iPad and in the simulator?

    - by cheeesus
I'm currently chasing some bugs in my OpenGL ES 2.0 fragment shader code, which runs on iOS devices. The code runs fine in the simulator, but on the iPad it has huge problems, and some of the calculations yield vastly different results; for example, I got 0.0 on the iPad and 4013.17 in the simulator, so I'm not talking about small differences that could be the result of rounding errors. One of the things I noticed is that, on the iPad,

        float1 = pow(float2, 2.0);

    can yield results which are very different from the results of

        float1 = float2 * float2;

    Specifically, when using pow(x, 2.0) on a variable containing a larger negative number like -8, it seemed to return a value that satisfied the condition if (powResult <= 0.0). Also, the results of both operations (pow(x, 2.0) as well as x*x) differ between the simulator and the iPad. The floats in use are mediump, but I get the same behaviour with highp. Is there a simple explanation for these differences? I'm narrowing the problem down, but it takes so much time, so maybe someone can help me with a simple explanation.

    Read the article
