Search Results

Search found 7324 results on 293 pages for 'operations research'.

  • HATEOAS - Discovery and URI Templating

    - by Paul Kirby
    I'm designing a HATEOAS API for internal data at my company, but I've been having trouble with the discovery of links. Consider the following set of steps for someone to retrieve information about a specific employee in this system:

    1. The user sends a GET to http://coredata/ to get all available resources. This returns a number of links, including one tagged as rel = "http://coredata/rels/employees".
    2. The user follows the HREF on that rel, performing a GET at (for example) http://coredata/employees.

    The data returned from this last call is my conundrum, and a situation where I've heard mixed suggestions. Here are some of them:

    1. That GET returns all employees (perhaps with truncated data), and the client is responsible for picking the one it wants from that list.
    2. That GET returns a number of URI-templated links describing how to query for employees, get one employee, or get all employees. Something like:

        "_links": {
            "http://coredata/rels/employees#RetrieveOne": { "href": "http://coredata/employees/{id}" },
            "http://coredata/rels/employees#Query": { "href": "http://coredata/employees{?login,firstName,lastName}" },
            "http://coredata/rels/employees#All": { "href": "http://coredata/employees/all" }
        }

    I'm a little stuck on which option remains closest to HATEOAS. For option 1, I really do not want my clients to retrieve all employees every time just for the sake of navigation, but I can see how the URI templating in option 2 introduces some out-of-band knowledge. My other thought was to use the RetrieveOne, Query, and All operations as my cool URLs, but that seems to violate the concept that you should be able to navigate to the resources you want from one base URI.

    Has anyone else managed to come up with a good way to handle this? Navigation is dead simple once you've retrieved one resource or a set of resources, but it seems very difficult to use for discovery.
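    Editor's aside: the {id} and {?login,firstName,lastName} patterns above follow the URI Template syntax of RFC 6570, so a client that understands that one standard plus the documented rels needs no other out-of-band knowledge. A hypothetical full response from http://coredata/employees combining option 2 with a self link might look like the sketch below; the "templated": true flag is the HAL convention for links that must be expanded before use, and is an assumption here, since the question does not commit to HAL:

        {
            "_links": {
                "self": { "href": "http://coredata/employees" },
                "http://coredata/rels/employees#RetrieveOne": {
                    "href": "http://coredata/employees/{id}", "templated": true
                },
                "http://coredata/rels/employees#Query": {
                    "href": "http://coredata/employees{?login,firstName,lastName}", "templated": true
                },
                "http://coredata/rels/employees#All": { "href": "http://coredata/employees/all" }
            }
        }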

  • Grails, how do I get an object NOT to save

    - by user350325
    Hello, I am new to Grails and trying to create a form which allows a user to change the email address associated with his/her account on a site I am creating. It asks the user for their current password and for the new email address they want to use. If the user enters the wrong password or an invalid email address, it should reject them with an appropriate error message. The email validation can be done through constraints in Grails, but the password has to match their current password; I have implemented that check as a method on a service class. See the code below:

        def saveEmail = {
            def client = ClientUser.get(session.clientUserID)
            client.email = params.email
            if (clientUserService.checkPassword(session.clientUserID, params.password) == false) {
                flash.message = "Incorrect Password"
                client.discard()
                redirect(action: 'changeEmail')
            } else if (!client.validate()) {
                flash.message = "Invalid Email Address"
                redirect(action: 'changeEmail')
            } else {
                client.save()
                session.clientUserID = null
                flash.message = "Your email address has been changed, please login again"
                redirect(controller: 'clientLogin', action: 'index')
            }
        }

    I noticed something odd: if I entered an invalid email, it would not save the changes (as expected), BUT if I entered the wrong password and a valid email, it would save the changes and even write them back to the database, even though it gave the correct "Incorrect Password" error message. Puzzled, I set breakpoints in all the if/else if/else blocks and found that it hit the first if statement as expected and none of the others, so it never reached a call to the save() method, yet the change was saved anyway. After a little research I came across documentation for the discard() method, which you can see used in the code above. I added it, but still to no avail. I even tried calling discard() and then reloading the client object from the DB, but still no dice. This is very frustrating and I would be grateful for any help, since I think this should surely not be a complicated requirement!
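    A possible explanation, offered as an editor's note rather than part of the question: GORM's open-session-in-view pattern flushes the Hibernate session at the end of the request, so a persistent instance that was made dirty before the password check can still be written even though save() is never called. A hedged sketch of the usual workaround is to run every check before dirtying the instance:

        def saveEmail = {
            // verify the password before touching the domain object at all
            if (!clientUserService.checkPassword(session.clientUserID, params.password)) {
                flash.message = "Incorrect Password"
                redirect(action: 'changeEmail')
                return
            }
            def client = ClientUser.get(session.clientUserID)
            client.email = params.email
            if (!client.save()) {            // save() validates first; null result means it failed
                flash.message = "Invalid Email Address"
                client.discard()             // detach so the dirty instance is not auto-flushed
                redirect(action: 'changeEmail')
                return
            }
            session.clientUserID = null
            flash.message = "Your email address has been changed, please login again"
            redirect(controller: 'clientLogin', action: 'index')
        }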

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is given text boxes where they can enter SQL code, which is executed by the isql command. I would like to extend the application to use DBUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DBUnit rather than plain SQL statements is to be able to create and store XML and XLS DataSets on a server, where they can be associated with their test cases and used for data setup.

    My question is: how can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    I have considered:

    1. Creating a simple programming language and a parser to read some simple syntax involving the DBUnit method names, each accepting a parameter that is the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and give each file an identifier that could be passed as a parameter to the methods in this simple language.
    2. Creating an XML DTD which lets the user specify operations and parameters. If I went this route, how can I execute the methods and their parameters that I parse from the XML document?
    3. Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file. However, I don't think this would be a good solution, because data entry would be tedious.

    Thanks for your help.
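    For either of the first two options, the dispatch itself can stay small, because DBUnit exposes its operations as constants on the DatabaseOperation class. A rough sketch (class and method names here are the editor's assumptions, not from the question):

        import java.io.File;
        import java.util.HashMap;
        import java.util.Map;

        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.dataset.IDataSet;
        import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
        import org.dbunit.operation.DatabaseOperation;

        // Maps an operation name parsed from the user's script (or XML) plus a
        // catalogued DataSet file onto a concrete DBUnit call.
        public class DbUnitDispatcher {
            private static final Map<String, DatabaseOperation> OPERATIONS =
                    new HashMap<String, DatabaseOperation>();
            static {
                OPERATIONS.put("CLEAN_INSERT", DatabaseOperation.CLEAN_INSERT);
                OPERATIONS.put("INSERT", DatabaseOperation.INSERT);
                OPERATIONS.put("UPDATE", DatabaseOperation.UPDATE);
                OPERATIONS.put("REFRESH", DatabaseOperation.REFRESH);
                OPERATIONS.put("DELETE_ALL", DatabaseOperation.DELETE_ALL);
            }

            public void execute(IDatabaseConnection conn, String opName, File dataSetFile)
                    throws Exception {
                DatabaseOperation op = OPERATIONS.get(opName);
                if (op == null) {
                    throw new IllegalArgumentException("Unknown operation: " + opName);
                }
                IDataSet dataSet = new FlatXmlDataSetBuilder().build(dataSetFile);
                op.execute(conn, dataSet); // runs the data setup against the test database
            }
        }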

  • When is a good time to start thinking about scaling?

    - by Slokun
    I've been designing a site over the past couple of days, and doing some research into different aspects of scaling a site horizontally. If things go as planned, in a few months (years?) I know I'd need to worry about scaling the site up and out, since the resources it would consume would be huge.

    So this got me thinking: when is the best time to start thinking about, and designing for, scalability? If you start too early, you could easily overcomplicate your design and make it impossible to actually build. You could also get too caught up in the details, the architecture, whatever, and wind up getting nothing done. And if you do get it working but the site never takes off, you may have wasted a good chunk of extra effort. On the other hand, you could be saving yourself a ton of effort down the road: designing it from the ground up to be big would make it much easier later on to let it grow big, with very little rewriting.

    For what I'm working on, I've decided to make at least a few choices now on the side of scaling, but I'm not going to do a complete change of thinking to get it to scale completely. Notably, I've redesigned my database from a conventional relational design to one similar to what was suggested in the Reddit article linked below, and I'm going to give memcache a try.

    So, the basic question: when is a good time to start thinking or worrying about scaling, and what are some good designs, tips, etc. to apply when doing so?

    A couple of things I've been reading, for those who are interested:

    http://www.codinghorror.com/blog/2009/06/scaling-up-vs-scaling-out-hidden-costs.html
    http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html
    http://developer.yahoo.com/performance/rules.html

  • How to check if JavaScript object is JSON

    - by Wei Hao
    I have a nested JSON object that I need to loop through, and the value of each key could be a string, a JSON array or another JSON object. Depending on the type, I need to carry out different operations. Is there any way I can check the type of a value to see if it is a string, JSON object or JSON array?

    I tried using typeof and instanceof, but neither seemed to work: typeof returns "object" for both a JSON object and an array, and instanceof gives an error when I do obj instanceof JSON.

    To be more specific, after parsing the JSON into a JS object, is there any way I can check whether a value is a normal string, an object with keys and values (from a JSON object), or an array (from a JSON array)? For example:

    JSON:

        var data = {"hi": {"hello": ["hi1","hi2"] }, "hey": "words" }

    JavaScript:

        var jsonObj = JSON.parse(data);
        var level1 = jsonObj.hi;
        var text = jsonObj.hey;
        var arr = level1.hello;
        // how to check if level1 was formerly a JSON object?
        // how to check if arr was formerly a JSON array?
        // how to check if text is a string?
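    A hedged sketch of the usual checks (two assumptions: JSON.parse expects a string, so the data above would need to be a quoted JSON string, and Array.isArray needs an ES5-capable environment):

        function jsonType(value) {
            if (typeof value === 'string') return 'string';
            if (Array.isArray(value)) return 'array';   // test arrays before the object case,
                                                        // since typeof [] is also "object"
            if (value !== null && typeof value === 'object') return 'object';
            return typeof value;                        // number, boolean, etc.
        }

        jsonType(level1); // "object"  (was a JSON object)
        jsonType(arr);    // "array"   (was a JSON array)
        jsonType(text);   // "string"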

  • SQL Design: representing a default value with overrides?

    - by Mark Harrison
    I need a sparse table which contains a set of "override" values for another table. I also need to specify the default value for items that are not overridden. For example, if the default value is 17, then foo, bar, baz will have the values 17, 21, 17:

        table "things"          table "xvalue"
        name  stuff             name  xval
        ----  -----             ----  ----
        foo   ...               bar   21
        bar   ...
        baz   ...

    If I didn't care about a FK from xvalue.name to things.name, I could simply put in a "DEFAULT" name:

        table "xvalue"
        name     xval
        -------  ----
        DEFAULT  17
        bar      21

    But I like having a FK. I could have a separate default table, but it seems odd to have twice the number of tables:

        table "xvalue_default"      table "xvalue"
        xval                        name  xval
        ----                        ----  ----
        17                          bar   21

    I could have a "defaults table":

        tablename  attributename  defaultvalue
        ---------  -------------  ------------
        xvalue     xval           17

    but then I run into type issues on defaultvalue. My operations guys prefer as compact a representation as possible, so they can most easily see the "diff", i.e. the deviations from the default.

    What's the best way to represent this, including the default value? This will be for Oracle 10.2, if that makes a difference.
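    One sketch of a middle ground (an editor's suggestion, not from the original post): keep the FK-friendly two-table layout from the question, and let a view apply the default with COALESCE, so xvalue stays exactly the compact "diff" the operations team wants while queries see the effective values. Both COALESCE and ANSI join syntax are available on Oracle 10.2:

        CREATE VIEW effective_xvalue AS
        SELECT t.name,
               COALESCE(x.xval, d.xval) AS xval
        FROM things t
        LEFT JOIN xvalue x ON x.name = t.name
        CROSS JOIN xvalue_default d;   -- xvalue_default holds exactly one row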

  • C++ boost mpl vector

    - by Gokul
    I understand that the following code won't work, since i is a runtime value and not a compile-time constant. But I want to know whether there is a way to achieve the same thing: I have a list of classes, and I need to call a template function with each of these classes.

        void GucTable::refreshSessionParams()
        {
            typedef boost::mpl::vector<SessionXactDetails, SessionSchemaInfo> SessionParams;

            for (int i = 0; i < boost::mpl::size<SessionParams>::value; ++i)
            {
                boost::mpl::at<SessionParams, i>::type* sparam =
                    g_getSessionParam<boost::mpl::at<SessionParams, i>::type>();
                sparam->updateFromGucTable(this);
            }
        }

    Can someone suggest an easy and elegant way to do this? I need to iterate through the mpl::vector, use each type to call a global function, and then use that parameter to do some run-time operations. Thanks in advance, Gokul.

    Working code:

        typedef boost::mpl::vector<SessionXactDetails, SessionSchemaInfo> SessionParams;

        class GucSessionIterator
        {
        private:
            GucTable& m_table;

        public:
            GucSessionIterator(GucTable& table)
                : m_table(table)
            {
            }

            template<typename U>
            void operator()(const U&)
            {
                g_getSessionParam<U>()->updateFromGucTable(m_table);
            }
        };

        void GucTable::refreshSessionParams()
        {
            boost::mpl::for_each<SessionParams>(GucSessionIterator(*this));
            return;
        }

  • Dynamically created controls and the ASP.NET page lifecycle

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine, and .NET takes care of managing ViewState for these controls.

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RenderDynamicControls();
        }

        private void RenderDynamicControls()
        {
            // 1. call service layer to retrieve form definition
            // 2. create and add controls to page container
        }

    I have a new requirement: if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code:

        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDynamicControlsALittleDifferently();
        }

        private void RenderDynamicControlsALittleDifferently()
        {
            // 1. clear all controls from the page container added in RenderDynamicControls()
            // 2. call service layer to retrieve form definition
            // 3. create and add controls to page container
        }

    My question is: is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before drinking the kool-aid.

    On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that would make that transition easier would be much appreciated. Thanks.
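    For the record, a common mitigation (a sketch under assumptions; names are hypothetical): the double render is unavoidable on the request where the click happens, but persisting a mode flag in ViewState means every later postback builds the correct control set once, in OnLoad:

        private bool RenderDifferently
        {
            get { return (bool?)ViewState["RenderDifferently"] ?? false; }
            set { ViewState["RenderDifferently"] = value; }
        }

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            if (RenderDifferently)
                RenderDynamicControlsALittleDifferently();
            else
                RenderDynamicControls();
        }

        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDifferently = true;
            // rebuild once for this request; subsequent postbacks take the OnLoad path
            RenderDynamicControlsALittleDifferently();
        }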

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method which allows me to add attachments using string data rather than a path to a file. I came up with something like this:

        public function addAttachmentString($string, $name = '', $encoding = 'base64', $type = 'application/octet-stream')
        {
            $path = 'php://memory/' . md5(microtime());
            $file = fopen($path, 'w');
            fwrite($file, $string);
            fclose($file);
            $this->AddAttachment($path, $name, $encoding, $type);
        }

    However, all I get is a PHP warning:

        PHP Warning: fopen() [function.fopen]: Invalid php:// URL specified

    There aren't any decent examples in the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with this?

    My alternative is to create a temporary file and clean up afterwards, but that means writing to disc, and this function will be used as part of a large batch process where I want to avoid slow disc operations (old server) where possible. The file is only short, but it has different information for each person the script emails.
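    Two facts worth noting as an editor's aside: php://memory does not accept a path suffix (hence the "Invalid php:// URL" warning), and even without one, each fopen('php://memory') call opens a brand-new empty stream, so a path-style handoff to AddAttachment cannot see data written earlier. A hedged fallback sketch using a real temp file:

        public function addAttachmentString($string, $name = '', $encoding = 'base64', $type = 'application/octet-stream')
        {
            $path = tempnam(sys_get_temp_dir(), 'att');
            file_put_contents($path, $string);
            $this->AddAttachment($path, $name, $encoding, $type);
            // If AddAttachment reads the file immediately, it is safe to unlink here;
            // if it only records the path and reads at send time, defer this cleanup.
            unlink($path);
        }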

  • Looking for an all-in-one DRM/installer/CD creation kit

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, and installation of our products when a user gets them from our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is that the automated system which drives our installer- and DRM-creation software is a disaster and needs to be put out of its misery.

    The list of products we currently produce, which a new system MUST be capable of producing:

    1. Retail CDs, with a certain level of obfuscation to make copying difficult.
    2. Downloadable installers that time out after a few hours of use of the product. After the time has expired, removing and reinstalling the product still leaves you blocked from use.
    3. Installers that fail to work after a certain date.

    I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line requirement is non-negotiable, as this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

  • Why does this code generate different numbers?

    - by frbry
    Hello, I have this function that creates a unique number for a hard-disk and CPU combination:

        DWORD hw_hash()
        {
            char drv[4];
            char szNameBuffer[256];
            DWORD dwHddUnique;
            DWORD dwProcessorUnique;
            DWORD dwUniqueKey;

            char *sysDrive = getenv("SystemDrive");
            strcpy(drv, sysDrive);
            drv[2] = '\\';
            drv[3] = 0;

            GetVolumeInformation(drv, szNameBuffer, 256, &dwHddUnique, NULL, NULL, NULL, NULL);

            SYSTEM_INFO si;
            GetSystemInfo(&si);
            dwProcessorUnique = si.dwProcessorType + si.wProcessorArchitecture + si.wProcessorRevision;

            dwUniqueKey = dwProcessorUnique + dwHddUnique;
            return dwUniqueKey;
        }

    It returns different numbers if I format my hard-disk and install a new Windows. Any ideas why? Thank you.

    Edit: OK, got it. From the GetVolumeInformation documentation: "This function returns the volume serial number that the operating system assigns when a hard disk is formatted. To programmatically obtain the hard disk's serial number that the manufacturer assigns, use the Windows Management Instrumentation (WMI) Win32_PhysicalMedia property SerialNumber." I should do more research before posting my problems online. Sorry to bother you; let's keep this here in case anybody else needs it.
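    As a quick illustration of the WMI route the edit mentions, here is a minimal editor's sketch that shells out to the wmic command line rather than going through COM (a production version would query WMI through the COM API directly):

        #include <cstdio>
        #include <string>

        // Reads Win32_PhysicalMedia.SerialNumber via the wmic CLI.
        // The output contains a "SerialNumber" header line, then one value per disk.
        std::string physicalMediaSerial()
        {
            std::string output;
            FILE* pipe = _popen("wmic path Win32_PhysicalMedia get SerialNumber", "r");
            if (pipe == NULL)
                return output;
            char buf[256];
            while (fgets(buf, sizeof(buf), pipe) != NULL)
                output += buf;
            _pclose(pipe);
            return output;
        }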

  • Rolling back file moves, folder deletes and mysql queries

    - by Workoholic
    This has been bugging me all day, and there is no end in sight. When the user of my PHP application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be MySQL UPDATE and INSERT queries, file deletes, and folder renamings and creations.

    I can track the status of all INSERT commands and undo them if an error is thrown. But how do I do this with the UPDATE statements? Is there a smart way (some design pattern?) to keep track of such changes, both in the file structure and in the database?

    My database tables are MyISAM. It would be easy to just convert everything to InnoDB so that I can use transactions; that way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support, and it would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.
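    One pattern that fits is a compensating-action journal, i.e. the Command pattern with explicit undo. The sketch below uses hypothetical table and path names and assumes PHP 5.3+ for closures and a PDO handle in $db: before each risky step, record a closure that reverses it, and on failure replay the journal in reverse order.

        $undo = array();

        // For an UPDATE, capture the old values first and queue the compensating UPDATE.
        $old = $db->query("SELECT status FROM items WHERE id = 42")->fetch();
        array_unshift($undo, function () use ($db, $old) {
            $db->exec("UPDATE items SET status = " . $db->quote($old['status']) . " WHERE id = 42");
        });
        $db->exec("UPDATE items SET status = 'published' WHERE id = 42");

        // For a file move, the inverse is simply the reverse move.
        rename('/data/drafts/a.txt', '/data/live/a.txt');
        array_unshift($undo, function () {
            rename('/data/live/a.txt', '/data/drafts/a.txt');
        });

        // On error, run the journal: the most recent action is undone first.
        foreach ($undo as $step) {
            $step();
        }

    Folder creations undo with rmdir, and deletes become undoable if the file is first moved to a holding area instead of being unlinked outright.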

  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far.

    But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates! After some research we found that this is a fairly well-known issue, but the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least it did in our test case). However, we would much prefer to use /EHsc, as we translate SEH exceptions into C++ exceptions ourselves and would rather allow the compiler more scope for optimisation.

    Are there any other workarounds for this issue, other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?

  • Refactoring exercise with generics

    - by Berryl
    I have a variation on a Quantity (Fowler) class that is designed to facilitate conversion between units. The type is declared as:

        public class QuantityConvertibleUnits<TFactory>
            where TFactory : ConvertableUnitFactory, new()
        {
            ...
        }

    In order to do math operations between dissimilar units, I convert the right-hand side of the operation to the equivalent Quantity in whatever unit the left-hand side uses, do the math on the amount (which is a double), and then create a new Quantity. Inside the generic Quantity class, I have the following:

        protected static TQuantity _Add<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            var toUnit = lhs.ConvertableUnit;
            var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit);
            var newAmount = lhs.Quantity.Amount + equivalentRhs.Quantity.Amount;
            return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit);
        }

        protected static TQuantity _Subtract<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            var toUnit = lhs.ConvertableUnit;
            var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit);
            var newAmount = lhs.Quantity.Amount - equivalentRhs.Quantity.Amount;
            return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit);
        }

        // ... same for multiply, and also divide

    I need to get the typing right for a concrete Quantity, so an example of an add op looks like:

        public static ImperialLengthQuantity operator +(ImperialLengthQuantity lhs, ImperialLengthQuantity rhs)
        {
            return _Add(lhs, rhs);
        }

    The question is those verbose methods in the Quantity class. The only change between them is the math operator (+, -, *, etc.), so it seems there should be a way to refactor them into a common method, but I am just not seeing it. How can I refactor that code? Cheers, Berryl
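    Since the methods differ only in the operator, one sketch (assuming .NET 3.5+ for Func<,,>) is to hoist the shared convert/compute/convert pipeline into a single helper that takes the operation as a delegate:

        protected static TQuantity _Apply<TQuantity>(TQuantity lhs, TQuantity rhs,
                                                     Func<double, double, double> op)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            var toUnit = lhs.ConvertableUnit;
            var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit);
            var newAmount = op(lhs.Quantity.Amount, equivalentRhs.Quantity.Amount);
            return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit);
        }

        // Each named operation collapses to a one-liner:
        protected static TQuantity _Add<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            return _Apply(lhs, rhs, (a, b) => a + b);
        }

        protected static TQuantity _Subtract<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            return _Apply(lhs, rhs, (a, b) => a - b);
        }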

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code:

        class Program
        {
            static volatile bool flag1;
            static volatile bool flag2;
            static volatile int val;

            static void Main(string[] args)
            {
                for (int i = 0; i < 10000 * 10000; i++)
                {
                    if (i % 500000 == 0)
                    {
                        Console.WriteLine("{0:#,0}", i);
                    }
                    flag1 = false;
                    flag2 = false;
                    val = 0;
                    Parallel.Invoke(A1, A2);
                    if (val == 0)
                        throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2));
                }
            }

            static void A1()
            {
                flag2 = true;
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                if (flag2) val = 2;
            }
        }

    It fails! The main question is: why? I suppose the CPU reorders the flag1 = true; store with the if (flag2) load, but both flag1 and flag2 are marked as volatile fields...
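    That intuition is right, offered here as an editor's note: CLR volatile gives stores release semantics and loads acquire semantics, which still permits a store to one location to be reordered past a subsequent load of a different location (the classic store-load case). A sketch of the usual fix is a full fence between the two:

        // requires: using System.Threading;

        static void A1()
        {
            flag2 = true;
            Thread.MemoryBarrier(); // full fence: the store to flag2 completes before flag1 is read
            if (flag1) val = 1;
        }

        static void A2()
        {
            flag1 = true;
            Thread.MemoryBarrier(); // likewise for the store to flag1
            if (flag2) val = 2;
        }

    With the fences in place, at least one of the two threads must observe the other's flag, so val can no longer remain 0.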

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all. First, some background: I have some research code which performs a Monte Carlo simulation. Essentially, I iterate through a collection of objects and compute a number of vectors from their surfaces; then, for each vector, I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudocode looks something like this:

        for each object
        {
            for a number of vectors
            {
                do some computations
                for each object
                {
                    check if vector intersects
                }
            }
        }

    As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which compares arrays, lists and vectors, and for my first test cases found that vector iterators were around twice as fast as arrays. However, when I put a vector into my real code, it was somewhat slower than the array I was using before.

    So I went back to the test code and increased the complexity of the function each loop was calling (a dummy function equivalent to 'check if vector intersects'), and I found that as the complexity of the function increases, the execution-time gap between arrays and vectors shrinks, until eventually the array is quicker.

    Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop's run time.

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: certain small regions have relatively high density, plus there are relatively few isolated non-empty cells. The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). It also needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:

    1. Ask for some small rectangular region of cells and all their contents (a player's current neighborhood).
    2. Set individual cells, or blit small regions (the player is making a move).
    3. Ask for the rough shape or outline/silhouette of some larger rectangular region (a world map or region preview).
    4. Find some regions with approximately a given density (player spawning location).
    5. Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK for it to be a bad approximation often, but not OK to keep heading the wrong direction while searching).
    6. Approximate convex hull for a region.

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) with relatively few external dependencies (preferably avoiding the need for a persistent process). See the sketch after this list for one way to fit these operations into a plain relational store.

    What advice can you give me on actually implementing this? How would you do it if the web-app restrictions weren't in place, and how would you modify that if they were? Thanks a lot, everyone!
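    One editor's sketch that stays inside a plain relational store (all names and the tile size are assumptions): chunk the plane into fixed tiles and persist only non-empty tiles, caching a little summary data per tile so the density and preview queries never touch cell payloads:

        -- only non-empty 64x64 tiles get a row
        CREATE TABLE chunk (
            cx      INT   NOT NULL,   -- tile coordinate: floor(x / 64)
            cy      INT   NOT NULL,
            cells   BLOB  NOT NULL,   -- serialized 64x64 cell payload
            density FLOAT NOT NULL,   -- cached fill ratio, maintained on write
            PRIMARY KEY (cx, cy)
        );

        -- neighborhood fetch: every tile overlapping a small rectangle
        SELECT cx, cy, cells FROM chunk
        WHERE cx BETWEEN :x0 AND :x1
          AND cy BETWEEN :y0 AND :y1;

        -- spawn search: tiles near a target density
        SELECT cx, cy FROM chunk
        WHERE density BETWEEN :lo AND :hi;

    The silhouette query can read only (cx, cy, density), and pathfinding can run A* over tiles first, descending into cell data only along the candidate corridor.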

  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give the utility a try against a CodePlex project to which I contribute; that way, I can test functionality against a larger number of changesets than I have locally.

    While it works fine in my environment, heading out over the wire to CodePlex has left me stumped. My application queries the history, but then, when trying to iterate through it (which is when the IEnumerable lazy-loads), the application hangs. Looking at IntelliTrace, I see a couple of "first chance" exceptions saying the "item doesn't exist at the specified version", which is patently not true, as I'm asking for the history of "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors returned to me after forcing debugging to pause. Other operations (like GetItems()) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code:

        IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None,
                                             null, null, null, 5, true, false);
        List<ChangesetItem> returnList = new List<ChangesetItem>();
        foreach (Changeset cs in items) // hangs here on first iteration
        {
            ChangesetItem newItem = new ChangesetItem()
            {
                ChangesetId = cs.ChangesetId,
                //ChangesetNote = cs.CheckinNote.Values[0].Value,
                Comment = cs.Comment,
                Committer = cs.Committer,
                CreationDate = cs.CreationDate
            };
            returnList.Add(newItem);
        }

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application's UI) in an external source control system. This implies that my application can check the current version of an object for updates, provide a merging interface for each object, and so on.

    My question is which source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world, there are three general paradigms:

    1. Single repository, locked access (MS SourceSafe)
    2. Single repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard that case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI.

    The last thing I should mention is that this application is intended for deployment in commercial settings where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in corporate environments is a plus.

  • Porting - Shared Memory x32 & x64 processes

    - by dpb
    A 32-bit host Windows application sets up shared memory (using a memory-mapped file via the CreateFileMapping() API), and other 32-bit client processes then use this shared memory to communicate with each other.

    I am planning to port the host application to a 64-bit platform, and once it is ready, I intend that both 32-bit and 64-bit client processes should be able to use the shared memory set up by the main 64-bit host application. The original x32 host code uses "size_t" almost everywhere; since size_t differs between 4 bytes and 8 bytes as we move from x32 to x64, I am looking to replace it. I intend to replace "size_t" with "unsigned long long", so that its size is the same on 32-bit and 64-bit. Can you suggest a better alternative? Also, will the use of "unsigned long long" have a performance impact on the x32 app? I'm guessing yes.

    Research done; I found two very useful points:

    1. "20 issues in porting from 32-bit to 64-bit" (www.viva64.com)
    2. There is no way to restrict or change "size_t" on the x64 platform to 4 bytes using compiler flags or any hooks/crooks, since it is a typedef.
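    One alternative sketch (an editor's suggestion, not from the question): instead of unsigned long long throughout, use the fixed-width types from <cstdint> in every structure that lives in the shared mapping, and pin the packing, so both builds agree on every field offset. On the performance question: 64-bit arithmetic does cost a few extra instructions on x32, but for bookkeeping fields the difference is usually negligible next to the memory traffic itself.

        #include <cstdint>

        // Hypothetical shared-memory header: identical size and field offsets
        // in the 32-bit and 64-bit builds.
        #pragma pack(push, 4)
        struct SharedHeader
        {
            uint32_t version;        // lets old 32-bit clients detect a newer host
            uint32_t client_count;
            uint64_t payload_bytes;  // was size_t: 4 bytes on x32, 8 on x64
            uint64_t read_offset;
            uint64_t write_offset;
        };
        #pragma pack(pop)

        // One compile-time check per build catches layout drift early (C++11).
        static_assert(sizeof(SharedHeader) == 32, "shared layout must match across builds");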

  • Flex 4 front end connecting to Java Jersey Web Service

    - by user305801
    I created a Java REST service using Jersey, which uses three of the HTTP "verbs": GET, POST and DELETE. I want to create several prototype front ends for the service. After much research (a lot of it dating to 2008 and 2009), I have been unable to find anything remotely simple. My three options appear to be:

    1. resthttpservice (http://code.google.com/p/resthttpservice/). This project hasn't been updated in a year, and the only activity is one-off suggestions that individual users have implemented.
    2. Create an AIR application. This isn't unfeasible.
    3. Write my own socket-level code, but there is a security restriction with Flash players, and I would need to implement a policy server.

    I have already read the question posted here asking whether using Flex for REST services is worth it, but that information is old as well. I want to introduce REST services to my company, but Flex's limited support for HTTP PUT and DELETE is discouraging. My service also uses the Accept header to determine whether JSON or XML is returned to the client, and I can't seem to change HTTP headers without doing socket programming. I'm fine with that, but the security policy thing is annoying.

    Is there an easy way to use Flex 4 with RESTful services that use PUT/DELETE and the Accept HTTP header? Please help; I'm very frustrated.

  • Proftpd log issue

    - by ggenov
    Hi, I'm developing an application where I read the current changes in certain folders on the server via FTP. The problem is that for files whose names begin with non-standard symbols such as "~", the FTP server doesn't write the full path name to the log. Here is a part of my log:

        myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [STOR] [/home/myuser/public_html/galleries/~ajax-loader.gif]
        myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [MFMT] [-]
        myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [MLST] [-]
        myuser [14/Jun/2010:23:50:11 +0300] ::ffff:217.145.84.66 [MLST] [-]
        myuser [14/Jun/2010:20:50:11 +0000] ::ffff:217.145.84.66 [MLST] [-]
        myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [PASV] [-]
        myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [STOR] [/home/myuser/public_html/galleries/~~ajax-loader.gif]
        myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [MFMT] [-]
        myuser [14/Jun/2010:20:50:12 +0000] ::ffff:217.145.84.66 [MLST] [-]
        myuser [14/Jun/2010:23:50:12 +0300] ::ffff:217.145.84.66 [MLST] [-]
        myuser [14/Jun/2010:23:50:22 +0300] ::ffff:217.145.84.66 [DELE] []

    As you can see, all operations work very well except the [DELE] command. My proftpd.conf log settings:

        LogFormat               default "%U %t %a [%m] [%f]"
        ExtendedLog             /var/log/proftpd ALL default
        #SystemLog              /var/log/proftpd ALL default
        #TransferLog            /var/log/proftpd ALL default

    Does anybody have a solution for this issue?

  • Should I use MEF or Prism for my Silverlight project?

    - by Daniel
    Hi! My team (3 developers) will be building a Silverlight LOB application. This is our first Silverlight project; we've been doing mostly WinForms. We'll be using Silverlight 4, VS2010, possibly WCF RIA Services, and an ASP.NET web application to handle authentication and host the Silverlight pages. We need a way to:

    1. Modularize the Silverlight project so we can work on different parts of the application and then integrate them.
    2. Dynamically load different parts of the application, so the initial download size of the xap file isn't too large.

    After some research, I found that Prism and MEF are possible solutions to these goals. Can you give me advice on which framework to use, or possibly another solution? We don't have much experience with Silverlight, and the project needs to be finished in 3 months, so the learning curve for each framework should be considered. Thank you for reading! Any input will be much appreciated.

  • Representing complex scheduled recurrence in a database

    - by David Pfeffer
    I have the interesting problem of representing complex schedule data in a database. As a guideline, I need to be able to represent the entirety of what the iCalendar (ics) format can represent, but in a database. I'm not actually implementing anything relating to ics, but it gives a good sense of the scope of the rules I need to model.

    I need to allow representation of a single event, or of a recurring event based on multiple times per day, days of the week, week of the month, month, year, or some combination of those. For example: the third Thursday in November annually; the 25th of December annually; or every two weeks starting November 2 and continuing until September 8 of the following year.

    I don't care about insertion efficiency, but query efficiency is critical. The operation I will perform most often is: given either a single date/time or a date/time range, determine whether the defined schedule matches any part of it. Other operations can be slower. For example, given January 15, 2010 at 10:00 AM through January 15, 2010 at 11:00 AM, find all schedules that match at least part of that range (i.e. a schedule that covers 10:30 - 11:00 still matches).

    Any suggestions? I looked at http://stackoverflow.com/questions/1016170/how-would-one-represent-scheduled-events-in-an-rdbms but it doesn't cover the scope of the recurrence rules I'd like to model.
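    One common compromise, sketched here as an editor's note with assumed table names: keep each recurrence rule in its compact form (roughly an iCalendar RRULE per row), but have a job materialize concrete occurrences into an indexed table out to some horizon, so the hot query becomes a plain interval-overlap test:

        -- rules stay compact
        CREATE TABLE schedule (
            id        INT PRIMARY KEY,
            rrule     VARCHAR(255) NOT NULL,  -- e.g. 'FREQ=YEARLY;BYMONTH=11;BYDAY=3TH'
            starts_on DATE NOT NULL
        );

        -- occurrences are expanded ahead of time (say, two years out)
        CREATE TABLE occurrence (
            schedule_id INT NOT NULL,
            starts_at   TIMESTAMP NOT NULL,
            ends_at     TIMESTAMP NOT NULL
        );
        CREATE INDEX occurrence_range ON occurrence (starts_at, ends_at);

        -- "which schedules touch [:from, :to)?" is a pure overlap test:
        -- two intervals overlap exactly when each starts before the other ends
        SELECT DISTINCT schedule_id
        FROM occurrence
        WHERE starts_at < :to
          AND ends_at   > :from;

    The trade-off is that truly unbounded rules need periodic re-expansion, but point and range lookups stay index-only.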

  • Best XML format for log events in terms of tool support for data mining and visualization?

    - by Thorbjørn Ravn Andersen
    We want to be able to create log files from our Java application suited to later processing by tools, to help investigate bugs and gather performance statistics. Currently we use the traditional "log stuff which may or may not be flattened into text form and appended to a log file" approach, but that works best for small amounts of information read by a human.

    After careful consideration, the best bet seems to be storing the log events as XML snippets in text files (which are then treated like any other log file) and downloading them to a workstation with the appropriate tool for post-processing. I'd like to use as widely supported an XML format as possible, and right now I am in the "research, then make a decision" phase. I'd appreciate help both in terms of XML format and tools, and I'd be happy to write glue code to get what I need.

    What I've found so far:

    1. log4j XML format: supported by Chainsaw and Vigilog.
    2. Lilith XML format: supported by Lilith.

    Uninvestigated tools:

    1. Microsoft Log Parser: apparently supports XML.
    2. The OS X log viewer.
    3. The many tools listed at http://www.loganalysis.org/sections/parsing/generic-log-parsers/

    Any suggestions?
