Search Results


  • Java and Tomcat vs ASP.NET and IIS

    - by Mark Cooper
    Until recently I'd considered myself to be a pretty good web programmer (coming up on 10 years' commercial experience on a variety of e-commerce, static and enterprise applications). I'm self-taught and have always used the Microsoft product stack (ASP, ASP.NET). My applications are always functional and relatively bug-free, but have never been lightning quick. As a frequent web user I always found this to be the norm: how fast are the websites from the big tech players (eBay, Facebook, Microsoft, IBM, Dell, Telerik etc.)? In truth, none are particularly fast. I always attributed this to "the way things are with web apps"...

    ...then I came across a product called Jira from Atlassian, and it has stopped me in my tracks. This application is fast, and I mean blindingly fast: too fast to time the switches between pages, with fully live content, lots of images, data and cross references. I run this on an intranet, with a large application DB, on a very normal server (single processor, SATA HDD, 8GB RAM).

    Am I missing something? Are my programming techniques that bad? I am wondering if this speed gain is down to it being written in Java and running on Tomcat. Does anyone have any benchmarks comparing JSP / ASP or Tomcat / IIS?

    Thanks, Mark

    NOTE: this isn't a blatant plug for Jira. I don't work for them or have any affiliation with them... but I would like to be able to write applications like theirs :)


  • Can YAML have inheritance?

    - by Jason
    This question involves a lot of symfony, but it should be easy enough to follow for someone who only knows YAML and not symfony.

    My symfony models come from a three-step process:

    1. I create the tables in MySQL.
    2. I run a symfony command (symfony doctrine:build-schema) to convert my table structure into a YAML file.
    3. I run another symfony command (symfony doctrine:build-model) to convert the YAML file into PHP code.

    Here's the problem: there are some tables in the database that I don't want to end up in my symfony code. For example, let's say I have two tables: one called my_table and another called wordpress. The YAML file I end up with might look like this:

        MyTable:
          connection: doctrine
          tableName: my_table
        Wordpress:
          connection: doctrine
          tableName: wordpress

    That's great, except the wordpress table has nothing to do with my symfony models. The result is that every single time I make a change to my database and generate this YAML file, I have to manually remove wordpress. It's annoying! I'd like to be able to create a file called baseConfig.php or something that looks like this:

        $config = array(
            'MyTable' => array(
                'connection' => 'doctrine',
                'tableName' => 'my_table',
            ),
            'Wordpress' => array(
                'connection' => 'doctrine',
                'tableName' => 'wordpress',
            ),
        );

    And then I could have a separate file called config.php or something where I could make modifications to the base config:

        unset($config['Wordpress']);

    So my question is: is there any way to convert YAML into executable PHP code (as opposed to loading YAML into PHP data, like sfYaml::load() does) to achieve this sort of thing? Or is there maybe some other way to achieve YAML inheritance?

    Thanks, Jason


  • AntFarm anti-pattern -- strategies to avoid, antidotes to help heal from

    - by alchemical
    I'm working on a 10-page web site with a database back-end. There are 500+ objects in use, trying to implement the MVP pattern in ASP.Net. I'm tracing the code execution from a single page; my finger has been on F11 in Visual Studio for about 40 minutes and there seems to be no end: possibly 1000+ method calls for one web page! If it was just 50 objects that would be one thing, but code execution snakes through all these objects just like millions of ants frantically working in their giant dirt-mound house, riddled with object tunnels. Hence, a new anti-pattern is born: AntFarm.

    AntFarm is also known as "OO-Madness", "OO-Fever", "OO-ADD", or simply design-pattern junkie. This is not the first time I've seen this, nor have my associates at other companies. It seems that this style is being actively propagated, or in any case is a misunderstanding of the numerous OO/DP gospels going around...

    I'd like to introduce an anti-pattern to the anti-pattern: GSD or "Get Stuff Done", AKA "Get Sh** Done", AKA GRD (GetRDone). This pattern focuses on just what it says: getting stuff done, in a simple way. I may try to outline it more in a later post, or please share your ideas on this antidote pattern.

    Anyway, I'm in the midst of a great example of the AntFarm anti-pattern as I write (as a bonus, there is no documentation or comments). Please share your thoughts on how this anti-pattern has become so prevalent, how we can avoid it, and how one can undo or deal with this pattern in a live system one must work with!


  • Is there an "embedded DBMS" to support multiple writer applications (processes) on the same db files

    - by Amir Moghimi
    I need to know if there is any embedded DBMS (preferably in Java, and not necessarily relational) which supports multiple writer applications (processes) on the same set of db files. BerkeleyDB supports multiple readers but just one writer; I need multiple writers and multiple readers.

    UPDATE: It is not a multiple-connection issue. I mean I do not need multiple connections to a running DBMS application (process) to write data; I need multiple DBMS applications (processes) to commit to the same storage files. HSQLDB, H2, JavaDB (Derby) and MongoDB do not support this feature. I think there may be some file system limitations that prohibit this. If so, is there a file system that allows multiple writers on a single file?

    Use case: a high-throughput clustered system that intends to store its high-volume business log entries on SAN storage. Storing business logs in separate files for each server does not fit, because query and indexing capabilities are needed over the whole set of business logs. Because "a SAN typically is its own network of storage devices that are generally not accessible through the regular network by regular devices", I want to use SAN network bandwidth for logging while cluster LAN bandwidth is used for other server-to-server and client-to-server communications.
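
    One caveat on the H2 point above that may be worth re-testing: H2 has an "automatic mixed mode" (the AUTO_SERVER=TRUE URL flag) in which the first process to open the database files starts an embedded TCP server, and every further process attaches to it through the lock file, so several OS processes can read and write the same files concurrently. Strictly speaking, one process still owns the files and the rest proxy through it over the network, which may defeat the SAN-bandwidth goal. A minimal sketch, assuming the database lives at a shared path (the path and table name are made up for illustration):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Run this same program from several independent JVMs/hosts: with
        // AUTO_SERVER=TRUE each of them can commit to the same database files
        // (requires the H2 jar on the classpath).
        public class MixedModeWriter {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:h2:/san/share/bizlog;AUTO_SERVER=TRUE", "sa", "");
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE IF NOT EXISTS biz_log(ts TIMESTAMP, entry VARCHAR)");
                    st.execute("INSERT INTO biz_log VALUES (CURRENT_TIMESTAMP, 'entry from one writer process')");
                }
                conn.close();
            }
        }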


  • Hibernate limitations on using variables in queries

    - by sammichy
    I had asked the following question: given this table structure for a table Player

        Player {
            Long playerID;
            Long points;
            Long rank;
        }

    and assuming that the playerID and the points have valid values, can I update the rank for all the players based on the number of points in a single query? If two people have the same number of points, they should tie for the rank. I received this answer from Daniel Vassallo (thank you):

        UPDATE player
        JOIN (SELECT p.playerID,
                     IF(@lastPoint <> p.points, @curRank := @curRank + 1, @curRank) AS rank,
                     IF(@lastPoint = p.points, @curRank := @curRank + 1, @curRank),
                     @lastPoint := p.points
              FROM player p
              JOIN (SELECT @curRank := 0, @lastPoint := 0) r
              ORDER BY p.points DESC
             ) ranks ON (ranks.playerID = player.playerID)
        SET player.rank = ranks.rank;

    When I try to execute this as a native query in Hibernate, the following exception is thrown:

        java.lang.IllegalArgumentException: org.hibernate.QueryException:
        Space is not allowed after parameter prefix ':'

    Apparently this has been an open issue for the last couple of years. I want to know if the ranking query can be made to work, either without using any variables in the SQL query, or using any workaround for Hibernate.
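
    One workaround sketch, not from the original thread: move the statement into a MySQL stored procedure, so the ':=' assignments never pass through Hibernate's native-query parser (which is what trips the "parameter prefix" check), then invoke the procedure. The procedure name below is made up for illustration:

        // Assumes the UPDATE above has been wrapped in a stored procedure
        // named update_player_ranks on the MySQL side.
        Session session = sessionFactory.getCurrentSession();
        session.createSQLQuery("CALL update_player_ranks()").executeUpdate();

    Some later Hibernate versions also accept backslash-escaped colons in native SQL, but whether that applies depends on the exact version in use.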


  • Strange rare out-of-order data received using Indy

    - by Jim
    We're having a bizarre problem with Indy10 where two large strings (a few hundred characters each) that we send out one after the other are appearing at the other end intertwined oddly. This happens extremely infrequently.

    Each string is a complete XML message terminated with a LF, and in general the read process reads an entire XML message, returning when it sees the LF. The call to actually send the message is protected by a critical section around the call to the IOHandler's WriteLn method, so it is not possible for two threads to send at the same time. (We're certain the critical section is implemented and working properly.)

    The symptoms are odd: when we send string A followed by string B, what we receive at the other end (on the rare occasions where it fails) is the trailing section of string A by itself (i.e., there's a LF at the end of it), followed by the leading section of string A, then the entire string B, followed by a single LF. We've verified that the "timed out" property is not true after the partial read - we log that property after every read that returns content. Also, we know there are no embedded LF characters in the string, as we explicitly replace all non-alphanumeric characters in the string with spaces before appending the LF and sending it.

    We have log mechanisms inside the critical sections on both the transmitting and receiving ends, so we can see this behavior at the "wire". We're completely baffled and wondering (although it's always the lowest possibility) whether there could be some low-level Indy issue that might cause this, e.g., buffers being sent in the wrong order... very hard to believe this could be the issue, but we're grasping at straws. Does anyone have any bright ideas?


  • Is there a better way to create a generic convert string to enum method or enum extension?

    - by Kelsey
    I have the following methods in an enum helper class (I have simplified it for the purpose of the question):

        static class EnumHelper
        {
            public enum EnumType1 : int { Unknown = 0, Yes = 1, No = 2 }
            public enum EnumType2 : int { Unknown = 0, Dog = 1, Cat = 2, Bird = 3 }
            public enum EnumType3 : int { Unknown = 0, iPhone = 1, Andriod = 2, WindowsPhone7 = 3, Palm = 4 }

            public static EnumType1 ConvertToEnumType1(string value)
            {
                return (string.IsNullOrEmpty(value))
                    ? EnumType1.Unknown
                    : (EnumType1)(Enum.Parse(typeof(EnumType1), value, true));
            }

            public static EnumType2 ConvertToEnumType2(string value)
            {
                return (string.IsNullOrEmpty(value))
                    ? EnumType2.Unknown
                    : (EnumType2)(Enum.Parse(typeof(EnumType2), value, true));
            }

            public static EnumType3 ConvertToEnumType3(string value)
            {
                return (string.IsNullOrEmpty(value))
                    ? EnumType3.Unknown
                    : (EnumType3)(Enum.Parse(typeof(EnumType3), value, true));
            }
        }

    So the question here is: can I trim this down to an enum extension method, or maybe some type of single method that can handle any type? I have found some examples that do so with basic enums, but the difference in my example is that all the enums have the Unknown item that I need returned if the string is null or empty (if no match is found I want it to fail). I'm looking for something like the following, maybe:

        EnumType1 value = EnumType1.Convert("Yes");
        // or
        EnumType1 value = EnumHelper.Convert(EnumType1, "Yes");

    One function to do it all... how to handle the Unknown element is the part that I am hung up on.


  • Finding the last focused element with jQuery.

    - by Joshua Cody
    I'm looking to determine which element had the last focus in a series of inputs that are added dynamically by the user.

    This code can only get the inputs that are available on page load:

        $('input.item').focus(function(){
            $(this).siblings('ul').slideDown();
        });

    And this code sees all elements that have ever had focus:

        $('input.item').live('focus', function(){
            $(this).siblings('ul').slideDown();
        });

    The HTML structure is this:

        <ul>
            <li><input class="item" name="goals[]">
                <ul>
                    <li>long list here</li>
                    <li>long list here</li>
                    <li>long list here</li>
                </ul>
            </li>
        </ul>
        <a href="#" id="add">Add another</a>

    On page load, a single input loads. Then with each "Add another", a new copy of the top unordered list's contents is made and appended. When each gets focus, I'd like to show the list beneath it. But I don't seem to be able to "watch for the most recently focused element, which exists now or in the future." Do I have some sort of fundamental assumption wrong?


  • Deploy multiple instances of an EAR (representing versions) to Glassfish

    - by Thorbjørn Ravn Andersen
    I basically want to be able to deploy multiple versions of the same EAR file to the same server (Glassfish instance?) and have a unique path to each version separating them.

    From my reading on this, it appears that multiple EARs deploy to the root of the web server namespace, so they can coexist as long as they do not have colliding context-roots in their WARs. In my case, instead of everything going under "/", I'd like to be able to brand a given EAR-file build to ALWAYS deploy under a given path like "/foo-20100319" or "/foo-CUSTOMER-20010101". This can easily be done with a single WAR file just by renaming it. I do not need or want the versions to disturb each other.

    It is my understanding that this remapping is outside the scope of the application.xml file, but I found that http://docs.sun.com/app/docs/doc/820-7693/beayr?a=view says I can specify web-uri and context-root. I am not certain, though, that what I wish to do can be specified with these in Glassfish. How should I approach this? I have full control over the build process. A sketch of that descriptor is shown below.

    (I have found http://stackoverflow.com/questions/877390/deploying-multiple-java-web-apps-to-glassfish-in-one-go but I am not certain how to apply this to what I need.)
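
    For reference, the web-uri / context-root pair from that link lives in the EAR's META-INF/application.xml; a minimal sketch (module and path names are made up for illustration):

        <application>
            <module>
                <web>
                    <web-uri>foo-web.war</web-uri>
                    <context-root>/foo-20100319</context-root>
                </web>
            </module>
        </application>

    Since the build process is under full control, the datestamped context-root could be substituted into this file at packaging time.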


  • How can I return json from my WCF rest service (.NET 4), using Json.Net, without it being a string,

    - by Samuel Meacham
    The DataContractJsonSerializer is unable to handle many scenarios that Json.Net handles just fine when properly configured (specifically, cycles).

    A service method can either return a specific object type (in this case a DTO), in which case the DataContractJsonSerializer will be used, or I can have the method return a string and do the serialization myself with Json.Net. The problem is that when I return a json string as opposed to an object, the json that is sent to the client is wrapped in quotes.

    Using DataContractJsonSerializer, returning a specific object type, the response is:

        {"Message":"Hello World"}

    Using Json.Net to return a json string, the response is:

        "{\"Message\":\"Hello World\"}"

    I do not want to have to eval() or JSON.parse() the result on the client, which is what I would have to do if the json comes back as a string wrapped in quotes. I realize that the behavior is correct; it's just not what I want/need. I need the raw json: the behavior I get when the service method's return type is an object, not a string.

    So, how can I have my method return an object type but not use the DataContractJsonSerializer? How can I tell it to use the Json.Net serializer instead? Or, is there some way to write directly to the response stream, so I can return the raw json myself, without the wrapping quotes?

    Here is my contrived example, for reference:

        [DataContract]
        public class SimpleMessage
        {
            [DataMember]
            public string Message { get; set; }
        }

        [ServiceContract]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class PersonService
        {
            // uses DataContractJsonSerializer
            // returns {"Message":"Hello World"}
            [WebGet(UriTemplate = "helloObject")]
            public SimpleMessage SayHelloObject()
            {
                return new SimpleMessage() { Message = "Hello World" };
            }

            // uses Json.Net serialization, to return a json string
            // returns "{\"Message\":\"Hello World\"}"
            [WebGet(UriTemplate = "helloString")]
            public string SayHelloString()
            {
                SimpleMessage message = new SimpleMessage() { Message = "Hello World" };
                string json = JsonConvert.SerializeObject(message); // SerializeObject is Json.Net's method name
                return json;
            }

            // I need a mix of the two: return an object type, but use the Json.Net serializer.
        }


  • Dealing with asynchronous control structures (Fluent Interface?)

    - by Christophe Herreman
    The initialization code of our Flex application does a series of asynchronous calls to check user credentials, load external data, connect to a JMS topic, etc. Depending on the context the application runs in, some of these calls are not executed, or are executed with different parameters.

    Since all of these calls happen asynchronously, the code controlling them is hard to read, understand, maintain and test. For each call, we need some callback mechanism in which we decide what call to execute next.

    I was wondering if anyone had experimented with wrapping these calls in executable units and having a Fluent Interface (FI) that would connect and control them. Off the top of my head, the code might look something like:

        var asyncChain:AsyncChain = execute(LoadSystemSettings)
            .execute(LoadAppContext)
            .if(IsAutologin)
                .execute(AutoLogin)
            .else()
                .execute(ShowLoginScreen)
            .etc;
        asyncChain.execute();

    The AsyncChain would be an execution tree, built with the FI (and we could of course also build one without a FI). This might be an interesting idea for environments that run in a single-threaded model like the Flash Player, Silverlight, JavaFX, ... Before I dive into the code to try things out, I was hoping to get some feedback.


  • How should I write Jquery Mobile app for browsers with and without javascript support?

    - by Adrian Grigore
    Hi,

    I'm trying to wrap my head around jQuery Mobile. My aim is to build a very fast application with a look and feel as close as possible to a native app (at least for modern devices). I understand there are two ways of navigating between pages:

    1. Loading each page as a separate page, and linking to other pages with regular HTML anchors.
    2. Putting all (or many) pages on one single web page, and navigating between them by means of JavaScript ($.mobile.changePage() and similar API functions).

    The first approach should work on all browsers, but performs quite poorly, since there is a delay between each page transition. The second looks like it should be much faster, so I would definitely prefer this approach. But how would that work for mobile device browsers without JavaScript support? It certainly seems to violate jQuery Mobile's aim of providing a gracefully degraded experience for C-grade browsers. It looks to me like I need to implement my app twice: once optimized for browsers with JavaScript support, once for browsers without?

    Using may be another option, but that looks even more messy.

    What's the recommended way to approach this dilemma? Is there anything I have not noticed?

    Thanks, Adrian


  • How to get contacts in order of their upcoming birthdays?

    - by Pentium10
    I have code to read contact details and to read a birthday. But how do I get a list of contacts in order of their upcoming birthdays?

    For a single contact identified by id, I get details and birthday like this:

        Cursor c = null;
        try {
            Uri uri = ContentUris.withAppendedId(ContactsContract.Contacts.CONTENT_URI, id);
            c = ctx.getContentResolver().query(uri, null, null, null, null);
            if (c != null) {
                if (c.moveToFirst()) {
                    DatabaseUtils.cursorRowToContentValues(c, data);
                }
            }
            c.close();

            // read birthday
            c = ctx.getContentResolver().query(
                    Data.CONTENT_URI,
                    new String[] { Event.DATA },
                    Data.CONTACT_ID + "=" + id
                            + " AND " + Data.MIMETYPE + "= '" + Event.CONTENT_ITEM_TYPE + "'"
                            + " AND " + Event.TYPE + "=" + Event.TYPE_BIRTHDAY,
                    null, Data.DISPLAY_NAME);
            if (c != null) {
                try {
                    if (c.moveToFirst()) {
                        this.setBirthday(c.getString(0));
                    }
                } finally {
                    c.close();
                }
            }
            return super.load(id);
        } catch (Exception e) {
            Log.v(TAG(), e.getMessage(), e);
            e.printStackTrace();
            return false;
        } finally {
            if (c != null) c.close();
        }
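
    One way to get the "upcoming" ordering, sketched under the assumption that the birthdays read above are "yyyy-MM-dd" strings (a common Event.START_DATE format, though some account types store variants) held in a hypothetical Contact class with a getBirthday() accessor: sort by days until the next occurrence.

        // Days from today until the birthday next occurs (0 = today).
        static int daysUntilNextBirthday(String birthday, Calendar today) {
            String[] parts = birthday.split("-");       // [yyyy, MM, dd]
            Calendar next = (Calendar) today.clone();
            next.set(Calendar.MONTH, Integer.parseInt(parts[1]) - 1);
            next.set(Calendar.DAY_OF_MONTH, Integer.parseInt(parts[2]));
            if (next.before(today)) {
                next.add(Calendar.YEAR, 1);             // already passed this year
            }
            long diff = next.getTimeInMillis() - today.getTimeInMillis();
            return (int) (diff / (24L * 60 * 60 * 1000));
        }

        // Sort the contact list so the nearest birthday comes first.
        Collections.sort(contacts, new Comparator<Contact>() {
            public int compare(Contact a, Contact b) {
                Calendar now = Calendar.getInstance();
                return daysUntilNextBirthday(a.getBirthday(), now)
                     - daysUntilNextBirthday(b.getBirthday(), now);
            }
        });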


  • What's the best way to fill or paint around an image in Java?

    - by wsorenson
    I have a set of images that I'm combining into a single image mosaic using JAI's MosaicDescriptor. Most of the images are the same size, but some are smaller. I'd like to fill in the missing space with white; by default, the MosaicDescriptor uses black.

    I tried setting the double[] background parameter to { 255 }, and that fills in the missing space with white, but it also introduces some discoloration in some of the other full-sized images.

    I'm open to any method; there are probably many ways to do this, but the documentation is difficult to navigate. I am considering converting any smaller images to a BufferedImage and calling setRGB() on the empty areas (though I am unsure what to use for the scansize on the batch setRGB() method).

    My question is essentially: what is the best way to take an image (in JAI, or BufferedImage) and fill / add padding to a certain size? Is there a way to accomplish this in the MosaicDescriptor call without side-effects?

    For reference, here is the code that creates the mosaic:

        for (int i = 0; i < images.length; i++) {
            images[i] = JPEGDescriptor.create(new ByteArraySeekableStream(images[i]), null);
            if (i != 0) {
                images[i] = TranslateDescriptor.create(image, (float) (width * i), null, null, null);
            }
        }
        RenderedOp finalImage = MosaicDescriptor.create(ops, MosaicDescriptor.MOSAIC_TYPE_OVERLAY,
                null, null, null, null, null);
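
    Two observations, offered as guesses rather than tested fixes. First, MosaicDescriptor's background value is per band, so for 3-band RGB data a single-element { 255 } may be the source of the discoloration; { 255, 255, 255 } would be the usual value to try. Second, the BufferedImage route needs no setRGB() bookkeeping at all: draw the small image onto a white canvas of the target size.

        import java.awt.Color;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        // Pad a small image out to the mosaic tile size with a white background.
        static BufferedImage padToSize(BufferedImage src, int targetW, int targetH) {
            BufferedImage padded = new BufferedImage(targetW, targetH, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = padded.createGraphics();
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, targetW, targetH);   // white fills whatever the image doesn't cover
            g.drawImage(src, 0, 0, null);         // original pixels in the top-left corner
            g.dispose();
            return padded;
        }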


  • Phantom class definitions in Flash CS5?

    - by cc
    I'm trying to get FlashPunk working in the Flash CS5 IDE (don't ask), and I'm having trouble getting it to compile. In strict mode, the error I'm getting is:

        net/flashpunk/FP.as, Line 95
        1119: Access of possibly undefined property _inherit through a reference with static type World.

    Typically, this means that there is a missing variable definition or the class being compiled cannot see that variable. Presumably, the framework compiles for others, so I'm pretty sure this isn't the issue, but I went in anyway, made sure the variables existed, and set them to public (they were set to internal) - the error still occurred. It was almost as if the compiler wasn't seeing the property definitions.

    If I turn off "strict mode", the app compiles, but then I get this error:

        ArgumentError: Error #1063: Argument count mismatch on World(). Expected 2, got 0.

    Now, World is a class in the FlashPunk package. In the class definition, the constructor does not expect any arguments:

        public function World() { ...

    ...and yet, for some reason, Flash is expecting two arguments. So it appears that everything is correct, but Flash somehow expects something different than what World's constructor defines. These two issues combined make it seem like Flash is getting some phantom version of another class called "World" which has two constructor arguments and different properties.

    I've gone in and checked the ActionScript settings. The only external-to-project stuff referenced is the default "$(AppConfig)/ActionScript 3.0/libs", and I'm not using any of my own code other than a single "Main.as" file that super's Engine to set a few parameters - certainly, there's no new World class. With a generic name like "World", I thought perhaps this is a reserved class name within Flash or something, maybe imported from the default libs mentioned above, but some Googling turning up empty seems to put the lie to that.

    Any idea what might be going on?


  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple scheme. Essentially, Run <-- Data (where a Run holds a piece of data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity), so a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.)

    If I create a new mapping model, then things generally work - except that no new Samples are created, and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my model, but even the slightest change to the generated mapping model results in Cocoa error 134110.

    For example, if I take the "Sample" mapping (which has no source) and set its source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "DataToData", $source.dataSet)

    Following this example, I set the run property to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToRun", $source)

    Similarly, I set the 'sample' property mapping in RunToRun to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source)

    and the 'sample' property in DataToData to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source.run)

    So what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance:, but these efforts have met with much the same results.

    Thanks in advance for any pointers or (relevant :-) advice.


  • Simulating Google Appengine's Task Queue with Gearman

    - by sotangochips
    One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task. This structure means that the tasks are always executing the most current version of the code.

    Conversely, my Gearman workers all run code within my Django project - so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code. My goal is to have the task queue be independent from the code base, so that I can push a new live version without restarting any workers. So, I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue?

    The process would work like this (see the sketch after this list):

    1. A user request comes in and triggers a few tasks that shouldn't be blocking.
    2. Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
    3. The Gearman server finds a worker and passes the URL and POST data to the worker.
    4. The worker simply posts to the URL with the data, thus executing the task.

    Assume the following:

    - Each request from a Gearman worker is signed somehow, so that we know it's coming from a Gearman server and not a malicious request.
    - Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).

    What are the potential pitfalls of such an approach? Here's one that worries me: the server can potentially get hammered with many requests all at once that are triggered by a previous request. So one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request to rate-limit. Any thoughts?
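
    For what it's worth, step 4 is small enough to be library-agnostic. A minimal Java sketch of the worker's job body, assuming (purely for illustration) that the queued payload carries the task URL on the first line and the POST body on the rest:

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        // Execute one queued task by POSTing its payload to its URL.
        static int executeTask(String payload) throws Exception {
            int nl = payload.indexOf('\n');
            String taskUrl = payload.substring(0, nl);
            byte[] body = payload.substring(nl + 1).getBytes("UTF-8");

            HttpURLConnection conn = (HttpURLConnection) new URL(taskUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setConnectTimeout(10000);  // matches the "under 10 seconds" assumption
            conn.setReadTimeout(10000);
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write(body);                // whatever code is live at the URL runs the task
            out.close();
            return conn.getResponseCode();
        }

    Since the worker never imports application code, deploying a new version changes task behavior without restarting any workers.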


  • JPA merge fails due to duplicate key

    - by wobblycogs
    I have a simple entity, Code, that I need to persist to a MySQL database:

        @Entity
        public class Code implements Serializable {

            @Id
            private String key;

            private String description;

            // ...getters and setters...
        }

    The user supplies a file full of key/description pairs, which I read, convert to Code objects and then insert in a single transaction using em.merge(code). The file will generally have duplicate entries, which I deal with by first adding them to a map keyed on the key field as I read them in.

    A problem arises, though, when keys differ only by case (for example: XYZ and XyZ). My map will, of course, contain both entries, but during the merge process MySQL sees the two keys as being the same, and the call to merge fails with a MySQLIntegrityConstraintViolationException.

    I could easily fix this by uppercasing the keys as I read them in, but I'd like to understand exactly what is going wrong. The conclusion I have come to is that JPA considers XYZ and XyZ to be different keys, but MySQL considers them to be the same. As such, when JPA checks its list of known keys (or does whatever it does to determine whether it needs to perform an insert or update), it fails to find the previous insert and issues another, which then fails. Is this correct? Is there any way around this other than better filtering of the client data? I haven't defined .equals or .hashCode on the Code class, so perhaps this is the problem.
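
    That reading matches the usual explanation: the in-memory map compares keys case-sensitively, while MySQL's default collations (e.g., utf8_general_ci) compare case-insensitively, so two map entries compete for one row. (.equals/.hashCode on the entity won't change this - the collision happens in the database, not in the persistence context.) If preserving the casing of one occurrence matters, one sketch of "better filtering" is to stage the file into a case-insensitive map before merging (parsedCodes stands in for whatever the file parser produced):

        import java.util.Map;
        import java.util.TreeMap;

        // Collapse keys that differ only by case, mirroring MySQL's comparison.
        Map<String, Code> staging = new TreeMap<String, Code>(String.CASE_INSENSITIVE_ORDER);
        for (Code code : parsedCodes) {
            staging.put(code.getKey(), code);   // later duplicates replace earlier ones
        }
        for (Code code : staging.values()) {
            em.merge(code);
        }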


  • How many files in a directory is too many?

    - by Kip
    Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server.)

    Background: I have a photo album website, and every image uploaded is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata are stored in a database.

    Right now, I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through an FTP or SSH client) take a few seconds. But I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user.

    I've thought about reducing the number of images per directory by making 16 subdirectories: 0-9 and a-f. Then I'd move the images into the subdirectories based on the first hex digit of the filename. But I'm not sure that there's any reason to do so, except for the occasional listing of the directory through FTP/SSH.
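
    The bucketing scheme in the last paragraph is only a few lines; a minimal sketch (names and paths are made up for illustration):

        import java.io.File;

        // Map "a58f375c.jpg" to <imagesRoot>/a/a58f375c.jpg based on its first hex digit.
        static File bucketFor(File imagesRoot, String filename) {
            File bucket = new File(imagesRoot, filename.substring(0, 1));
            bucket.mkdirs();                     // create the bucket on first use
            return new File(bucket, filename);
        }

    Using the first two hex digits instead would give 256 buckets (only a handful of files each at the current volume), which is the same idea scaled up.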


  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (resource-oriented architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key.

    From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: get a unique transaction id for creating the new PERSON resource:

        Client request:
        GET /CREATE_PERSON

        Server response:
        200 OK
        transaction-id: "as8yfasiob"

    Step 2: create the new person resource in a request guaranteed to be unique by using the transaction id:

        Client request:
        PUT /CREATE_PERSON/{transaction_id}
        first_name="Big bubba"

        Server response:
        201 Created
        PersonKey="398u4nsdf"

    (If the request is a duplicate, the server would send this same response without creating a new resource. It would perhaps send an error response if the transaction id was used on a non-duplicate request, but I have control over the client, so I can guarantee that this won't happen.)

    The problem that I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application.

    Is there a way to do this?
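
    For what it's worth, the server side of step 2 reduces to an atomic "first writer wins" lookup. A minimal framework-agnostic Java sketch (the map stands in for durable storage, and all names are made up):

        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        class PersonCreator {
            private final ConcurrentMap<String, String> keyByTxn =
                    new ConcurrentHashMap<String, String>();

            // Replaying the same PUT with the same transaction id returns the
            // same PersonKey and creates nothing new.
            String createPerson(String transactionId, String firstName) {
                String fresh = UUID.randomUUID().toString();
                String existing = keyByTxn.putIfAbsent(transactionId, fresh);
                if (existing != null) {
                    return existing;                // duplicate: replay the original response
                }
                insertPersonRow(fresh, firstName);  // hypothetical persistence call
                return fresh;
            }

            private void insertPersonRow(String key, String firstName) { /* ... */ }
        }

    With that in place, step 1 exists only to mint the id - which is why client-generated ids (e.g., a UUID minted by the client) are a common way to collapse the exchange into one request, at the cost of trusting clients to generate unique values.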


  • Starting CLI application programmatically does not work depending on arguments

    - by Daniel Beck
    I'm trying to start plink.exe (PuTTY Link, the command-line utility/version of PuTTY) from a C# application to establish an SSH reverse tunnel, but it no longer works as soon as I pass the correct parameters.

    What does that mean? The following works as expected: it opens a command-line window, displays that I forgot to pass the password for the -pw argument, quits, and shows the prompt. I know it got the arguments, since it specifically requests the one thing I did not provide.

        Uri uri = omitted;
        ProcessStartInfo info = new ProcessStartInfo();
        info.FileName = "cmd";
        info.Arguments = "/k \"C:\\Program Files (x86)\\PuTTY\\plink.exe\" -R 3389:"
            + uri.Host + ":" + uri.Port + " -N -l username -pw"; // TODO pwd
        Process p = Process.Start(info);

    I tried the same thing calling plink.exe directly instead of via cmd.exe /k, but the window closes immediately, which is unfortunate for bug-hunting.

    BUT when I pass a password in the arguments, plink.exe displays the program help (showing available parameters) and quits:

        Uri uri = omitted;
        ProcessStartInfo info = new ProcessStartInfo();
        info.FileName = "cmd";
        info.Arguments = "/k \"C:\\Program Files (x86)\\PuTTY\\plink.exe\" -R 3389:"
            + uri.Host + ":" + uri.Port + " -N -l username -pw secretpassword";
        Process p = Process.Start(info);

    There is no indication of missing parameters. Neither the cmd /k nor the direct plink.exe variant works (the latter closes immediately, so no information regarding different behaviour). When I launch the application from the Windows 7 Start Menu launcher with the identical parameters, it opens a cmd.exe window and establishes the connection as requested.

    What's wrong? Is there a way plink.exe notices it's not running in a real shell? If yes, how can I circumvent it, like the Start Menu "prompt" does?

    I hope this question is right for SO since, though specific to a single application, it revolves around launching that application successfully programmatically.


  • How does Lucene index documents?

    - by Mehdi Amrollahi
    Hello, I read some documentation about Lucene, including the talk at http://lucene.sourceforge.net/talks/pisa. I don't really understand how Lucene indexes documents, or which algorithms it uses for indexing. The above link says Lucene uses this algorithm for indexing:

        incremental algorithm:
            maintain a stack of segment indices
            create index for each incoming document
            push new indexes onto the stack
            let b=10 be the merge factor; M=8

            for (size = 1; size < M; size *= b) {
                if (there are b indexes with size docs on top of the stack) {
                    pop them off the stack;
                    merge them into a single index;
                    push the merged index onto the stack;
                } else {
                    break;
                }
            }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree algorithm or any other algorithm like that for indexing - or does it have a particular algorithm?

    Thank you for reading my post.
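
    The pseudocode above is easy to play with numerically, which makes the "why is this cheap" question concrete. A small self-contained simulation, where each stack entry is just a segment's document count (the talk's M bound is dropped here so the merge cascade always runs to completion):

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Simulate the incremental merge: because b same-sized indexes are merged
        // into one b-times-larger index, the stack holds O(log_b N) segments and
        // each document is rewritten at most once per size level.
        public class MergeSim {
            public static void main(String[] args) {
                final int b = 10;                       // merge factor from the talk
                Deque<Integer> stack = new ArrayDeque<Integer>();
                for (int doc = 0; doc < 123456; doc++) {
                    stack.push(1);                      // one-document index per new doc
                    for (long size = 1; ; size *= b) {
                        if (!bSameSizedOnTop(stack, b, (int) size)) break;
                        int merged = 0;
                        for (int i = 0; i < b; i++) merged += stack.pop();
                        stack.push(merged);
                    }
                }
                // Prints segment sizes whose counts mirror the digits of 123456:
                // six 1s, five 10s, four 100s, ... one 100000.
                System.out.println(stack);
            }

            static boolean bSameSizedOnTop(Deque<Integer> stack, int b, int size) {
                int seen = 0;
                for (int s : stack) {                   // iterates from the top
                    if (s != size) return false;
                    if (++seen == b) return true;
                }
                return false;
            }
        }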


  • IIS error hosting WCF Data Service on shared web host

    - by jkohlhepp
    My client has a website hosted on a shared web server. I don't have access to IIS. I am trying to deploy a WCF Data Service onto his site, and I am getting this error:

        IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous',
        but the binding only supports specification of exactly one authentication scheme.
        Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous.
        Change the IIS settings so that only a single authentication scheme is used.

    I have searched SO and other sites quite a bit, but can't seem to find someone with my exact situation. I cannot change the IIS settings, because this is a third party's server and it is a shared web server. So my only option is to change things in code or in the service config.

    My service config looks like this:

        <system.serviceModel xdt:Transform="Insert">
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://www.somewebsite.com"/>
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <bindings>
            <webHttpBinding>
              <binding name="{Binding Name}">
                <security mode="None" />
              </binding>
            </webHttpBinding>
          </bindings>
          <services>
            <service name="{Namespace to Service}">
              <endpoint address="" binding="webHttpBinding"
                        bindingConfiguration="{Binding Name}"
                        contract="System.Data.Services.IRequestHandler" />
            </service>
          </services>
        </system.serviceModel>

    As you can see, I tried setting the security mode to "None", but that didn't seem to help. What should I change to resolve this error?


  • iText PDFReader Extremely Slow To Open

    - by Wbmstrmjb
    I have some code that combines a few pages of acro forms (with acrofields intact) and then, at the end, writes some JS to the entire document. It is the PdfReader in the JS-adding function that takes extremely long to instantiate (about 12 seconds for a 1MB file). Here is the code (pretty simple):

        public static byte[] AddJavascript(byte[] document, string js)
        {
            PdfReader reader = new PdfReader(new RandomAccessFileOrArray(document), null);
            MemoryStream msOutput = new MemoryStream();
            PdfStamper stamper = new PdfStamper(reader, msOutput);
            PdfWriter writer = stamper.Writer;
            writer.AddJavaScript(js);
            stamper.Close();
            reader.Close();
            byte[] withJS = msOutput.GetBuffer();
            return withJS;
        }

    I have benchmarked the above, and the line that is slow is the first one. I have tried reading it from a file instead of memory, and tried using a MemoryStream instead of the RandomAccessFileOrArray. Nothing makes it any faster. If I add JS to a single-page document, it is very fast. So my thought is that the code that combines the pages is somehow making the PDF slow for the PdfReader to open. Here is the combine code:

        public static byte[] CombineFiles(List<byte[]> sourceFiles)
        {
            MemoryStream output = new MemoryStream();
            PdfCopyFields copier = new PdfCopyFields(output);
            try
            {
                output.Position = 0;
                foreach (var fileBytes in sourceFiles)
                {
                    PdfReader fileReader = new PdfReader(fileBytes);
                    copier.AddDocument(fileReader);
                }
            }
            catch (Exception exception)
            {
                //throw
            }
            finally
            {
                copier.Close();
            }
            // Note: MemoryStream.GetBuffer() returns the whole underlying buffer,
            // which can be longer than the data actually written (trailing zero
            // bytes); ToArray() would return only the bytes written.
            byte[] concat = output.GetBuffer();
            return concat;
        }

    I am using PdfCopyFields because I need to preserve the form fields, and so cannot use PdfCopy or PdfSmartCopy. This combine code is very fast (a few ms) and produces working documents. The AddJavascript code above is called after it, and the PdfReader open is the slow piece. Any ideas?


  • DataTableReader is invalid for current DataTable 'TempTable'

    - by Sk93
    Hi,

    I'm getting the following error whenever my code creates a DataTableReader from a valid DataTable object:

        DataTableReader is invalid for current DataTable 'TempTable'.

    The thing is, if I reboot my machine, it works fine for an undetermined amount of time, then dies with the above. The code that throws this error could have been working fine for hours and then: bang, you get this error. It's not limited to one line either; it's every single location where a DataTableReader is used. Also, this error does NOT occur on the production web server - ever.

    [The original post illustrated this with screenshots: the line where it falls over, the error shown when stepping over that line, and the same statement succeeding in the immediate window - and the same line also causes no problems when used directly in code.]

    This has been driving me nuts for the best part of a week, and I've failed to find anything on Google that could help (as I'm pretty positive this isn't a coding issue). Some technical info:

        DEV Box:
        Vista 32bit (with all current windows updates)
        Visual Studio 2008 v9.0.30729.1 SP
        dotNet Framework 3.5 SP1

        SQL Server:
        Microsoft SQL Server 2005 Standard Edition - 9.00.4035.00 (X64)
        Windows 2003 64bit (with all current windows updates)

        Web Server:
        Windows 2003 64bit (with all current windows updates)

    Any help, ideas, or advice would be greatly appreciated!

    Cheers, Ian

