Search Results

Search found 6580 results on 264 pages for 'require'.

Page 215/264

  • Creating a 'flexible' XML schema

    - by Fiona Holder
    I need to create a schema for an XML file that is pretty flexible. It has to meet the following requirements:
    1. Validate some elements that we require to be present, and know the exact structure of
    2. Validate some elements that are optional, and we know the exact structure of
    3. Allow any other elements
    4. Allow them in any order
    Quick example XML:

        <person>
          <age></age>
          <lastname></lastname>
          <height></height>
        </person>

    My attempt at an XSD:

        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="person">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="firstname" minOccurs="0" type="xs:string"/>
                <xs:element name="lastname" type="xs:string"/>
                <xs:any processContents="lax" minOccurs="0" maxOccurs="unbounded" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    Now my XSD satisfies requirements 1 and 3. It is not a valid schema, however, if both firstname and lastname were optional, so it doesn't satisfy requirement 2, and the order is fixed, which fails requirement 4. Now all I need is something to validate my XML. I'm open to suggestions on any way of doing this, either programmatically in .NET 3.5, another type of schema, etc. Can anyone think of a solution to satisfy all 4 requirements?
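    Not a fix for the schema design itself, but for the programmatic .NET 3.5 route the asker mentions, a minimal validation harness might look like the sketch below. The file names person.xsd and person.xml are placeholders, and requirements 2 and 4 would still need to be enforced by the schema or by extra checks after the read loop.

    ```csharp
    using System;
    using System.Xml;
    using System.Xml.Schema;

    class SchemaValidationSketch
    {
        static void Main()
        {
            // "person.xsd" and "person.xml" are placeholder file names.
            var settings = new XmlReaderSettings();
            settings.ValidationType = ValidationType.Schema;
            settings.Schemas.Add(null, "person.xsd");

            // Collect validation problems instead of stopping at the first one.
            settings.ValidationEventHandler += (sender, e) =>
                Console.WriteLine("{0}: {1}", e.Severity, e.Message);

            using (var reader = XmlReader.Create("person.xml", settings))
            {
                while (reader.Read()) { }   // validation happens as the reader advances
            }
        }
    }
    ```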

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:
    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes)
    - long-term, the process isn't sustainable. Each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way
    So, my question is, how should I be doing this properly? The requirements are:
    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad
    My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • Doubly Linked Lists Implementation

    - by user552127
    Hi all, I have looked at most threads here about doubly linked lists but am still unclear about the following. I am working through the Goodrich and Tamassia book in Java. About doubly linked lists, please correct me if I am wrong: it differs from a singly linked list in that a node can be inserted anywhere, not just after the head or after the tail, by using both the next and prev references, while in a singly linked list this insertion anywhere in the list is not possible? If one wants to insert a node in a doubly linked list, then the argument should be either the node after which or the node before which the new node is to be inserted? But if this is so, then I don't understand how to pass that node. Should we be displaying all nodes inserted so far and asking the user to select the node before or after which the new node is to be inserted? My doubt is how to pass this reference node, because I assume that will require the next and prev references of that node as well. For example, given Head<-A<-B<-C<-D<-E<-tail, if Z is the new node to be inserted after, say, D, then how should node D be passed? I am confused by this though it seems pretty simple to most. But please do explain. Thanks, Sanjay
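    On "how should node D be passed": typically the caller passes the node reference itself, obtained either from an earlier insertion or by walking the list. A minimal sketch in C# (the book's examples are in Java, but the idea carries over unchanged; names here are illustrative):

    ```csharp
    // Minimal doubly linked node with an insert-after operation.
    class Node<T>
    {
        public T Value;
        public Node<T> Prev, Next;
        public Node(T value) { Value = value; }
    }

    class DoublyLinkedList<T>
    {
        // Insert newValue immediately after an existing node and return the new node.
        // The caller passes the node reference itself (e.g. one returned by an earlier
        // insert, or found by walking from the head), not a copy of its data.
        public Node<T> InsertAfter(Node<T> node, T newValue)
        {
            var fresh = new Node<T>(newValue) { Prev = node, Next = node.Next };
            if (node.Next != null) node.Next.Prev = fresh;
            node.Next = fresh;
            return fresh;
        }
    }
    ```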

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that allows users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't need to load ALL these objects at once and keep them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, I need the best way to reload previously viewed data. Re-run the parser and repopulate the collection and GUI? Or perhaps find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset will contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so this might offer an alternative that allows me to keep the collection in memory. This is just general talking; the question of which collection to use is a separate and complex one. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done with similar scenarios. I can provide more specifics if needed.
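    One language-neutral reading of "only load when the user requests it" is to stream records and materialise only the page the GUI asks for, re-reading a page if the user navigates back. A rough sketch in C# (the project is Java, but the pattern is the same; each text line stands in for one parsed record):

    ```csharp
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    static class LazyFileLoader
    {
        // Stream records one at a time instead of materialising the whole file.
        public static IEnumerable<string> ReadRecords(string path)
        {
            using (var reader = new StreamReader(path))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    yield return line;          // parsed lazily, only when enumerated
            }
        }

        // Load just the slice the GUI asks for; a previously viewed page can simply
        // be re-read on demand rather than kept in memory.
        public static List<string> LoadPage(string path, int pageIndex, int pageSize)
        {
            return ReadRecords(path).Skip(pageIndex * pageSize).Take(pageSize).ToList();
        }
    }
    ```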

    Read the article

  • Extracting word from file using grep or sed

    - by Marco
    Hi, I have a file in the format below:

        File : \\dvtbbnkapp115\nautilus\030db28a-f241-4054-a0e3-9bfa7e002535.dip was processed.
        Entries Found : 0
        Unarchived Documents : 1
        File Size : 1 K
        Error : The following line could not be processed. Bad Document Type.
        Error : Marketing and Contact preference change update||7000003735||078ef1f3-db6b-46a8-bb0d-c40bb2296ab5.pdf
        File : \\dvtbbnkapp115\nautilus\078ef1f3-db6b-46a8-bb0d-c40bb2296ab5.dip was processed.
        Entries Found : 0
        Unarchived Documents : 1
        File Size : 1 K
        Error : The following line could not be processed. Bad Document Type.
        Error : Declined - Bureau Data (process)||7000003723|252204|2f1d71f4-052c-49f1-95cf-9ca9b4268f0c.pdf
        File : \\dvtbbnkapp115\nautilus\2f1d71f4-052c-49f1-95cf-9ca9b4268f0c.dip was processed.
        Entries Found : 0
        Unarchived Documents : 1
        File Size : 1 K
        Error : The following line could not be processed. Bad Document Type.
        Error : Unable to call - please contact|40640510016710|7000003180||3e6a792f-c136-4a4b-a654-37f4476ccef8.pdf

    I need to extract just the pdf file names after the double pipe and write them to a file. I am a novice when it comes to unix/sed/grep commands; I have tried but had no luck. Any ideas or examples I could use to extract the information above? Thanks

    Read the article

  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) as provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a 3rd party application. I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a django application that receives files from our customers; the others are not yet implemented). The platform it will be running on is likely going to be Python on Linux, unless I have a strong argument to use something else. In the beginning I thought I could fit the information I wanted to communicate in the filenames, and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible with the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (client uploads a file and updates the database with a command as a row in a table) but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients will have to use. I looked into Pyro to let applications communicate more directly but I don't like the idea of running an extra nameserver for this one purpose. I also don't see a good way to do file transfer within this framework. What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages with them.

    Read the article

  • Implementations of an Interface with Different Types?

    - by b3njamin
    Searched as best I could but unfortunately I've learned nothing relevant; basically I'm trying to work around the following problem in C#... For example, I have three possible references (refA, refB, refC) and I need to load the correct one depending on a configuration option. So far however I can't see a way of doing it that doesn't require me to use the name of said referenced object all through the code (the referenced objects are provided, I can't change them). Hope the following code makes more sense:

        public ??? LoadedClass;

        public Init()
        {
            /* load the object, according to which version we need... */
            if (Config.Version == "refA")
            {
                Namespace.refA LoadedClass = new refA();
            }
            else if (Config.Version == "refB")
            {
                Namespace.refB LoadedClass = new refB();
            }
            else if (Config.Version == "refC")
            {
                Namespace.refC LoadedClass = new refC();
            }
            Run();
        }

        private void Run()
        {
            LoadedClass.SomeProperty...
            LoadedClass.SomeMethod(){ etc... }
        }

    As you can see, I need the Loaded class to be public, so in my limited way I'm trying to change the type 'dynamically' as I load in which real class I want. Each of refA, refB and refC will implement the same properties and methods but with different names. Again, this is what I'm working with, not by my design. All that said, I tried to get my head around Interfaces (which sound like they're what I'm after) but I'm looking at them and seeing strict types - which makes sense to me, even if it's not useful to me. Any and all ideas and opinions are welcome and I'll clarify anything if necessary. Excuse any silly mistakes I've made in the terminology, I'm learning all this for the first time. I'm really enjoying working with an OOP language so far though - coming from PHP this stuff is blowing my mind :-)
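    Since the three provided classes share behaviour but not member names, one common approach is a small interface plus thin adapters, so LoadedClass can be declared as the interface type. A hedged sketch: RefA/RefB and their members below are invented stand-ins for the real vendor classes, which cannot be changed.

    ```csharp
    // Stand-ins for the provided classes; the real ones have different names/members.
    public class RefA { public string NameA = "A"; public void DoA() { } }
    public class RefB { public string NameB = "B"; public void DoB() { } }

    // A common contract the rest of the application programs against.
    public interface IProvider
    {
        string SomeProperty { get; }
        void SomeMethod();
    }

    // Thin adapters translate each vendor class onto the shared interface,
    // since the vendor classes themselves cannot be modified.
    public class RefAAdapter : IProvider
    {
        private readonly RefA inner = new RefA();
        public string SomeProperty { get { return inner.NameA; } }
        public void SomeMethod() { inner.DoA(); }
    }

    public class RefBAdapter : IProvider
    {
        private readonly RefB inner = new RefB();
        public string SomeProperty { get { return inner.NameB; } }
        public void SomeMethod() { inner.DoB(); }
    }

    public class Loader
    {
        public IProvider LoadedClass;          // typed as the interface, not a concrete class

        public void Init(string version)       // version would come from Config.Version
        {
            if (version == "refA") LoadedClass = new RefAAdapter();
            else if (version == "refB") LoadedClass = new RefBAdapter();
            Run();
        }

        private void Run()
        {
            System.Console.WriteLine(LoadedClass.SomeProperty);
            LoadedClass.SomeMethod();
        }
    }
    ```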

    Read the article

  • How do I bind to a custom view in Cocoa using Xcode 4?

    - by Newt
    I'm a beginner when it comes to writing Mac apps and working with Cocoa, so please forgive my ignorance. I'm looking to create a custom view, that exposes some properties, which I can then bind to an NSObjectController. Since it's a custom view, the Bindings Inspector obviously doesn't list any of the properties I've added to the view that I can then bind to using Interface Builder. After turning to the Stackoverflow/Google for help, I've stumbled across a couple of possible solutions, but neither seem to be quite right for my situation. The first suggested creating an IBPlugin, which would then mean my bindings would be available in the Bindings Inspector. I could then bind the view to the controller using IB. Apparently IBPlugins aren't supported in Xcode 4, so that one's out the window. I'm also assuming (maybe wrongly) that IBPlugins are no longer supported because there's a better way of doing such things these days? The second option was to bind the controller to the view programmatically. I'm a bit confused as to exactly how I would achieve this. Would it require subclassing NSObjectController so I can add the calls to bind to the view? Would I need to add anything to the view to support this? Some examples I've seen say you'd need to override the bind method, and others say you don't. Also, I've noticed that some example custom views call [self exposeBinding:@"bindingName"] in the initializer. From what I gather from various sources, this is something that's related to IBPlugins and isn't something I need to do if I'm not using them. Is that correct? I've found a post on Stackoverflow here which seems to discuss something very similar to my problem, but there wasn't any clear winner as to the best answer. The last comment by noa on 12th Sept seems interesting, although they mention you should be calling exposeBinding:. Is this comment along the right track? Is the call to exposeBinding really necessary? Apologies for any dumb questions. Any help greatly appreciated.

    Read the article

  • Restoring dev db from production: Running a set of SQL scripts based on a list stored in a table?

    - by mattley
    I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc.) to bring that db schema back to what was under development. There will be lots of scripts, from a handful of different developers, and they won't all be in the same directory. My current plan is to list the scripts with their full filesystem paths in a table in a pseudo-system database. Then create a stored procedure in this database which will first run RESTORE DATABASE and then run a cursor over the list of scripts, creating a command string for SQLCMD for each script, and then executing that SQLCMD string for each script using xp_cmdshell. The sequence of cursor-sqlstring-xp_cmdshell-sqlcmd feels clumsy to me. Also, it requires turning on xp_cmdshell. I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially, a way that doesn't require xp_cmdshell?
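    One way to avoid xp_cmdshell entirely is to move the loop out of T-SQL and into a small client-side runner that reads the script list from the table and executes each file over a normal connection (the RESTORE DATABASE step could be issued the same way first). A rough C# sketch; the connection string and the dbo.PendingScripts / ScriptPath / RunOrder names are placeholders.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.IO;
    using System.Text.RegularExpressions;

    class ScriptRunnerSketch
    {
        static void Main()
        {
            const string connStr = "Server=.;Database=DevRestore;Integrated Security=true";
            var scriptPaths = new List<string>();

            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();

                // Read the ordered script list from the pseudo-system table.
                using (var listCmd = new SqlCommand(
                    "SELECT ScriptPath FROM dbo.PendingScripts ORDER BY RunOrder", conn))
                using (var reader = listCmd.ExecuteReader())
                    while (reader.Read())
                        scriptPaths.Add(reader.GetString(0));

                foreach (var path in scriptPaths)
                {
                    // Naive GO splitting; good enough for typical ALTER/INSERT scripts.
                    var batches = Regex.Split(File.ReadAllText(path),
                        @"^\s*GO\s*$", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                    foreach (var batch in batches)
                    {
                        if (batch.Trim().Length == 0) continue;
                        using (var batchCmd = new SqlCommand(batch, conn))
                            batchCmd.ExecuteNonQuery();
                    }
                }
            }

            Console.WriteLine("Applied {0} scripts.", scriptPaths.Count);
        }
    }
    ```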

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:
    1) The user's browser requests http://www.example.com/files/file1.zip
    2) Request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.
    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network, and for servers outside the network of server A tunneling will be required. For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4? I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question.. there seems to be other similar questions on here but none really clear up my confusion. I'd be really grateful if someone could confirm or correct my understanding: Java Enterprise Edition is a set of APIs for building enterprise applications, which take away the burden of developing parts of the system that aren't actually features of the application you are trying to build (i.e. messaging, transactions etc). To do this, you can use an application server, which implements these APIs. So you could use JBoss, Glassfish, WebSphere, WebLogic etc which would provide your application with these enterprise services. However, there are many other implementations of these individual services available such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB etc. You can download these implementations as Java libraries and include them in your application, and use the services they provide in a similar way to using the services provided by an application server. So if my understanding is correct, my questions are: I've read a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all? Are there any features that an application server provides which you cannot get from another source? Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ, and Hibernate? Is Spring a complete alternative to JEE? Does it ever require an application server or always just a servlet container? Why would someone choose Spring over JEE? Any help would be much appreciated!

    Read the article

  • Multithreaded Delegates/Events

    - by Matt
    I am trying to disable parts of the UI in a .NET app based on polling done on a background thread. The background thread checks to see if the global database connection the app uses is still open and operable. What I need to do, preferably without polling on the UI thread, is to add an event handler that can be raised by the background thread if the connection status changes. That way, any form can have a handler that will disable those parts of the UI that require the connection to function. Attempting to use a straight event declaration in the class that holds the thread sub, and raising the event from the background thread, causes cross-thread execution errors about accessing UI controls from other threads. Obviously, there is a correct way to do this, but we have limited experience with events (single-threaded apps mainly), and almost none with delegates. I've read through documentation and examples for delegates, and they seem to be closer to what we need, but I'm not sure how to make them work in this instance. The app is written mainly in VB.NET, but an example or help in C# is fine too.
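    The usual pattern here is to let the background thread raise the event normally and have the form's handler marshal itself onto the UI thread with InvokeRequired/BeginInvoke before touching any controls. A hedged WinForms sketch in C# (the asker says C# is fine); ConnectionMonitor, MainForm and saveButton are invented names standing in for the real classes and controls.

    ```csharp
    using System;
    using System.Windows.Forms;

    public class ConnectionMonitor
    {
        // Raised from the background polling thread whenever the status changes.
        public event EventHandler ConnectionStatusChanged;

        // The polling thread calls this when it detects a change.
        public void ReportStatusChange()
        {
            var handler = ConnectionStatusChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }

    public class MainForm : Form
    {
        private readonly Button saveButton = new Button();            // stands in for the real controls
        private readonly ConnectionMonitor monitor = new ConnectionMonitor();

        public MainForm()
        {
            Controls.Add(saveButton);
            monitor.ConnectionStatusChanged += OnConnectionStatusChanged;
        }

        private void OnConnectionStatusChanged(object sender, EventArgs e)
        {
            // The event arrives on the background thread; hop onto the UI thread
            // before touching any controls.
            if (InvokeRequired)
            {
                BeginInvoke(new EventHandler(OnConnectionStatusChanged), sender, e);
                return;
            }
            saveButton.Enabled = false;   // disable whatever needs the connection
        }
    }
    ```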

    Read the article

  • Starting an STA thread, but with parameters to the final function

    - by DRapp
    I'm a bit weak on how some delegates behave, such as passing a method as the parameter to be invoked. While trying to write some NUnit test scripts, I have something that I need to run many tests with. Each of these tests requires a GUI to be created and thus needs an STA thread. So, I have something like:

        public class MyTest
        {
            // the Delegate "ThreadStart" is part of the System.Threading namespace and is defined as
            // public delegate void ThreadStart();
            protected void Start_STA_Thread(ThreadStart whichMethod)
            {
                Thread thread = new Thread(whichMethod);
                thread.SetApartmentState(ApartmentState.STA); //Set the thread to STA
                thread.Start();
                thread.Join();
            }

            [Test]
            public void Test101()
            {
                // Since the thread issues an INVOKE of a method, I'm having it call the
                // corresponding "FromSTAThread" method, such as
                Start_STA_Thread( Test101FromSTAThread );
            }

            protected void Test101FromSTAThread()
            {
                MySTA_RequiredClass oTmp = new MySTA_RequiredClass();
                Assert.IsTrue( oTmp.DoSomething() );
            }
        }

    This part all works fine... Now the next step. I now have a different set of tests that ALSO require an STA thread. However, each "thing" I need to do requires two parameters, both strings (for this case). How do I go about declaring a proper delegate so I can pass in the method I need to invoke AND the two string parameters in one shot? I may have 20+ tests to run in this pattern and may in future have other similar tests with different parameter counts and types of parameters too. Thanks.
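    One way to avoid declaring a new delegate type for every parameter count is to close over the arguments with a lambda, which still matches the parameterless ThreadStart signature. A short sketch (test and file names are made up; the NUnit [Test] attribute is omitted to keep it self-contained):

    ```csharp
    using System.Threading;

    public class MyParameterisedTests
    {
        protected void Start_STA_Thread(ThreadStart whichMethod)
        {
            Thread thread = new Thread(whichMethod);
            thread.SetApartmentState(ApartmentState.STA);
            thread.Start();
            thread.Join();
        }

        // No new delegate type is needed: the lambda closes over the two strings
        // and still matches the parameterless ThreadStart signature.
        public void Test202()   // [Test] attribute would go back on in the real fixture
        {
            Start_STA_Thread(() => Test202FromSTAThread("input.xml", "expected.txt"));
        }

        protected void Test202FromSTAThread(string inputFile, string expectedFile)
        {
            // ... build the GUI object here and assert against the two files ...
        }
    }
    ```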

    Read the article

  • @selector and return value

    - by user320926
    The idea is very easy: I have an HTTP download class, and this class must support HTTP authentication. But it's basically a background thread, so I would like to avoid prompting directly to the screen; instead I would like to use a delegate method to request the credentials from outside the class, e.g. from a viewController. But I don't know if this is possible or if I have to use a different syntax. This class uses this delegate protocol:

        //Updater.h
        @protocol Updater <NSObject>
        -(NSDictionary *)authRequired;
        @optional
        -(void)statusUpdate:(NSString *)newStatus;
        -(void)downloadProgress:(int)percentage;
        @end

        @interface Updater : NSThread {
        ...
        }

    This is the call to the delegate method:

        //Updater.m
        // This check always fails :(
        if ([self.delegate respondsToSelector:@selector(authRequired:)]) {
            auth = [delegate authRequired];
        }

    This is the implementation of the delegate method:

        //rootViewController.m
        -(NSDictionary *)authRequired;
        {
            // TODO: some kind of popup or modal view
            NSMutableDictionary *ret=[[NSMutableDictionary alloc] init];
            [ret setObject:@"utente" forKey:@"user"];
            [ret setObject:@"password" forKey:@"pass"];
            return ret;
        }

    Read the article

  • SQL query for an access database needed

    - by masfenix
    Hey guys, first of all, sorry - I can't log in using my Yahoo provider. Anyway, I have this problem. Let me explain it to you, and then I'll show you a picture. I have an Access db table. It has 'report id', 'recipient id', 'recipient name' and 'report req'. What the table "means" is: does the user using that report still require it, or can we decommission it? Here is what the data looks like (company userids and usernames blocked out): check the link below, I can't post pictures because the Yahoo OpenID provider isn't working. So basically I need to have 3 select queries:
    1) Select all the reports where, for each report, ALL the users have said no to 'reportreq'. In plain English, I want a listing of all the reports that we have to decommission because no user wants them.
    2) Select all the reports where the report is required and the batchprintcopy is more than 0. This way we can see which reports need to be printed and save paper instead of printing all the reports.
    3) A listing of all the reports where the reportreq field is empty. I think I can figure this one out myself.
    This is using Access/VBA and the data will be exported to an Excel spreadsheet. I just need a simple query if it exists, OR an algorithm to do it quickly. I just tried making a "matrix" and it took about 2 hours to populate. https://docs.google.com/uc?id=0B2EMqbpeBpQkMTIyMzA5ZjMtMGQ3Zi00NzRmLWEyMDAtODcxYWM0ZTFmMDFk&hl=en_US

    Read the article

  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer to peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique ID's. I want to make sure I'll never run out of ID's, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions... 1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use firebird for now. But I may switch later. Or possibly support multiple db's. So I'm looking for generalizations, if that's possible. 2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway? My varchar keys won't have any meaning, except for the unique system ID part. But I may want to obscure that somehow. Also, I plan to efficiently use all the bits of each character. I don't, for example, plan to code the integer 123 as the character string "123". So I don't think varchars will require more space than integers.

    Read the article

  • In Django, how to define a "location path" in order to display it to the users?

    - by naw
    I want to put a "location path" in my pages showing where the user is. Supposing that the user is watching one product, it could be Index > Products > ProductName where each word is also a link to other pages. I was thinking on passing to the template a variable with the path like [(_('Index'), 'index_url_name'), (_('Products'), 'products_list_url_name'), (_('ProductName'), 'product_url_name')] But then I wonder where and how would you define the hierarchy without repeating myself (DRY)? As far I know I have seen two options To define the hierarchy in the urlconf. It could be a good place since the URL hierarchy should be similar to the "location path", but I will end repeating fragments of the paths. To write a context processor that guesses the location path from the url and includes the variable in the context. But this would imply to maintain a independient hierarchy wich will need to be kept in sync with the urls everytime I modify them. Also, I'm not sure about how to handle the urls that require parameters. Do you have any tip or advice about this? Is there any canonical way to do this?

    Read the article

  • JQuery Delegate and using traversal options in function

    - by Brian
    I am having trouble figuring out how to use the JQuery delegate function to do what I require. Basically, I need to allow users to add Panels (i.e. divs) to a form dynamically by selecting a button. Then when a user clicks a button within a given Panel, I want to be able to do something to that Panel (like change the color in this example). Unfortunately, it seems that references to the JQuery traversing functions don't work in this instance. Can anybody explain how to achieve this effect? Is there any way to bind a different delegate to each panel as it's added?

        $('.addPanels').delegate('*', 'click', function() {
            $(this).parent.css('background-color', 'black');
            $('.placeholder').append('Add item');
        });

        <div class="addPanels">
            <div class="panel">
                <a href="#" class="addLink">Add item</a> text</div>
            <div class="placeholder"/>
        </div>
        </div>

    Read the article

  • Implementing client callback functionality in WCF

    - by PoweredByOrange
    The project I'm working on is a client-server application with all services written in WCF and the client in WPF. There are cases where the server needs to push information to the client. I initially thought about using WCF Duplex Services, but after doing some research online, I figured a lot of people are avoiding it for many reasons. The next thing I thought about was having the client create a host connection, so that the server could use that to make a service call to the client. The problem, however, is that the application is deployed over the internet, so that approach requires configuring the firewall to allow incoming traffic, and since most of the users are regular users, that might also require configuring the router to allow port forwarding, which again is a hassle for the user. My third option is that the client spawns a background thread which makes a call to the GetNotifications() method on the server. This method on the server side then blocks until an actual notification is created, then the thread is notified (using an AutoResetEvent object maybe?) and the information gets sent to the client. The idea is something like this:

    Client:

        private void InitializeListener()
        {
            Task.Factory.StartNew(() =>
            {
                while (true)
                {
                    var notification = server.GetNotifications();
                    // Display the notification.
                }
            }, CancellationToken.None, TaskCreationOptions.LongRunning, TaskScheduler.Default);
        }

    Server:

        public NotificationObject GetNotifications()
        {
            while (true)
            {
                notificationEvent.WaitOne();
                return someNotificationObject;
            }
        }

        private void NotificationCreated()
        {
            // Inform the client of this event.
            notificationEvent.Set();
        }

    In this case, NotificationCreated() is a callback method called when the server needs to send information to the client. What do you think about this approach? Is this scalable at all?

    Read the article

  • Rails upload file to ftp server

    - by Bob
    I'm on Rails 2.3.5 and Ruby 1.8.6 and trying to figure out how to let a user upload a file to an FTP server on a different machine than my Rails app. Also, my Rails app will be hosted on Heroku, which doesn't facilitate writing files to the local filesystem.

    index.html.erb

        <% form_tag '/ftp/upload', :method => :post, :multipart => true do %>
          <label for="file">File to Upload</label>
          <%= file_field_tag "file" %>
          <%= submit_tag 'Upload' %>
        <% end %>

    ftp_controller.rb

        require 'net/ftp'

        class FtpController < ApplicationController
          def upload
            file = params[:file]
            ftp = Net::FTP.new('remote-ftp-server')
            ftp.login(user = "***", passwd = "***")
            ftp.puttextfile(file.read, File.basename(file.original_filename))
            ftp.quit()
          end

          def index
          end
        end

    Currently I'm just trying to get the Rails app to work on my Windows laptop. With the above code, I'm getting this error:

        Errno::ENOENT in FtpController#upload
        No such file or directory -.... followed by a dump of the file contents

    Does anyone know what's going on?

    Read the article

  • Algorithm for generating an array of non-equal costs for a transport problem optimization

    - by Carlos
    I have an optimizer that solves a transportation problem, using a cost matrix of all the possible paths. The optimiser works fine, but if two of the costs are equal, the solution contains one more path than the minimum number of paths. (Think of it as load balancing routers; if two routes are the same cost, you'll use them both.) I would like the minimum number of routes, and to do that I need a cost matrix that doesn't have two costs that are equal within a certain tolerance. At the moment, I'm passing the cost matrix through a baking function which tests every entry for equality to each of the other entries, and moves it a fixed percentage if it matches. However, this approach seems to require N^2 comparisons, and if the starting values are all the same, the last cost will be r^N bigger (r is the arbitrary fixed percentage). Also there is the problem that by multiplying by the percentage, you can end up on top of another value. So the problem seems to have an element of recursion, or at least repeated checking, which bloats the code. The current implementation is basically not very good (I won't paste my GOTO-using code here for you all to mock), and I'd like to improve it. Is there a name for what I'm after, and is there a standard implementation? Example: {1,1,2,3,4,5} (tol = 0.05) becomes {1,1.05,2,3,4,5}
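    One possible shape for this, assuming all costs are positive: sort once, then sweep upward so each value is nudged to at least (1 + tol) times its predecessor. That avoids both the N^2 comparisons and the compounding r^N growth, and reproduces the question's example. A C# sketch:

    ```csharp
    using System;
    using System.Linq;

    static class CostSeparation
    {
        // Returns a copy of the costs in which consecutive sorted values differ by
        // at least a factor of (1 + tol), nudging values upward as little as possible.
        // One sort plus one sweep: O(N log N), no pairwise comparisons.
        public static double[] Separate(double[] costs, double tol)
        {
            // Sort indices so nudged values can be written back to their original slots.
            int[] order = Enumerable.Range(0, costs.Length)
                                    .OrderBy(i => costs[i]).ToArray();
            var result = (double[])costs.Clone();

            for (int k = 1; k < order.Length; k++)
            {
                double prev = result[order[k - 1]];
                double floor = prev * (1.0 + tol);      // smallest admissible next value
                if (result[order[k]] <= floor)
                    result[order[k]] = floor;
            }
            return result;
        }
    }

    // Example from the question:
    // CostSeparation.Separate(new[] { 1.0, 1, 2, 3, 4, 5 }, 0.05) -> { 1, 1.05, 2, 3, 4, 5 }
    ```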

    Read the article

  • TouchXML to read in twitter feed for iphone app

    - by Fiona
    Hello there, so I've managed to get the feed from twitter and am attempting to parse it. I only require the following fields from the feed: name, description, time_zone and created_at. I am successfully pulling out name and description; however, time_zone and created_at are always nil. The following is the code. Anyone see why this might not be working?

        -(void) friends_timeline_callback:(NSData *)data{
            NSString *string = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
            NSLog(@"Data from twitter: %@", string);
            NSMutableArray *res = [[NSMutableArray alloc] init];
            CXMLDocument *doc = [[[CXMLDocument alloc] initWithData:data options:0 error:nil] autorelease];
            NSArray *nodes = nil;
            //! searching for item nodes
            nodes = [doc nodesForXPath:@"/statuses/status/user" error:nil];
            for (CXMLElement *node in nodes) {
                int counter;
                Contact *contact = [[Contact alloc] init];
                for (counter = 0; counter < [node childCount]; counter++) {
                    //pulling out name and description only for the minute!!!
                    if ([[[node childAtIndex:counter] name] isEqual:@"name"]){
                        contact.name = [[node childAtIndex:counter] stringValue];
                    }else if ([[[node childAtIndex:counter] name] isEqual:@"description"]) {
                        // common procedure: dictionary with keys/values from XML node
                        if ([[node childAtIndex:counter] stringValue] == NULL){
                            contact.nextAction = @"No description";
                        }else{
                            contact.nextAction = [[node childAtIndex:counter] stringValue];
                        }
                    }else if ([[[node childAtIndex:counter] name] isEqual:@"created_at"]){
                        contact.date == [[node childAtIndex:counter] stringValue];
                    }else if([[[node childAtIndex:counter] name] isEqual:@"time_zone"]){
                        contact.status == [[node childAtIndex:counter] stringValue];
                        [res addObject:contact];
                        [contact release];
                    }
                }
            }
            self.contactsArray = res;
            [res release];
            [self.tableView reloadData];
        }

    Thanks in advance for your help!! Fiona

    Read the article

  • Testing ActionMailer's receive method (Rails)

    - by Brian Armstrong
    There is good documentation out there on testing ActionMailer send methods which deliver mail. But I'm unable to figure out how to test a receive method that is used to parse incoming mail. I want to do something like this:

        require 'test_helper'

        class ReceiverTest < ActionMailer::TestCase
          test "parse incoming mail" do
            email = TMail::Mail.parse(File.open("test/fixtures/emails/example1.txt",'r').read)
            assert_difference "ProcessedMail.count" do
              Receiver.receive email
            end
          end
        end

    But I get the following error on the line which calls Receiver.receive:

        NoMethodError: undefined method `index' for #<TMail::Mail:0x102c4a6f0>
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:128:in `gets'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:392:in `parse_header'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:139:in `initialize'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:43:in `open'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/port.rb:340:in `ropen'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:138:in `initialize'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `new'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `parse'
        /Library/Ruby/Gems/1.8/gems/actionmailer-2.3.4/lib/action_mailer/base.rb:417:in `receive'

    Tmail is parsing the test file I have correctly. So that's not it. Thanks!

    Read the article

  • Starting a code library.

    - by Rob Stevenson-Leggett
    Hi, I've been meaning to start a library of reusable code snippets for a while and never seem to get round to it. I think my main problems are:
    - Where to start.
    - What structure should my library take? Should it be a compiled library (where appropriate) or just classes I can drop into any project? Or a library project that can be included? In my experience, a built library will quickly become out of date and the source will get lost. So I'm leaning towards source libraries that I can export from SVN and include in any project.
    - Intellectual property. I am employed, so a lot of the code I write is not my IP. How can I ensure that I don't give my own IP away using it on projects in work and at home? I'm thinking the best way would be to licence my library with an open source licence and make sure I only add to it in my own time using my own equipment, therefore making sure that if I use it in a work project the same rules apply as if I was using a third party library.
    - I write in many different languages and often would require two or more parts of this library. Should I look at implementing a few template projects and a core project for each of my chosen reusable components and languages?
    Has anyone else got this sort of library, and how do you organise and update it?

    Read the article

  • Pass an array from IronRuby to C#

    - by cgyDeveloper
    I'm sure this is an easy fix and I just can't find it, but here goes: I have a C# class (let's call it Test) in an assembly (let's say SOTest.dll). Here is something along the lines of what I'm doing:

        private List<string> items;

        public List<string> list_items()
        {
            return this.items;
        }

        public void set_items(List<string> new_items)
        {
            this.items = new_items;
        }

    In the IronRuby interpreter I run:

        >>> require "SOTest.dll"
        true
        >>> include TestNamespace
        Object
        >>> myClass = Test.new
        TestNamespace.Test
        >>> myClass.list_items()
        ['Apples', 'Oranges', 'Pears']
        >>> myClass.set_items ['Peaches', 'Plums']
        TypeError: can't convert Array into System::Collections::Generic::List(string)

    I get a similar error whether I make the argument a 'List<string>', 'List<object>' or 'string[]'. What is the proper syntax? I can't find a documented mapping of types anywhere (because it's likely too complicated to define in certain scenarios given what Ruby can do).
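    I can't vouch for IronRuby's exact conversion rules here, so treat this as a hedged workaround sketch: if the C# side can be changed, adding a loosely-typed setter that accepts a plain non-generic IEnumerable and converts element by element sidesteps the need to build a List<string> from Ruby at all. The extra method name below is invented to keep overload resolution out of the picture.

    ```csharp
    using System.Collections;
    using System.Collections.Generic;

    public class Test
    {
        private List<string> items = new List<string> { "Apples", "Oranges", "Pears" };

        public List<string> list_items()
        {
            return this.items;
        }

        // Existing strongly-typed setter, unchanged.
        public void set_items(List<string> new_items)
        {
            this.items = new_items;
        }

        // Loosely-typed setter: accepts any enumerable and converts element by element,
        // so the dynamic caller never has to construct a List<string> itself.
        public void set_items_from(IEnumerable new_items)
        {
            var converted = new List<string>();
            foreach (var item in new_items)
                converted.Add(item == null ? null : item.ToString());
            this.items = converted;
        }
    }
    ```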

    Read the article
