Search Results

Search found 11972 results on 479 pages for 'writing'.

  • objective-c Add to/Edit .plist file

    - by Dave
    Does writeToFile:atomically: add data to an existing .plist? Is it possible to modify a value in a .plist? Should the app be recompiled for the change to take effect? I have a custom .plist in my app. The structure is as below:

        <array>
            <dict>
                <key>Title</key>
                <string>MyTitle</string>
                <key>Measurement</key>
                <dict>
                    <key>prop1</key>
                    <real>28.86392</real>
                    <key>prop2</key>
                    <real>75.12451</real>
                </dict>
                <key>Distance</key>
                <dict>
                    <key>prop3</key>
                    <real>37.49229</real>
                    <key>prop4</key>
                    <real>58.64502</real>
                </dict>
            </dict>
        </array>

    The array tag holds multiple items with the same structure. I need to add items to the existing .plist. Is that possible? I can write a UIView to do just that, if so.

    EDIT - OK, I just tried writeToFile: hoping it would at least overwrite if not add to it. Strangely, the following code selects a different path for reading than for writing:

        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];

    The .plist file is in my project. I can read from it. When I write to it, it creates a .plist file and saves it in my /users/.../library/! Does this make sense to anyone?

  • Connection Refused running multiple environments on Selenium Grid 1.04 via Ubuntu 9.04

    - by ReadyWater
    Hello, I'm writing a selenium grid test suite which is going to be run on a series of different machines. I wrote most of it on my macbook but have recently transfered it over to my work machine, which is running ubuntu 9.04. That's actually my first experience with a linux machine, so I may be missing something very simple (I have disabled the firewall though). I haven't been able to get the multienvironment thing working at all, and I've been trying and manual reviewing for a while. Any recommendations and help would be greatly, greatly appreciated! The error I'm getting when I run the test is: [java] FAILED CONFIGURATION: @BeforeMethod startFirstEnvironment("localhost", 4444, "*safari", "http://remoteURL:8080/tutor") [java] java.lang.RuntimeException: Could not start Selenium session: ERROR: Connection refused I thought it might be the mac refusing the connection, but using wireshark I determined that no connection attempt was made on the mac . Here's the code for setting up the session, which is where it seems to be dying @BeforeMethod(groups = {"default", "example"}, alwaysRun = true) @Parameters({"seleniumHost", "seleniumPort", "firstEnvironment", "webSite"}) protected void startFirstEnvironment(String seleniumHost, int seleniumPort, String firstEnvironment, String webSite) throws Exception { try{ startSeleniumSession(seleniumHost, seleniumPort, firstEnvironment, webSite); session().setTimeout(TIMEOUT); } finally { closeSeleniumSession(); } } @BeforeMethod(groups = {"default", "example"}, alwaysRun = true) @Parameters({"seleniumHost", "seleniumPort", "secondEnvironment", "webSite"}) protected void startSecondEnvironment(String seleniumHost, int seleniumPort, String secondEnvironment, String webSite) throws Exception { try{ startSeleniumSession(seleniumHost, seleniumPort, secondEnvironment, webSite); session().setTimeout(TIMEOUT); } finally { closeSeleniumSession(); } } and the accompanying build script used to run the test <target name="runMulti" depends="compile" description="Run Selenium tests in parallel (20 threads)"> <echo>${seleniumHost}</echo> <java classpathref="runtime.classpath" classname="org.testng.TestNG" failonerror="true"> <sysproperty key="java.security.policy" file="${rootdir}/lib/testng.policy"/> <sysproperty key="webSite" value="${webSite}" /> <sysproperty key="seleniumHost" value="${seleniumHost}" /> <sysproperty key="seleniumPort" value="${seleniumPort}" /> <sysproperty key="firstEnvironment" value="${firstEnvironment}" /> <sysproperty key="secondEnvironment" value="${secondEnvironment}" /> <arg value="-d" /> <arg value="${basedir}/target/reports" /> <arg value="-suitename" /> <arg value="Selenium Grid Java Sample Test Suite" /> <arg value="-parallel"/> <arg value="methods"/> <arg value="-threadcount"/> <arg value="15"/> <arg value="testng.xml"/> </java>

  • Data access pattern

    - by andlju
    I need some advice on what kind of pattern(s) I should use for pushing/pulling data into my application. I'm writing a rule-engine that needs to hold quite a large amount of data in-memory in order to be efficient enough. I have some rather conflicting requirements; It is not acceptable for the engine to always have to wait for a full pre-load of all data before it is functional. Only fetching and caching data on-demand will lead to the engine taking too long before it is running quickly enough. An external event can trigger the need for specific parts of the data to be reloaded. Basically, I think I need a combination of pushing and pulling data into the application. A simplified version of my current "pattern" looks like this (in psuedo-C# written in notepad): // This interface is implemented by all classes that needs the data interface IDataSubscriber { void RegisterData(Entity data); } // This interface is implemented by the data access class interface IDataProvider { void EnsureLoaded(Key dataKey); void RegisterSubscriber(IDataSubscriber subscriber); } class MyClassThatNeedsData : IDataSubscriber { IDataProvider _provider; MyClassThatNeedsData(IDataProvider provider) { _provider = provider; _provider.RegisterSubscriber(this); } public void RegisterData(Entity data) { // Save data for later StoreDataInCache(data); } void UseData(Key key) { // Make sure that the data has been stored in cache _provider.EnsureLoaded(key); Entity data = GetDataFromCache(key); } } class MyDataProvider : IDataProvider { List<IDataSubscriber> _subscribers; // Make sure that the data for key has been loaded to all subscribers public void EnsureLoaded(Key key) { if (HasKeyBeenMarkedAsLoaded(key)) return; PublishDataToSubscribers(key); MarkKeyAsLoaded(key); } // Force all subscribers to get a new version of the data for key public void ForceReload(Key key) { PublishDataToSubscribers(key); MarkKeyAsLoaded(key); } void PublishDataToSubscribers(Key key) { Entity data = FetchDataFromStore(key); foreach(var subscriber in _subscribers) { subscriber.RegisterData(data); } } } // This class will be spun off on startup and should make sure that all data is // preloaded as quickly as possible class MyPreloadingThread { IDataProvider _provider; MyPreloadingThread(IDataProvider provider) { _provider = provider; } void RunInBackground() { IEnumerable<Key> allKeys = GetAllKeys(); foreach(var key in allKeys) { _provider.EnsureLoaded(key); } } } I have a feeling though that this is not necessarily the best way of doing this.. Just the fact that explaining it seems to take two pages feels like an indication.. Any ideas? Any patterns out there I should have a look at?

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a Neural Network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks. This uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help on choosing which is best:

    1. Save each network to a separate XML file with a name that is stored in the database. Load this each time.
    2. Save all the networks to the same XML file with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.

    I am able to do 1 or 2, but I think their performance will be quite limited and I am on shared hosting at the moment, so I don't know how pleased they would be with thousands of files. Also, after adding a few thousand records to one XML file, I was noticing a massive performance hit on saving to it.

    If I were able to implement version 3 somehow I think it would be best. No issues with separate processes accessing the database and I think performance would be better. Not to mention having no files lying around. However, the stuff in the neural network framework I am using (Encog) for saving to a file needs access to a Java file object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above 3 ideas or any alternatives. Thanks!
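    A minimal sketch of the temporary-file round trip hinted at in idea 3. Everything here is illustrative: FilePersistable stands in for whatever save/load calls the persistence framework actually exposes, and the returned byte array is what would be stored in the database as a blob.

        import java.io.File;
        import java.io.IOException;
        import java.nio.file.Files;

        public class NetworkBlobStore {

            // Stand-in for the framework object that insists on a java.io.File.
            interface FilePersistable {
                void saveTo(File file) throws IOException;
                void loadFrom(File file) throws IOException;
            }

            // Serialize via a temp file and hand back the bytes to store in the database.
            public static byte[] toBytes(FilePersistable network) throws IOException {
                File tmp = File.createTempFile("network", ".xml");
                try {
                    network.saveTo(tmp);
                    return Files.readAllBytes(tmp.toPath());
                } finally {
                    tmp.delete();
                }
            }

            // Rebuild the framework object from bytes previously read out of the database.
            public static void fromBytes(FilePersistable network, byte[] blob) throws IOException {
                File tmp = File.createTempFile("network", ".xml");
                try {
                    Files.write(tmp.toPath(), blob);
                    network.loadFrom(tmp);
                } finally {
                    tmp.delete();
                }
            }
        }

    The temp file only lives for the duration of the call, so nothing accumulates on the shared host; the trade-off is an extra disk round trip per save or load.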

  • How should I provide access to this custom DAL?

    - by Casey
    I'm writing a custom DAL (VB.NET) for an ordering system project. I'd like to explain how it is coded now, and receive some alternate ideas to make coding against the DAL easier/more readable.

    The DAL is part of an n-tier (not n-layer) application, where each tier is in its own assembly/DLL. The DAL consists of several classes that have specific behavior. For instance, there is an Order class that is responsible for retrieving and saving orders. Most of the classes have only two methods, a "Get" and a "Save," with multiple overloads for each. These classes are marked as Friend and are only visible to the DAL (which is in its own assembly). In most cases, the DAL returns what I will call a "Data Object." This object is a class that contains only data and validation, and is located in a common assembly that both the BLL and DAL can read.

    To provide public access to the DAL, I currently have a static (module) class that has many shared members. A simplified version looks something like this:

        Public Class DAL
            Private Sub New
            End Sub

            Public Shared Function GetOrder(OrderID as String) as OrderData
                Dim OrderGetter as New OrderClass
                Return OrderGetter.GetOrder(OrderID)
            End Function
        End Class

        Friend Class OrderClass
            Friend Function GetOrder(OrderID as string) as OrderData
            End Function
        End Class

    The BLL would call for an order like this:

        DAL.GetOrder("123456")

    As you can imagine, this gets cumbersome very quickly. I'm mainly interested in structuring access to the DAL so that Intellisense is very intuitive. As it stands now, there are too many methods/functions in the DAL class with similar names. One idea I had is to break down the DAL into nested classes:

        Public Class DAL
            Private Sub New
            End Sub

            Public Class Orders
                Private Sub New
                End Sub

                Public Shared Function Get(OrderID as string) as OrderData
                End Function
            End Class
        End Class

    So the BLL would call like this:

        DAL.Orders.Get("12345")

    This cleans it up a bit, but it leaves a lot of classes that only have references to other classes, which I don't like for some reason. Without resorting to passing DB specific instructions (like where clauses) from BLL to DAL, what is the best or most common practice for providing a single point of access for the DAL?
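    For comparison, a rough sketch of one alternative shape for the accessor layer, written in Java here purely to illustrate the idea (all names are invented): a single facade with one accessor per aggregate, each returning an interface, so the call still reads like DAL.Orders.Get(...) but without classes whose only job is to hold other classes.

        // Illustrative only - not the project's actual DAL.
        public final class Dal {
            private Dal() { }

            // One small interface per aggregate root.
            public interface OrderRepository {
                OrderData get(String orderId);
                void save(OrderData order);
            }

            // Single point of access: Dal.orders().get("123456")
            public static OrderRepository orders() {
                return ORDERS;
            }

            private static final OrderRepository ORDERS = new OrderRepository() {
                public OrderData get(String orderId) {
                    // database read for one order would go here
                    return null;
                }
                public void save(OrderData order) {
                    // database write for one order would go here
                }
            };

            // Stand-in for the shared "Data Object" described above.
            public static class OrderData { }
        }

    IntelliSense then offers the aggregates at the first step (orders, customers, ...) and only the verbs at the second, which keeps the top-level surface small.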

  • Django conditional template inheritance

    - by Ed
    I have a template that displays object elements with hyperlinks to other parts of my site. I have another function that displays past versions of the same object. In this display, I don't want the hyperlinks. I'm under the assumption that I can't dynamically switch off the hyperlinks, so I've included both versions in the same template. I use an if statement to either display the hyperlinked version or the plain text version. I prefer to keep them in the same template because if I need to change the format of one, it will be easy to apply it to the other right there.

    The template extends framework.html. Framework has a breadcrumb system and it extends base.html. Base has a simple top menu system. So here's my dilemma. When viewing the standard hyperlink data, I want to see the top menu and the breadcrumbs. But when viewing the past version plain text data, I only want the data, no menu, no breadcrumbs. I'm unsure if this is possible given my current design.

    I tried having framework inherit the primary template so that I could choose to call either framework (and display the breadcrumbs), or the template itself, thus skipping the breadcrumbs, but I want framework.html available for other templates as well. If framework.html extends a specific template, I lose the ability to display it in other templates.

    I tried writing an if statement that would display the top_menu block and the nav_menu block from base.html and framework.html respectively. This would overwrite their blocks and allow me to turn off those elements conditional on the if. Unfortunately, it doesn't appear to be conditional; if the block elements are in the template at all, surrounded by an if or not, I lose the menus.

    I thought about using {% include %} to pick up the breadcrumbs and a split out top menu. In that case though, I'll have to include it all the time. No more inheritance. Is this the best option given my requirement?

  • Is it possible to change HANDLE that has been opened for synchronous I/O to be opened for asynchrono

    - by Martin Dobšík
    Dear all,

    Most of my daily programming work in Windows is nowadays around I/O operations of all kinds (pipes, consoles, files, sockets, ...). I am well aware of the different methods of reading and writing from/to different kinds of handles (synchronous, asynchronous waiting for completion on events, waiting on file HANDLEs, I/O completion ports, and alertable I/O). We use many of those. For some of our applications it would be very useful to have only one way to treat all handles. I mean, the program may not know what kind of handle it has received and we would like to use, let's say, I/O completion ports for all.

    So first I would ask: let's assume I have a handle:

        HANDLE h;

    which has been received by my process for I/O from somewhere. Is there any easy and reliable way to find out what flags it has been created with? The main flag in question is FILE_FLAG_OVERLAPPED. The only way known to me so far is to try to register such a handle into an I/O completion port (using CreateIoCompletionPort()). If that succeeds, the handle has been created with FILE_FLAG_OVERLAPPED. But then only the I/O completion port must be used, as the handle can not be unregistered from it without closing the HANDLE h itself.

    Provided there is an easy way to determine the presence of FILE_FLAG_OVERLAPPED, there would come my second question: is there any way to add such a flag to an already existing handle? That would make a handle that was originally opened for synchronous operations open for asynchronous ones. Would there be a way to do the opposite (remove the FILE_FLAG_OVERLAPPED to create a synchronous handle from an asynchronous one)?

    I have not found any direct way after reading through MSDN and googling a lot. Would there be at least some trick that could do the same? Like re-creating the handle in the same way using the CreateFile() function or something similar? Something even partly documented or not documented at all?

    The main place where I would need this is to determine (or change) the way the process should read/write from handles sent to it by third party applications. We can not control how third party products create their handles.

    Dear Windows gurus: help please!

    With regards
    Martin

  • Handling Exceptions for ThreadPoolExecutor

    - by HonorGod
    I have the following code snippet that basically scans through the list of tasks that need to be executed, and each task is then given to the executor for execution. The JobExecutor in turn creates another executor (for doing db stuff... reading and writing data to a queue) and completes the task. JobExecutor returns a Future for the tasks submitted. When one of the tasks fails, I want to gracefully interrupt all the threads and shut down the executor by catching all the exceptions. What changes do I need to make?

        public class DataMovingClass {
            private static final AtomicInteger uniqueId = new AtomicInteger(0);
            private static final ThreadLocal<Integer> uniqueNumber = new IDGenerator();

            ThreadPoolExecutor threadPoolExecutor = null;
            private List<Source> sources = new ArrayList<Source>();

            private static class IDGenerator extends ThreadLocal<Integer> {
                @Override
                public Integer get() {
                    return uniqueId.incrementAndGet();
                }
            }

            public void init() {
                // load sources list
            }

            public boolean execute() {
                boolean success = true;
                threadPoolExecutor = new ThreadPoolExecutor(10, 10, 10, TimeUnit.SECONDS,
                        new ArrayBlockingQueue<Runnable>(1024),
                        new ThreadFactory() {
                            public Thread newThread(Runnable r) {
                                Thread t = new Thread(r);
                                t.setName("DataMigration-" + uniqueNumber.get());
                                return t;
                            }
                        },
                        new ThreadPoolExecutor.CallerRunsPolicy());

                List<Future<Boolean>> result = new ArrayList<Future<Boolean>>();
                for (Source source : sources) {
                    result.add(threadPoolExecutor.submit(new JobExecutor(source)));
                }

                for (Future<Boolean> jobDone : result) {
                    try {
                        if (!jobDone.get(100000, TimeUnit.SECONDS) && success) {
                            // in case of successful DbWriterClass, we don't need to change it.
                            success = false;
                        }
                    } catch (Exception ex) {
                        // handle exceptions
                    }
                }
                return success;
            }

            public class JobExecutor implements Callable<Boolean> {
                private ThreadPoolExecutor threadPoolExecutor;
                Source jobSource;

                public JobExecutor(Source source) {
                    this.jobSource = source;
                    threadPoolExecutor = new ThreadPoolExecutor(10, 10, 10, TimeUnit.SECONDS,
                            new ArrayBlockingQueue<Runnable>(1024),
                            new ThreadFactory() {
                                public Thread newThread(Runnable r) {
                                    Thread t = new Thread(r);
                                    t.setName("Job Executor-" + uniqueNumber.get());
                                    return t;
                                }
                            },
                            new ThreadPoolExecutor.CallerRunsPolicy());
                }

                public Boolean call() throws Exception {
                    boolean status = true;
                    System.out.println("Starting Job = " + jobSource.getName());
                    try {
                        // do the specified task
                    } catch (InterruptedException intrEx) {
                        logger.warn("InterruptedException", intrEx);
                        status = false;
                    } catch (Exception e) {
                        logger.fatal("Exception occurred while executing task " + jobSource.getName(), e);
                        status = false;
                    }
                    System.out.println("Ending Job = " + jobSource.getName());
                    return status;
                }
            }
        }
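    On the "gracefully interrupt everything when one task fails" part, a stripped-down sketch of one option, kept separate from the classes above (runJobs, the pool size and timeouts are made up for the example): cancel the outstanding futures with interruption and shut the pool down, which only helps if the tasks themselves check for interruption.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class FailFastRunner {

            // Submits all jobs, then cancels everything still pending as soon as one fails.
            public static boolean runJobs(List<Callable<Boolean>> jobs) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(10);
                List<Future<Boolean>> futures = new ArrayList<>();
                for (Callable<Boolean> job : jobs) {
                    futures.add(pool.submit(job));
                }

                boolean success = true;
                for (Future<Boolean> f : futures) {
                    try {
                        if (!f.get()) {          // blocks until this job finishes
                            success = false;
                        }
                    } catch (ExecutionException ex) {
                        success = false;         // the job threw; treat as failure
                    }
                    if (!success) {
                        // Interrupt everything that is still queued or running.
                        for (Future<Boolean> other : futures) {
                            other.cancel(true);
                        }
                        break;
                    }
                }

                pool.shutdownNow();              // also interrupts workers
                pool.awaitTermination(30, TimeUnit.SECONDS);
                return success;
            }
        }

    An alternative with the same effect is ExecutorCompletionService, which lets the loop react to whichever job finishes first instead of waiting on the futures in submission order.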

  • How do you unit test a unit test?

    - by FlySwat
    I was watching Rob Connery's webcasts on the MVCStoreFront App, and I noticed he was unit testing even the most mundane things, things like:

        public Decimal DiscountPrice
        {
            get
            {
                return this.Price - this.Discount;
            }
        }

    would have a test like:

        [TestMethod]
        public void Test_DiscountPrice
        {
            Product p = new Product();
            p.Price = 100;
            p.Discount = 20;
            Assert.IsEqual(p.DiscountPrice, 80);
        }

    While I am all for unit testing, I sometimes wonder if this form of test first development is really beneficial. For example, in a real process, you have 3-4 layers above your code (Business Request, Requirements Document, Architecture Document), where the actual defined business rule (Discount Price is Price - Discount) could be misdefined. If that's the situation, your unit test means nothing to you.

    Additionally, your unit test is another point of failure:

        [TestMethod]
        public void Test_DiscountPrice
        {
            Product p = new Product();
            p.Price = 100;
            p.Discount = 20;
            Assert.IsEqual(p.DiscountPrice, 90);
        }

    Now the test is flawed. Obviously in a simple test, it's no big deal, but say we were testing a complicated business rule. What do we gain here?

    Fast forward two years into the application's life, when maintenance developers are maintaining it. Now the business changes its rule, and the test breaks again; some rookie developer then fixes the test incorrectly... we now have another point of failure.

    All I see is more possible points of failure, with no real beneficial return. If the discount price is wrong, the test team will still find the issue, so how did unit testing save any work? What am I missing here? Please teach me to love TDD, as I'm having a hard time accepting it as useful so far. I want to, because I want to stay progressive, but it just doesn't make sense to me.

    EDIT: A couple people keep mentioning that testing helps enforce the spec. It has been my experience that the spec has been wrong as well, more often than not, but maybe I'm doomed to work in an organization where the specs are written by people who shouldn't be writing specs.

  • WPF DataTemplates with VS2010 designer support + reusable - would you do it that way?

    - by Christian
    Ok, I am currently tidying up all my old stuff. I ran into the issue of "code only DataTemplates" - which are really a pain in the ass. You can't see anything, they are really hard to design, and I want to improve my project. So I had the idea to use the following solution. The main benefits are: You have designer support for your data template You can easily include example sample data The file naming is consistent and easy to remember The preview does not require an additional XAML wrapper (even with code only controls) I will try to explain and illustrate my solution using a few pictures. I am interested in feedback, especially if you can imagine a better way to do it. And, of course, if you see any maintenance or performance issues. Ok, lets start with a simple PreviewObject. I want to have some data in it, so I create a subclass which will automatically fill in some dummy data. Then I add a list to the control, and name this list. Afterwards I add a DataTemplate, this is the sole reason for the whole control (to be able to see and edit the DataTemplate in place): Now I use this control to get my DataTemplate, to use it in other places. To make this easier, I added some code in the code behind, see here: Now I want a control to show me a list of PreviewItems, so I created a "code-only" control which creates an instance of my service (or gets one using DI in real world) and fills its list box with it: To view the result of this work, I added this control inside the same named XAML, this is basically only to be able to see the final result: What I do not like in this solution: The need to create the last control in "code only". So I tried something different while writing this post. The following two screenshots illustrate the approach. I am creating an instance of the service inside the DataContext, and I am using bindings to supply the Itemssourc and the ItemTemplate. The reason for the strange "static property" is refactoring support. If I hardcode the path in the designer (e.g. using "Path = PreviewHistory") and I refactor the names (which happens quite often, early design phase) - I screw up my controls without realizing it. Does anyone has a better idea for this? I am using Resharper, btw. Thanks for any input, and sorry for the image overkill. Just easier to explain that way.. Chris

  • Extracting the source code of a facebook page with JavaScript

    - by Hafizi Vilie
    If I write code in the JavaScript console of Chrome, I can retrieve the whole HTML source code by entering:

        var a = document.body.innerHTML;
        alert(a);

    For fb_dtsg on Facebook, I can easily extract it by writing:

        var fb_dtsg = document.getElementsByName('fb_dtsg')[0].value;

    Now, I am trying to extract the code "h=AfJSxEzzdTSrz-pS" from the Facebook page. The h value is especially useful for Facebook reporting. How can I get the h value for reporting? I don't know what the h value is; the h value is totally different when you communicate with different users. Without the correct h value, you can not report. Actually, the h value is AfXXXXXXXXXXX (11 character values after 'Af'), that is what I know. Do you have any ideas for getting the value, or any function to generate it on a Facebook page?

    The Facebook source snippet is below; you can view source on a Facebook profile and search for h=Af to find the value:

        <code class="hidden_elem" id="ukftg4w44">
        <!-- <div class="mtm mlm">
        ...
        ....
        <span class="itemLabel fsm">Unfriend...</span></a></li>
        <li class="uiMenuItem" data-label="Report/Block...">
        <a class="itemAnchor" role="menuitem" tabindex="-1" href="/ajax/report/social.php?content_type=0&amp;cid=1352686914&amp;rid=1352686914&amp;ref=http%3A%2F%2Fwww.facebook.com%2F%3Fq&amp;h=AfjSxEzzdTSrz-pS&amp;from_gear=timeline" rel="dialog">
        <span class="itemLabel fsm">Report/Block...</span></a></li></ul></div>
        ...
        ....
        </div> -->
        </code>

    Please guide me. How can I extract the value exactly? I tried the following code, but the comment block prevents me from extracting the code. How can I extract the value which is inside the comment block?

        var a = document.getElementsByClassName('hidden_elem')[3].innerHTML;
        alert(a);
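    Because the block in question sits inside an HTML comment, the DOM only exposes it as text, so one workable approach is exactly what the last snippet does - grab the innerHTML - and then pull the token out with a regular expression. The same idea is sketched in Java below, purely to show the pattern; the regex and the sample markup are assumptions based on the snippet above.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ReportTokenExtractor {

            // The hidden_elem block is an HTML comment, so it never becomes real DOM
            // nodes; treat its innerHTML as plain text and pull the h token out.
            static String extractH(String hiddenElemInnerHtml) {
                Matcher m = Pattern.compile("[?&](?:amp;)?h=(Af[^&\"']+)")
                                   .matcher(hiddenElemInnerHtml);
                return m.find() ? m.group(1) : null;
            }

            public static void main(String[] args) {
                String sample = "<!-- <a href=\"/ajax/report/social.php?content_type=0"
                        + "&amp;h=AfjSxEzzdTSrz-pS&amp;from_gear=timeline\"> -->";
                System.out.println(extractH(sample)); // prints AfjSxEzzdTSrz-pS
            }
        }

    In the browser console the equivalent would be to run the same regular expression over document.getElementsByClassName('hidden_elem')[n].innerHTML.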

  • What was your the most impressive technical programming achievement performed to impress a romantic

    - by DVK
    OK, so the archetypal human story is for a guy to go out and impress the girl with some wonderful achievement like slaying a dragon or building a monument or conquering neighboring tribe. This being enlightened 21st century on SO, let's morph this into a: StackOverflower performing a feat of programming to impress a romantic interest. There are two ways to do this: Technical achievement: Impressing a person with suitable background/understanding of programming with actual coding powerss you displayed. A dumb movie example would be that kid in "Hackers" move showing off his hacking skills in front of Angeline Jolie. Artistic achievement: Impressing a person with a result of running said code, whether they understand just how incredible the code itself is. An example is the animated ANSI rose (for a guy who actually wrote the ANSI code) This question is only about the first kind (technical achievements) - e.g. the person of interest was presented with impressive code/design that (s)he was able to properly appreciate. Rules (what doesn't qualify): The target audience must have been a person of romantic interest (prospective or present significant other or random hook-up). E.g. showing your program to your sister who's also a software developer doesn't count. The achievement must have been done specifically with the goal to impress such a person. However, it is OK if the achievement was done to impress a generic qualifying person, not someone specific. Although... if you write code to impress girls in general, I'd say "get a better idea of the opposite sex" The achievement must have been done with the goal of impressing the person. In other words, if you would have done it without romantic interest's knowledge anyway, it doesn't count. As examples, the following does not count: programming for your job. Programming for a coding contest. Open Source program that you'd have done anyway. The precise nature of the awesomeness of the achievement is somewhat irrelevant - from learning entire J2EE in 2 days to writing fancy game engine to implementing Python compiler in LOGO. As long as it's programming/software development related. The achievement should preferably be something other people would rank highly as well. If your date was impressed with your skill at calculating Fibonacci sequence without recursive function calls, it doesn't mean most developers will be. But it does mean you need to start finding better things to do on dates ;)

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible.

    I have 2 approaches for the client:

    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.

    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks if the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.

    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have small bursts of low latencies but often be between 4,000-7,000us!

    Through inserting gettimeofday's here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?

  • Rails scalar query

    - by Craig
    I need to display a UI element (e.g. a star or checkmark) for employees that are 'favorites' of the current user (another employee). The Employee model has the following relationship defined to support this:

        has_and_belongs_to_many :favorites,
                                :class_name => "Employee",
                                :join_table => "favorites",
                                :association_foreign_key => "favorite_id",
                                :foreign_key => "employee_id"

    The favorites table has two fields: employee_id and favorite_id. If I were to write SQL, the following query would give me the results that I want:

        SELECT id, account,
            IF((SELECT favorite_id FROM favorites
                WHERE favorite_id = p.id AND employee_id = ?) IS NULL, FALSE, TRUE) isFavorite
        FROM employees

    where the ? would be replaced by the session[:user_id]. How do I represent the isFavorite scalar query in Rails? Another approach would use a query like this:

        SELECT id, account, IF(favorite_id IS NULL, FALSE, TRUE) isFavorite
        FROM employees e
        LEFT OUTER JOIN favorites f ON e.id = f.favorite_id AND employee_id = ?

    Again, the ? is replaced by the session[:user_id] value. I've had some success writing this in Rails:

        ee = Employee.find(:all,
            :joins => "LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=1",
            :select => "employees.*,favorites.favorite_id")

    Unfortunately, when I try to make this query 'dynamic' by replacing the '1' with a '?', I get errors:

        ee = Employee.find(:all,
            :joins => ["LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?", 1],
            :select => "employees.*,favorites.favorite_id")

    Obviously, I have the syntax wrong, but can :joins expressions be 'dynamic'? Is this a case for a lambda expression? I do hope to add other filters to this query and use it with will_paginate and acts_as_taggable_on, if that makes a difference.

    Edit: errors from trying to make :joins dynamic:

        ActiveRecord::ConfigurationError: Association named 'LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?' was not found; perhaps you misspelled it?
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1906:in `build'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1911:in `build'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `each'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `build'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1830:in `initialize'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `new'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `add_joins!'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1686:in `construct_finder_sql'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1548:in `find_every'
            from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:615:in `find'

  • Is it possible to programatically catch JavaScript SyntaxErrors?

    - by Matty
    I don't think that this is doable, but wanted to throw it out to the community to confirm my suspicions. Let's say you're writing a program in another language like Python, PHP, or ASP. This program is intended to build another program written in JavaScript. However, the first program is unfortunately not immune to errors. So, occasionally, the program which builds the JavaScript program does some funky stuff and outputs a syntax error in the JavaScript source. Now some user goes and loads the program and it essentially halts, because the web browser running it can't properly parse the JavaScript. This user is probably not going to be happy. I wouldn't be.

    This brings me to my question. Is it possible to write an error handler that would catch these kind of syntax problems, allowing the application to fail gracefully? Here's an example:

        <html>
        <head>
            <script type="text/javascript" charset="utf-8">
                window.onerror = errorHandler;
                function errorHandler(a,b,c) {
                    alert('horray! No wait, Booo!');
                }
                vara jfs;
            </script>
        </head>
        <body>
            Can this be done?
        </body>
        </html>

    or

        <html>
        <head>
            <script type="text/javascript" charset="utf-8">
                try {
                    vara jfs;
                } catch (e) {
                    alert('horray! No wait, Booo!');
                }
            </script>
        </head>
        <body>
            Can this be done?
        </body>
        </html>

  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: Do you still limit the length of lines of code to a fixed maximum?

    This was certainly a convention of the past for many languages; one would typically cap the number of characters per line to a value such as 80 (and more recently 100 or 120, I believe). As far as I understand, the primary reasons for limiting line length are:

    - Readability - You don't have to scroll over horizontally when you want to see the end of some lines.
    - Printing - Admittedly (at least in my experience), most code that you are working on does not get printed out on paper, but by limiting the number of characters you can ensure that formatting doesn't get messed up when printed.
    - Past editors (?) - Not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer.

    I'm sure there are points that I am still missing out, so feel free to add to these... Now, when I tend to observe C or C# code nowadays, I often see a number of different styles, the main ones being:

    - Line length capped to 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer ones of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays.
    - No line length capping at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not too rare either.
    - Inconsistent capping of line length. The length of some lines is limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not at all.

    My personal preference here (at least recently) has been to cap the line length to 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages in this, especially when you end up writing code that's indented 3 or 4 levels and then having to include a long string literal - though I often take this as a sign to refactor my code! In particular, I am curious what the C and C# coders (or anyone who uses Visual Studio for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject.

    Edit: Thanks for all the answers - I appreciate the variety of opinions here, all presenting sound reasons. Consensus does seem to be tipping in the direction of always (or almost always) limiting the line length. Interestingly, it seems to be in various coding standards to limit the line length. Judging by some of the answers, both the Python and Google CPP guidelines set the limit at 80 chars. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if there are ones anywhere.

  • How can I easily maintain a cross-file JavaScript Library Development Environment

    - by John
    I have been developing a new JavaScript application which is rapidly growing in size. My entire JavaScript Application has been encapsulated inside a single function, in a single file, in a way like this: (function(){ var uniqueApplication = window.uniqueApplication = function(opts){ if (opts.featureOne) { this.featureOne = new featureOne(opts.featureOne); } if (opts.featureTwo) { this.featureTwo = new featureTwo(opts.featureTwo); } if (opts.featureThree) { this.featureThree = new featureThree(opts.featureThree); } }; var featureOne = function(options) { this.options = options; }; featureOne.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureTwo = function(options) { this.options = options; }; featureTwo.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureThree = function(options) { this.options = options; }; featureThree.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; })(); In the same file after the anonymous function and execution I do something like this: (function(){ var instanceOfApplication = new uniqueApplication({ featureOne:"dataSource", featureTwo:"drawingCanvas", featureThree:3540 }); })(); Before uploading this software online I pass my JavaScript file, and all it's dependencies, into Google Closure Compiler, using just the default Compression, and then I have one nice JavaScript file ready to go online for production. This technique has worked marvelously for me - as it has created only one global footprint in the DOM and has given me a very flexible framework to grow each additional feature of the application. However - I am reaching the point where I'd really rather not keep this entire application inside one JavaScript file. I'd like to move from having one large uniqueApplication.js file during development to having a separate file for each feature in the application, featureOne.js - featureTwo.js - featureThree.js Once I have completed offline development testing, I would then like to use something, perhaps Google Closure Compiler, to combine all of these files together - however I want these files to all be compiled inside of that scope, as they are when I have them inside one file - and I would like for them to remain in the same scope during offline testing too. I see that Google Closure Compiler supports an argument for passing in modules but I haven't really been able to find a whole lot of information on doing something like this. Anybody have any idea how this could be accomplished - or any suggestions on a development practice for writing a single JavaScript Library across multiple files that still only leaves one footprint on the DOM?

  • BDD for C# NUnit

    - by mjezzi
    I've been using a home brewed BDD Spec extension for writing BDD style tests in NUnit, and I wanted to see what everyone thought. Does it add value? Does it suck? If so, why? Is there something better out there? Here's the source: https://github.com/mjezzi/NSpec

    There are two reasons I created this:

    1. To make my tests easy to read.
    2. To produce a plain English output to review specs.

    Here's an example of how a test will look - since zombies seem to be popular these days..

    Given a Zombie, Person, and IWeapon:

        namespace Project.Tests.PersonVsZombie
        {
            public class Zombie
            {
            }

            public interface IWeapon
            {
                void UseAgainst( Zombie zombie );
            }

            public class Person
            {
                private IWeapon _weapon;
                public bool IsStillAlive { get; set; }

                public Person( IWeapon weapon )
                {
                    IsStillAlive = true;
                    _weapon = weapon;
                }

                public void Attack( Zombie zombie )
                {
                    if( _weapon != null )
                        _weapon.UseAgainst( zombie );
                    else
                        IsStillAlive = false;
                }
            }
        }

    And the NSpec styled tests:

        public class PersonAttacksZombieTests
        {
            [Test]
            public void When_a_person_with_a_weapon_attacks_a_zombie()
            {
                var zombie = new Zombie();
                var weaponMock = new Mock<IWeapon>();
                var person = new Person( weaponMock.Object );

                person.Attack( zombie );

                "It should use the weapon against the zombie".ProveBy( spec =>
                    weaponMock.Verify( x => x.UseAgainst( zombie ), spec ) );

                "It should keep the person alive".ProveBy( spec =>
                    Assert.That( person.IsStillAlive, Is.True, spec ) );
            }

            [Test]
            public void When_a_person_without_a_weapon_attacks_a_zombie()
            {
                var zombie = new Zombie();
                var person = new Person( null );

                person.Attack( zombie );

                "It should cause the person to die".ProveBy( spec =>
                    Assert.That( person.IsStillAlive, Is.False, spec ) );
            }
        }

    You'll get the spec output in the output window:

        [PersonVsZombie]
        - PersonAttacksZombieTests
            When a person with a weapon attacks a zombie
                It should use the weapon against the zombie
                It should keep the person alive
            When a person without a weapon attacks a zombie
                It should cause the person to die

        2 passed, 0 failed, 0 skipped, took 0.39 seconds (NUnit 2.5.5).

  • RMI-applets - Cannot understand error message

    - by aeter
    In a simple RMI game I'm writing (an assignment in uni), I reveice: java.rmi.MarshalException: error marshalling arguments; nested exception is: java.net.SocketException: Broken pipe at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:138) at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178) at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132) at $Proxy2.drawWorld(Unknown Source) at PlayerServerImpl$1.actionPerformed(PlayerServerImpl.java:180) at javax.swing.Timer.fireActionPerformed(Timer.java:271) at javax.swing.Timer$DoPostEvent.run(Timer.java:201) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209) at java.awt.EventQueue.dispatchEvent(EventQueue.java:597) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) The error message appears after the second Player is registered with the RMI Server and the server starts to send the image (the array of pixels) to the 2 applets. The PlayerImpl and the PlayerServerImpl both extend UnicastRemoteObject. I have been struggling with other error messages for some time now, but I cannot understand how to troubleshoot this one. Please help. The relevant parts of the code are: PlayerServerImpl.java: ... timer = new Timer(10, new ActionListener() { // every 10 milliseconds do: @Override public void actionPerformed(ActionEvent e) { ... BufferedImage buff_image = new BufferedImage(GAME_APPLET_WIDTH, GAME_APPLET_HEIGHT, BufferedImage.TYPE_INT_RGB); // create a graphics context on the buffered image Graphics buff_g = buff_image.createGraphics(); ... // draw the score somewhere on the screen buff_g.drawString(score, GAME_APPLET_WIDTH - 20, 10); ... int[] rgbs = new int[GAME_APPLET_WIDTH * GAME_APPLET_HEIGHT]; int imgPixelsGrabbed[] = buff_image.getRGB(0,0,GAME_APPLET_WIDTH,GAME_APPLET_HEIGHT,rgbs,0,GAME_APPLET_WIDTH); // send the new state to the applets for (Player player : players) { player.drawWorld(imgPixelsGrabbed); System.out.println("Sent image to player"); } PlayerImpl.java: private PlayerApplet applet; public PlayerImpl(PlayerApplet applet) throws RemoteException { super(); this.applet = applet; } ... @Override public void drawWorld(int[] imgPixelsGrabbed) throws RemoteException { applet.setWorld(imgPixelsGrabbed); applet.repaint(); } ... PlayerApplet.java: ... private int[] world; // an array of pixels for the new image to be drawn ... // register players player = new PlayerImpl(applet); String serverIPAddressPort = ipAddressField.getText(); if (validateIPAddressPort(serverIPAddressPort)) { server = (PlayerServer) Naming.lookup("rmi://" + serverIPAddressPort + "/PlayerServer"); server.register(player); idPlayer = server.sendPlayerID(); ... @Override public void update(Graphics g) { buff_img = createImage((ImageProducer) new MemoryImageSource(getWidth(), getHeight(), world, 0, getWidth())); Graphics gr = buff_img.getGraphics(); paint(gr); g.drawImage(buff_img, 0, 0, this); } public void setWorld(int[] world) { this.world = world; }

  • How to build, sort and print a tree of a sort?

    - by Tuplanolla
    This is more of an algorithmic dilemma than a language-specific problem, but since I'm currently using Ruby I'll tag this as such. I've already spent over 20 hours on this and I would've never believed it if someone told me writing a LaTeX parser was a walk in the park in comparison. I have a loop to read hierarchies (that are prefixed with \m) from different files art.tex: \m{Art} graphical.tex: \m{Art}{Graphical} me.tex: \m{About}{Me} music.tex: \m{Art}{Music} notes.tex: \m{Art}{Music}{Sheet Music} site.tex: \m{About}{Site} something.tex: \m{Something} whatever.tex: \m{Something}{That}{Does Not}{Matter} and I need to sort them alphabetically and print them out as a tree About Me (me.tex) Site (site.tex) Art (art.tex) Graphical (graphical.tex) Music (music.tex) Sheet Music (notes.tex) Something (something.tex) That Does Not Matter (whatever.tex) in (X)HTML <ul> <li>About</li> <ul> <li><a href="me.tex">Me</a></li> <li><a href="site.tex">Site</a></li> </ul> <li><a href="art.tex">Art</a></li> <ul> <li><a href="graphical.tex">Graphical</a></li> <li><a href="music.tex">Music</a></li> <ul> <li><a href="notes.tex">Sheet Music</a></li> </ul> </ul> <li><a href="something.tex">Something</a></li> <ul> <li>That</li> <ul> <li>Doesn't</li> <ul> <li><a href="whatever.tex">Matter</a></li> </ul> </ul> </ul> </ul> using Ruby without Rails, which means that at least Array.sort and Dir.glob are available. All of my attempts were formed like this (as this part should work just fine). def fss_brace_array(ss_input)#a concise version of another function; converts {1}{2}...{n} into an array [1, 2, ..., n] or returns an empty array ss_output = ss_input[1].scan(%r{\{(.*?)\}}) rescue ss_output = [] ensure return ss_output end #define tree s_handle = File.join(:content.to_s, "*") Dir.glob("#{s_handle}.tex").each do |s_handle| File.open(s_handle, "r") do |f_handle| while s_line = f_handle.gets if s_all = s_line.match(%r{\\m\{(\{.*?\})+\}}) s_all = s_all.to_a #do something with tree, fss_brace_array(s_all) and s_handle break end end end end #do something else with tree

  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7 Computing the Message Hashes. I am working with emails that I have dumped using a modified version of the EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk.

    I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code:

        SHA256Managed hasher = new SHA256Managed();
        ASCIIEncoding asciiEncoding = new ASCIIEncoding();

        string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt");
        string headerDelimiter = "\r\n\r\n";
        int headerEnd = rawFullMessage.IndexOf(headerDelimiter);
        string header = rawFullMessage.Substring(0, headerEnd);
        string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length);

        byte[] bodyBytes = asciiEncoding.GetBytes(body);
        byte[] bodyHash = hasher.ComputeHash(bodyBytes);
        string bodyBase64 = Convert.ToBase64String(bodyHash);

        string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=";

        Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}",
            Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64);

    The output from the above code is:

        Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
        Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
        Are equal: True

    Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on it (failing to get it to work) I finally came across a message with c=simple/simple, which means that you process the whole body as is minus any empty CRLF at the end of the body. (Really, the rules for Body Canonicalization are quite ... simple.)

    Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Now, using the above code and updating the expectedBase64 hash, I get the following results:

        Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw=
        Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw=
        Are equal: False

    The expected hash is the value from the bh= field of the DKIM-Signature header. Now, the file used in the second test is a direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt.

    At this point, no matter how I modify the second email, changing the start position or number of CRLF at the end of the file, I cannot get the files to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
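    For reference, the "simple" body canonicalization rule on its own is small: remove all empty lines at the end of the body, then make sure the body ends with exactly one CRLF before hashing. A sketch of just that rule, restated in Java for illustration (it deliberately ignores l= length limits and any transport rewriting):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Base64;

        public class SimpleBodyHash {

            // RFC 4871 3.4.3: strip trailing empty lines, then terminate with one CRLF.
            static String canonicalize(String body) {
                String trimmed = body;
                while (trimmed.endsWith("\r\n")) {
                    trimmed = trimmed.substring(0, trimmed.length() - 2);
                }
                return trimmed + "\r\n";
            }

            static String bodyHash(String rawBody) throws Exception {
                byte[] canonical = canonicalize(rawBody).getBytes(StandardCharsets.US_ASCII);
                byte[] digest = MessageDigest.getInstance("SHA-256").digest(canonical);
                return Base64.getEncoder().encodeToString(digest);
            }
        }

    If the hash over the dumped bytes still disagrees with bh=, it usually means the bytes on disk are not byte-for-byte what was signed; bare LF vs CRLF line endings and dot-unstuffing are common culprits.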

  • How can a test script inform R CMD check that it should emit a custom message?

    - by mariotomo
    I'm writing an R package (delftfews) here at the office. We are using svUnit for unit testing. Our process for describing new functionality: we define new unit tests, initially marked as DEACTIVATED; one block of tests at a time, we activate them and implement the function described by the tests. Almost all the time we have a small number of DEACTIVATED tests, relative to functions that might be dropped or will be implemented.

    My problem/question is: can I alter the doSvUnit.R so that R CMD check pkg emits a NOTE (i.e. a custom message "NOTE" instead of "OK") in case there are DEACTIVATED tests? As of now, we see only that the active tests don't give errors:

        .
        .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         OK
        * checking PDF version of manual ... OK

    which is all right if all tests succeed, but less all right if there are skipped tests and definitely wrong if there are failing tests. In this case, I'd actually like to see a NOTE or a WARNING like the following:

        .
        .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE 6 test(s) were skipped.
         WARNING 1 test(s) are failing.
        * checking PDF version of manual ... OK

    As of now, we have to open the doSvUnit.Rout to check the real test results.

    I contacted two of the maintainers at r-forge and CRAN and they pointed me to the sources of R, in particular the testing.R script. If I understand it correctly, to answer this question we need to patch the tools package: scripts in the tests directory are called using a system call, output (stdout and stderr) goes to one single file, and there are two possible outcomes: ok or not ok. So I opened a change request on R, proposing something like bit-coding the return status: bit-0 for ERROR (as it is now), bit-1 for WARNING, bit-2 for NOTE. With my modification, it would be easy to produce this output:

        .
        .
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE - please check doSvUnit.Rout.
         WARNING - please check doSvUnit.Rout.
        * checking PDF version of manual ... OK

    Brian Ripley replied "There are however several packages with properly written unit tests that do signal as required. Please do take this discussion elsewhere: R-bugs is not the place to ask questions." and closed the change request.

    Does anybody have hints?

  • In GWT I'd like to load the main page on www.domain.com and a dynamic page on www.domain.com/xyz - I

    - by Mike Brecht
    Hi, I have a question for which I'm sure there must be a simple solution. I'm writing a small GWT application where I want to achieve this:

    - www.domain.com : should serve the welcome page
    - www.domain.com/xyz : should serve page xyz, where xyz is just a key for an item in a database. If there is an item associated with key xyz, I'll load that and show a page, otherwise I'll show a 404 error page.

    I was trying to modify the web.xml file accordingly but I just couldn't make it work. I could make it work with an url-pattern if the key in question is after another /, for example: www.domain.com/search/xyz. However, I'd like to have the key xyz directly following the root / (http://www.domain.com/xyz). Something like

        <servlet-mapping>
            <servlet-name>main</servlet-name>
            <url-pattern>/</url-pattern>
        </servlet-mapping>

    doesn't seem to work, as I don't know how to address the main page index.html (as it's not an actual servlet) which will then load my main GWT module. I could make it work with a bad workaround (see below): redirecting a 404 exception to index.html and then doing the lookup in the main entry point, but I'm sure that's not the best practice, also for SEO purposes. Can anyone give me a hint on how to configure the web.xml with GWT for my purpose? Thanks a lot. Mike

    Work-around via 404:

    Web.xml:

        <web-app>
            <error-page>
                <exception-type>404</exception-type>
                <location>/index.html</location>
            </error-page>
            <welcome-file-list>
                <welcome-file>index.html</welcome-file>
            </welcome-file-list>
        </web-app>

    index.html -- Main Entry point -- onModuleLoad():

        String path = Window.Location.getPath();
        if (path == null || path.length() == 0 || path.equalsIgnoreCase("/") || path.equalsIgnoreCase("/index.html")) {
            ... // load main page
        } else {
            lookup(path.substring(1)); // that's key xyz
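    One common alternative to the 404 trick is a filter mapped to /* that lets the host page and the GWT-generated files pass through and forwards everything else to index.html after checking the key. A rough sketch only; the path test (anything containing a dot is treated as a real file) and the keyExists lookup are purely illustrative:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical filter, mapped to /* in web.xml. Filters only run on the
        // REQUEST dispatch by default, so the forward below is served normally.
        public class KeyRouterFilter implements Filter {

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest req = (HttpServletRequest) request;
                HttpServletResponse resp = (HttpServletResponse) response;

                String path = req.getRequestURI().substring(req.getContextPath().length());

                // Root, host page and anything that looks like a file go through untouched.
                if (path.isEmpty() || path.equals("/") || path.contains(".")) {
                    chain.doFilter(request, response);
                    return;
                }

                String key = path.substring(1); // e.g. "xyz"
                if (keyExists(key)) {
                    // Serve the host page; onModuleLoad() can read the key from the URL.
                    req.getRequestDispatcher("/index.html").forward(req, resp);
                } else {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                }
            }

            private boolean keyExists(String key) {
                return false; // placeholder for the real database lookup
            }

            public void init(FilterConfig config) { }
            public void destroy() { }
        }

    Because the forward keeps the browser URL unchanged, Window.Location.getPath() in the entry point still returns /xyz, so the existing lookup code keeps working.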

  • Ultimate Get/Post with Android Thread for Dummies

    - by Jayomat
    Hi, I'm writing an app to check the bus timetables. Therefore I need to post some data to an HTML page, submit it, and parse the resulting page with htmlparser. Though it may be asked a lot, can someone help me work out:

    1) whether this page supports post/get (I think it does),
    2) which fields I need to use, and
    3) how to make the actual request?

    This is my code so far:

        String url = "http://busspur02.aseag.de/bs.exe?Cmd=RV&Karten=true&DatumT=30&DatumM=4&DatumJ=2010&ZeitH=&ZeitM=&Suchen=%28S%29uchen&GT0=&HT0=&GT1=&HT1=";
        String charset = "CP1252";

        System.out.println("startFrom: " + start_from);
        System.out.println("goTo: " + destination);

        //String tag.v
        List<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("HTO", start_from));
        params.add(new BasicNameValuePair("HT1", destination));
        params.add(new BasicNameValuePair("GTO", "Aachen"));
        params.add(new BasicNameValuePair("GT1", "Aachen"));
        params.add(new BasicNameValuePair("DatumT", day));
        params.add(new BasicNameValuePair("DatumM", month));
        params.add(new BasicNameValuePair("DatumJ", year));
        params.add(new BasicNameValuePair("ZeitH", hour));
        params.add(new BasicNameValuePair("ZeitM", min));

        UrlEncodedFormEntity query = new UrlEncodedFormEntity(params, charset);

        HttpPost post = new HttpPost(url);
        post.setEntity(query);

        InputStream response = new DefaultHttpClient().execute(post).getEntity().getContent();

        // Now do your thing with the facebook response.
        String source = readText(response, "CP1252");
        Log.d(TAG_AVV, response.toString());
        System.out.println("STREAM " + source);

    One person also gave me a hint to use Firebug to read what's going on on the page, but I don't really understand what to look for, or more precisely, how to use the provided information.

    I also find it confusing, for example, that when I enter the data by hand, the URL says, for example, "....HTO=Kaiserplatz&...", but in Firebug, the same Kaiserplatz is connected to a different field, in this case:

        \<\td class="Start3" Kaiserplatz <\/td

    (I inserted \ to make it visible.)

    The last line in my code prints the HTML page, but without having sent a request... it's printed as if there was no input at all. My app is almost done, I hope someone can help me out to finish it! Thanks in advance.
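    If the form turns out to submit via GET rather than POST (the parameters showing up in the URL above suggest it might), the request is just the base URL plus an encoded query string. A hedged sketch using the same HttpClient classes as the snippet above; the field names are copied from that URL and would still need to be checked against the actual form (Firebug's Net panel shows the exact method and parameter names the browser sends):

        import java.net.URLEncoder;
        import java.util.List;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class BusQuery {

            // Build ...bs.exe?Cmd=RV&Karten=true&HT0=...&GT0=... from name/value pairs.
            static String buildUrl(List<NameValuePair> params) throws Exception {
                StringBuilder sb =
                        new StringBuilder("http://busspur02.aseag.de/bs.exe?Cmd=RV&Karten=true");
                for (NameValuePair p : params) {
                    sb.append('&').append(p.getName()).append('=')
                      .append(URLEncoder.encode(p.getValue(), "CP1252"));
                }
                return sb.toString();
            }

            // Fetch the result page as a string ready to hand to the HTML parser.
            static String fetch(List<NameValuePair> params) throws Exception {
                HttpClient client = new DefaultHttpClient();
                HttpGet get = new HttpGet(buildUrl(params));
                return EntityUtils.toString(client.execute(get).getEntity(), "CP1252");
            }
        }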

  • Salesforce consuming XML and display data in Visualforce report

    - by JavaKungFu
    Firstly, this question requires a bit of introduction so please bear with me. The high level is that I am connecting to an outside web service which will return some XML to my Apex controller. The idea is that I want to display the XML returned in a nice tabular format in a Visualforce page. The format of the XML coming back will look something like this:

        <Wrapper>
            <reportTable name='table_id' title='Report Title'>
                <row>
                    <Element1><![CDATA[campaign_id]]></Element1>
                    <Element2><![CDATA[577373]]></Element2>
                    <Element3><![CDATA[4129]]></Element3>
                    <Element4 dataFormat='2' dataSuffix='%'><![CDATA[0.7151]]></Element4>
                    <Element5><![CDATA[2010-04-04]]></Element5>
                    <Element6><![CDATA[2010-05-03]]></Element6>
                </row>
            </reportTable>
        ...

    Now currently I am using the XMLdom utility class (developed by SF for XML functions) to map this data into a custom object "reportTable" which contains a list of "row" custom objects. The reason I am building it out this way is because I don't know how many elements will be in each row, nor the number of rows. The Visualforce page looks something like this:

        <table>
            <apex:repeat value="{!reportTables}" var="table">
                <apex:repeat value="{!table.rows}" var="row">
                    <tr>
                        <apex:repeat value="{!row.ColumnValue}" var="column">
                            <apex:repeat value="{!column}" var="value">
                                <td>
                                    <apex:outputText value="{!value}" />
                                </td>
                            </apex:repeat>
                        </apex:repeat>
                    </tr>
                </apex:repeat>

    Questions are:

    1) Does this seem like a good approach to the problem?
    2) Is there a simpler/better way to consume the XML besides writing my own custom objects to map VF to?

    Open to any and all suggestions. I really hope there is a better way than building the HTML table myself, as then I also have to deal with styling and alignment etc. Thanks.
