Search Results

Search found 2242 results on 90 pages for 'discussion'.

Page 80/90 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • How can a test script inform R CMD check that it should emit a custom message?

    - by mariotomo
    I'm writing an R package (delftfews) here at the office. We are using svUnit for unit testing. Our process for describing new functionality: we define new unit tests, initially marked as DEACTIVATED; one block of tests at a time, we activate them and implement the function described by the tests. Almost all the time we have a small number of DEACTIVATED tests, relating to functions that might be dropped or will be implemented. My problem/question is: can I alter doSvUnit.R so that R CMD check pkg emits a NOTE (i.e. a custom message "NOTE" instead of "OK") in case there are DEACTIVATED tests? As of now, we see only that the active tests don't give errors: . . * checking for unstated dependencies in tests ... OK * checking tests ... Running ‘doSvUnit.R’ OK * checking PDF version of manual ... OK which is all right if all tests succeed, but less all right if there are skipped tests and definitely wrong if there are failing tests. In this case, I'd actually like to see a NOTE or a WARNING like the following: . . * checking for unstated dependencies in tests ... OK * checking tests ... Running ‘doSvUnit.R’ NOTE 6 test(s) were skipped. WARNING 1 test(s) are failing. * checking PDF version of manual ... OK As of now, we have to open doSvUnit.Rout to check the real test results. I contacted two of the maintainers at r-forge and CRAN and they pointed me to the sources of R, in particular the testing.R script. If I understand it correctly, answering this question requires patching the tools package: scripts in the tests directory are called using a system call, output (stdout and stderr) goes to one single file, and there are only two possible outcomes: ok or not ok. So I opened a change request on R, proposing something like bit-coding the return status: bit 0 for ERROR (as it is now), bit 1 for WARNING, bit 2 for NOTE. With my modification, it would be easy to produce this output: . . * checking for unstated dependencies in tests ... OK * checking tests ... Running ‘doSvUnit.R’ NOTE - please check doSvUnit.Rout. WARNING - please check doSvUnit.Rout. * checking PDF version of manual ... OK Brian Ripley replied "There are however several packages with properly written unit tests that do signal as required. Please do take this discussion elsewhere: R-bugs is not the place to ask questions." and closed the change request. Does anybody have hints?

    Read the article

  • Alternatives to requiring users to register for an account?

    - by jamieb
    I'm working on a side project to build a new web app idea of mine. For the sake of discussion, let's say this app displays a random photograph of a famous work of art. On a scale of 1 to 5, users are asked to rate how well they like each piece of art, and then are shown the next photo. Eventually, the app is able to get a sense of the person's style and is able to recommend artwork that he/she may find pleasing. The whole concept is similar to Netflix. I understand how to do all the preference matching logic (although not as sophisticated as Netflix). But I'd like to find a way to do this without requiring that users create an account first. This is a novelty website that a typical user might use only a handful of times. Requiring registration is overkill and will likely drastically reduce its utility. I'd like to allow people to begin rating artwork within five seconds of their initial pageview, yet maintain the integrity of the voting (since recommendations are predicated on how other people have rated the various pieces of artwork). Can it be done? Some ideas: OpenID. The perfect solution except for the fact that it's not widely used and my target audience isn't the most technically adept demographic. Text message. User inputs a phone number and is texted a four-digit code to key into the web app. Quick, easy, and a great way to limit abuse. However, privacy concerns abound... people are probably even less likely to give me their phone number than their email address. Facebook login. I personally don't have a Facebook account due to privacy concerns. And I'd really hate to support such a proprietary platform. Hash code/Bookmark. The visitor's initial pageview generates a 5- or 6-digit alphanumeric code that is embedded in each subsequent URL. They can bookmark any page to save their state. Good: Very simple system that doesn't require any user action. Bad: Very easy to stuff the ballot box, and it might be difficult to account for users sharing the link containing their ID code via email or social networking sites.
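
    A quick way to see how light the hash code/bookmark option is: the sketch below is a hypothetical Python fragment (the code length, alphabet, and function name are my own assumptions, not anything from the post) that generates the per-visitor code to embed in subsequent URLs.

        import secrets
        import string

        ALPHABET = string.ascii_lowercase + string.digits  # URL-safe characters

        def new_visitor_code(length=6):
            # Short anonymous ID generated on the first pageview and carried in every URL.
            return ''.join(secrets.choice(ALPHABET) for _ in range(length))

        print(new_visitor_code())  # e.g. 'k3v9qz'

    Voting integrity then comes down to rate-limiting votes per code, which is exactly the ballot-box-stuffing trade-off described above.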

    Read the article

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and response in as close as possible to the raw, on-the-wire format (i.e including HTTP method, path, all headers, and the body) into a database. What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc. and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations? Edit: I note that HttpRequest has a SaveAs method which can save the request to disk but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan. Edit 2: Please note that I said the whole request including method, path, headers etc. The current responses only look at the body streams which does not include this information. Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see: I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. If you know the complete raw data (including headers, url, http method, etc.) simply cannot be retrieved then that would be useful to know. Similarly if you know how to get it all in the raw format (yes, I still mean including headers, url, http method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it. Please note: Before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea, I'm looking for how it can be done.

    Read the article

  • What to do when you need more verbs in REST

    - by Richard Levasseur
    There is another question similar to mine, but the discussion veered away from the problem I'm encountering. Say I have a system that deals with expense reports (ER). You can create and edit them, add attachments, and approve/reject them. An expense report might look like this: GET /er/1 => {"title": "Trip to NY", "totalcost": "400 USD", "comments": [ "john: Please add the total cost", "mike: done, can you approve it now?" ], "approvals": [ {"john": "Pending"}, {"finance-group": "Pending"}] } That looks fine, right? That's what an expense report document looks like. If you want to update it, you can do this: POST /er/1 {"title": "Trip to NY 2010"} If you want to approve it, you can do this: POST /er/1/approval {"approved": true} But what if you want to update the report and approve it at the same time? How do we do that? If you only wanted to approve, then doing a POST to something like /er/1/approval makes sense. We could put a flag in the URL, POST /er/1?approve=1, and send the data changes as the body, but that flag doesn't seem RESTful. We could put in a special field to be submitted, but that seems a bit hacky, too. If we did that, then why not send up data with attributes like set_title or add_to_cost? We could create a new resource for updating and approving, but (1) I can't think of how to name it without verbs, and (2) it doesn't seem right to name a resource based on what actions can be done to it (what happens if we add more actions?). We could have an X-Approve: True|False header, but headers seem like the wrong tool for the job. It'd also be difficult to get or set headers without using javascript in a browser. We could use a custom media-type, application/approve+yes, but that seems no better than creating a new resource. We could create a temporary "batch operations" url, /er/1/batch/A. The client then sends multiple requests, perhaps POST /er/1/batch/A to update, then POST /er/1/batch/A/approval to approve, then POST /er/1/batch/A/status to end the batch. On the backend, the server queues up all the batch requests somewhere, then processes them in the same backend transaction when it receives the "end batch processing" request. The downside with this is, obviously, that it introduces a lot of complexity. So, what is a good, general way to solve the problem of performing multiple actions in a single request? General, because it's easy to imagine additional actions that might be done in the same request: Suppress or send notifications (to email, chat, another system, whatever) Override some validation (maximum cost, names of dinner attendees) Trigger a backend workflow that doesn't have a representation in the document.

    Read the article

  • Would someone mind giving suggestions for this new assembly language?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better with a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective being to rewrite the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is so much extra time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion. Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated! hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo save Store load Retrieve L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller exit End the program print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack Question: How would you redesign, rewrite, or rename the previous mnemonics and for what reasons?
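
    Since the original interpreter and assembler were written in Python, a toy evaluator may help pin down what some of the proposed mnemonics mean while their names are being discussed. This is only an assumed sketch covering a handful of the instructions above (hold, add, swap, print int, exit), not the poster's implementation.

        def run(program):
            # program: list of tuples such as ('hold', 2) or ('add',)
            stack = []
            for op in program:
                if op[0] == 'hold':         # push the number onto the stack
                    stack.append(op[1])
                elif op[0] == 'add':        # pop two numbers, push their sum
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op[0] == 'swap':       # swap the top two items on the stack
                    stack[-1], stack[-2] = stack[-2], stack[-1]
                elif op[0] == 'print int':  # output (and pop) the number at the top of the stack
                    print(stack.pop())
                elif op[0] == 'exit':       # end the program
                    break

        run([('hold', 2), ('hold', 40), ('add',), ('print int',), ('exit',)])  # prints 42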

    Read the article

  • Rails 3.1 text_field form helper and jQuery datepicker date format

    - by DanyW
    I have a form field in my Rails view like this: <%= f.text_field :date, :class => "datepicker" %> A javascript function converts all input fields of that class to a jQueryUI datepicker. $(".datepicker").datepicker({ dateFormat : "dd MM yy", buttonImageOnly : true, buttonImage : "<%= asset_path('iconDatePicker.gif') %>", showOn : "both", changeMonth : true, changeYear : true, yearRange : "c-20:c+5" }) So far so good. I can edit the record and it persists the date correctly all the way to the DB. My problem is when I want to edit the record again. When the form is pre-populated from the record it displays in 'yyyy-mm-dd' format. The javascript only formats dates which are selected in the datepicker control. Is there any way I can format the date retrieved from the database? At which stage can I do this? Thanks, Dany. Some more details following the discussion below, my form is spread across two files, a view and a partial. Here's the view: <%= form_tag("/shows/update_individual", :method => "put") do %> <% for show in @shows %> <%= fields_for "shows[]", show do |f| %> <%= render "fields", :f => f %> <% end %> <% end %> <%= submit_tag "Submit"%> <% end %> And here's the _fields partial view: <p> <%= f.label :name %> <%= f.text_field :name %> </p> <p> <%= f.date %> <%= f.label :date %> <%= f.text_field :date, :class => "datepicker" %> </p> Controller code: def edit_individual @shows = Show.find(params[:show_ids]) end

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1);
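
    For comparison with the concatenation trick above, the usual Postgres idiom for "the row with the max value per group" is DISTINCT ON (or a window function), which needs no self join. Below is a rough sketch using psycopg2 against the example table; the driver and connection string are assumptions, and any DB-API driver would look much the same.

        import psycopg2  # assumed driver; not part of the original question

        conn = psycopg2.connect("dbname=test")  # placeholder connection string
        cur = conn.cursor()

        # DISTINCT ON keeps the first row per usr_id under the ORDER BY below,
        # i.e. the latest time_stamp, with ties broken by the highest trans_id.
        cur.execute("""
            SELECT DISTINCT ON (usr_id)
                   time_stamp, lives_remaining, usr_id, trans_id
            FROM lives
            ORDER BY usr_id, time_stamp DESC, trans_id DESC
        """)
        for row in cur.fetchall():
            print(row)

    Whether this beats the window-function form (ROW_NUMBER() OVER (PARTITION BY usr_id ORDER BY time_stamp DESC, trans_id DESC)) on tens of millions of rows is worth checking with EXPLAIN against the real data and indexes.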

    Read the article

  • Is there anyway to exclude artifacts inherited from a parent POM?

    - by Miguel
    Artifacts from dependencies can be excluded by declaring an <exclusions> element inside a <dependency>. But in this case, an artifact inherited from a parent project needs to be excluded. An excerpt of the POM under discussion follows: <project> <modelVersion>4.0.0</modelVersion> <groupId>test</groupId> <artifactId>jruby</artifactId> <version>0.0.1-SNAPSHOT</version> <parent> <artifactId>base</artifactId> <groupId>es.uniovi.innova</groupId> <version>1.0.0</version> </parent> <dependencies> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>ALL-DEPS</artifactId> <version>1.0</version> <scope>provided</scope> <type>pom</type> </dependency> </dependencies> </project> The base artifact depends on javax.mail:mail-1.4.jar, and ALL-DEPS depends on another version of the same library. Because the mail.jar from ALL-DEPS exists in the execution environment, although not exported, it collides with the mail.jar that exists on the parent, which is scoped as compile. A solution could be to get rid of mail.jar from the parent POM, but most of the projects that inherit base need it (as it is a transitive dependency of log4j). So what I would like to do is simply exclude the parent's library from the child project, as could be done if base were a dependency and not the parent POM: ... <dependency> <artifactId>base</artifactId> <groupId>es.uniovi.innova</groupId> <version>1.0.0</version> <type>pom</type> <exclusions> <exclusion> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> </exclusion> </exclusions> </dependency> ...

    Read the article

  • CSS3's border-radius property and border-collapse:collapse don't mix. How can I use border-radius to

    - by vamin
    Edit - Original Title: Is there an alternative way to achieve border-collapse:collapse in CSS (in order to have a collapsed, rounded corner table)? Since it turns out that simply getting the table's borders to collapse does not solve the root problem, I have updated the title to better reflect the discussion. I am trying to make a table with rounded corners using the CSS3 border-radius property. The table styles I'm using look something like this: table { -moz-border-radius:10px; -webkit-border-radius:10px; border-radius:10px} Here's the problem. I also want to set the border-collapse:collapse property, and when that is set border-radius no longer works (at least in Firefox)(edit- I thought this might just be a difference in mozilla's implementation, but it turns out this is the way it's supposed to work according to the w3c). Is there a CSS-based way I can get the same effect as border-collapse:collapse without actually using it? Edits: I've made a simple page to demonstrate the problem here (Firefox/Safari only). It seems that a large part of the problem is that setting the table to have rounded corners does not affect the corners of the corner td elements. If the table was all one color, this wouldn't be a problem since I could just make the top and bottom td corners rounded for the first and last row respectively. However, I am using different background colors for the table to differentiate the headings and for striping, so the inner td elements would show their rounded corners as well. Summary of proposed solutions: Surrounding the table with another element with round corners doesn't work because the table's square corners "bleed through." Specifying border width to 0 doesn't collapse the table. Bottom td corners still square after setting cellspacing to zero. Using javascript instead- works by avoiding the problem. Possible solutions: The tables are generated in php, so I could just apply a different class to each of the outer th/tds and style each corner separately. I'd rather not do this, since it's not very elegant and a bit of a pain to apply to multiple tables, so please keep suggestions coming. Possible solution 2 is to use javascript (jQuery, specifically) to style the corners. This solution also works, but still not quite what I'm looking for (I know I'm picky). I have two reservations: 1) this is a very lightweight site, and I'd like to keep javascript to the barest minimum 2) part of the appeal that using border-radius has for me is graceful degradation and progressive enhancement. By using border-radius for all rounded corners, I hope to have a consistently rounded site in CSS3-capable browsers and a consistently square site in others (I'm looking at you, IE). I know that trying to do this with CSS3 today may seem needless, but I have my reasons. I would also like to point out that this problem is a result of the w3c speficication, not poor CSS3 support, so any solution will still be relevant and useful when CSS3 has more widespread support.

    Read the article

  • Silverlight? WPF? or Windows Form?

    - by Amit
    Now that Silverlight 4.0 has been released alongside the new WPF, I am confused about these technologies: Silverlight? WPF? Windows Forms? The main goals we want to achieve for this BIG business project are the following: performance, security, and platform independence. If I consider all three points, then Silverlight is the only option, as I don't want people buying an emulator on MacOS for WPF or Windows Forms. As for how good Silverlight is for business applications: I was completely against it when Silverlight 2.0 was on the market, but now it is Silverlight 4.0 and they have provided many new features (though still the basics) that are required in any challenging business application. Comparing Silverlight and WPF: (1) Silverlight and WPF are both very new technologies, and if I had to compare the two I'd prefer WPF, because it can be considered stable and mature. But it is not the same as Windows Forms. (2) If I go with Silverlight, I can be sure we will keep updating to the latest version of Silverlight. I remember when we were developing software for version 2.0, we had to create our own framework with dynamically loaded DLLs and a navigation concept. Everything changed once Silverlight 3.0 came out. I don't want that to happen with this new product. (3) If we go with WPF, we don't get platform independence. So why not just focus on building for WPF and then move to Silverlight? Someone (Tim?) from Microsoft has said that the idea is to make Silverlight as close to WPF as possible. But if that is the case, why is the XAML structure different? I will not be convinced by the argument that the .NET framework for SL is too small... is the difference just coming from the namespaces? I was searching on this subject and found "Microsoft WPF-Silverlight Comparison Whitepaper v1.1.pdf". This guide is very good and gives you the ins and outs of how to build common apps that run on both. But again, it compares Silverlight 2, not 4. I am sure many architects/developers/project managers must be facing similar questions on their premises and would want to join this discussion, if it hasn't started already :). We've still got 2 weeks to make this decision, so I'm expecting everyone to participate, gurus?

    Read the article

  • Choosing the right .NET architecture. WCF? WPF/Forms, ASP.NET (MVC)?

    - by Tommy Jakobsen
    I’m in the situation that I have to design and implement a rather big system from the bottom. I’m having some (a lot actually) questions about the architecture that I would like your comments and thoughts on. I don’t hope that I’ve written too much here, but I wanted to give you all an idea of what the system is. Quick info about the applications, read if you want: I can’t share much detail about the project, but basically it’s a system where we offer our customer a service to manage their users. We have a hotline where the users call and our hotline uses an (windows) application (intranet) to manage the user’s data, etc. The customer also has a web application where they can see reports, information about their business and users, and the ability to modify their data. Modifying data is not just user data like address and so, but also information about the products/services the user has, which can be complicated. The applications will be built on Microsoft .NET Framework 4, with a MS SQL Server 2008 database. There will be a few applications that have to access this database, such as: Intranet application (used by us and our hotline) Customer web application type 1 Customer web application type 2 Customer web application type n different applications) … Now my big problem is what .NET parts I should use for such a system. For the “backend” I’ve considered using Windows Communication Foundation: Would WCF be a good choice? The intranet application will be an application that has to edit lots of records in the database. It has to be easy to navigate using the keyboard (fast to work with). Has a feature such as “find customer, find that, lookup this, choose this and update that”. What would be the best choice to develop this application in? Would it be WPF or good old Windows Forms? I don’t need all of the fancy graphics features in WPF, like 3D, but the application has to look nice (maybe something like the new Visual Studio/Office tools). And the same question goes for the web pages. They have much the same work to do, but not as many features as the intranet application, and not the same amount of data (much less). That is my questions for now. I’m hoping to get a discussion going that will open my eyes to some of these technologies, helping me decide architecture to go with. I would like to say thanks in advance, and let you all know that any thoughts will be much appreciated.

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?

    Read the article

  • generated service mock: everything but RhinoMocks fails?

    - by hko
    I have the "quest" of searching for the next mocking framework for my company, and basically it's down to NSubstitute (simplest syntax, but no strict mocks), FakeItEasy (best reviews, Roy Osherove bonus, and slightly better lib support than NSubstitute), and Moq (best "other libs support", biggest feature set, downside: mock.Object). We definitely want to move on from RhinoMocks, e.g. because of the unhelpful interaction-test error messages (it should tell me what the parameter was instead, when a verification fails). So I was pretty surprised the other day (that was yesterday) when I found out RhinoMocks could do something that every other mock framework fails at: mocking an autogenerated SomethingService (a typical VS autogenerated service with a default constructor in a partial class). Please don't argue about the design.. I intend to write lightweight integration tests (and some unit tests), and I can't mess around with the service; the product is installed on too many customers' systems. See this code: // here the NSubstitute and FakeItEasy equivalents throw an exception.. see below TicketStoreService fakeTicketStoreService = MockRepository.GenerateMock<TicketStoreService>(); fakeTicketStoreService.Expect(service => service.DoSomething(Arg.Is(new Guid())).Return(new Guid()); fakeTicketStoreService.DoSomething(Arg.Is(new Guid())); fakeTicketStoreService.VerifyAllExpectations(); Note that DoSomething is a non-virtual method call in an autogenerated class. So it shouldn't work, according to common knowledge. But it does. The problem is that it's the only (non-commercial) framework that can do this: Rhino.Mocks works, and verification works too. FakeItEasy says it doesn't find a default constructor (probably just the wrong exception message): No default constructor was found on the type SomeNamespace.TicketStoreService Moq gives something sane and understandable: Invalid setup on a non-virtual (overridable in VB) member: service=> service.DoSomething NSubstitute gives the message System.NotSupportedException: Cannot serialize member System.ComponentModel.Component.Site of type System.ComponentModel.ISite because it is an interface. I'm really wondering what's going on here with the frameworks, except Moq. The "fancy new" frameworks seem to have an initial perf hit too, probably preparing some Type cache and serializing stuff, whilst RhinoMocks somehow manages to create a very "slim" mock without recursion. I have to admit I didn't like RhinoMocks very much, but here it shines.. unfortunately. So, is there a way to get that to work with the newer (non-commercial!) mocking frameworks, or to somehow get a sane error message out of Rhino.Mocks? And why can Rhino.Mocks achieve this, when clearly every mocking framework states it can only work with virtual methods when given a concrete class? Let's not derail the discussion by talking about alternative approaches like Extract & Override or runtime-proxy mocking frameworks like JustMock/TypeMock/Moles or the new Fakes framework; I know these, but they would be less ideal solutions, for reasons beyond this topic. Any help appreciated..

    Read the article

  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and would like to see how. Can we prove that comment true, for C, too? NB: this doesn't mean hex to ASCII binary - specifically the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore white space. edit (by Brian Campbell) May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think these are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful. The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout) The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX. The program must compile or run without any special options passed to the compiler or interpreter (so, 'gcc myprog.c' or 'python myprog.py' or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count). The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with most significant nibble first. The behavior of the program on invalid input (characters besides [a-fA-F \t\r\n], spaces separating the two characters in an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, treating a single character as the value of one byte, are all OK) The program may write no additional bytes to output. Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on lowest number of lines of code; I would impose an 80 character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line).
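
    For anyone golfing against the rules above, an ungolfed reference implementation may be useful as a behavioural baseline. This is a plain Python sketch (not one of the submitted answers): it reads all of stdin, drops whitespace, and writes the decoded octets to stdout.

        import sys

        # Join the hex digits after dropping all whitespace, then decode pairs
        # of digits to raw bytes. Malformed input simply raises ValueError,
        # which the rules permit.
        data = sys.stdin.read()
        sys.stdout.buffer.write(bytes.fromhex("".join(data.split())))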

    Read the article

  • How is "clean" testing done on the Macintosh without virtualization?

    - by Schnapple
    One of the things I've run across on Windows is when a web browser plugin or program you're developing makes an assumption that something is installed that, by default, isn't always present on Windows. A perfect example would be .NET - a whole lot of people running Windows XP have never installed any versions of .NET and so the installer needs to detect and remedy this if necessary. The way I've been testing this in Windows is to have a virtual machine with a snapshot of a clean, patched, but otherwise untouched install of XP or Vista or 7 or whatever. When I'm done testing I just discard any changes since the snapshot. Works great. I'm now developing something for the Macintosh, a platform which is very new to me, and I'm seeing that virtualization does not appear to be an option. It's explicitly forbidden in the EULA of Mac OS X, it's only allowed from Mac OS X Server, which seeing as how I'm targeting an end product is of no use to me, and the one program I see which can virtualize it - VirtualBox - only supports the server and actively nukes any discussion of running the consumer/client version of Mac OS X. And the only instructions I find anywhere on the topic seem to involve the use of "hacking" programs which is very much incompatible with the full-time gig I'm trying to do this for. So it looks like virtualization is out, but at various points I'm going to want or need to simulate what it's like to install and run this software on a "clean" Macintosh. How do people usually do this? Just buy multiple Macintoshes and use Time Machine? Am I thinking about this all wrong and everything Just Works? To be clear I'm not trying to run Mac OS X on a Windows machine. I have a Macintosh, I'm fine with virtualizing Mac OS X on Apple hardware, I'm just not seeing a route to making the non-Server version do this. I'm aware that Mac OS X Server can be virtualized but that's not what I'm going for. I'm aware that there are unsanctioned/unsupported methods of making Mac OS X run in virtualization programs like VirtualBox but for legal reasons I am not interested in those. My question is not "how can I do this?" but rather "so this thing I do on Windows seems to not be possible, generally, on the Macintosh, so what do people do to achieve what I'm going for?"

    Read the article

  • gcc, strict-aliasing, and casting through a union

    - by Joseph Quinsey
    About a year ago the following paragraph was added to the GCC Manual, version 4.3.4, regarding -fstrict-aliasing: Similarly, access by taking the address, casting the resulting pointer and dereferencing the result has undefined behavior [emphasis added], even if the cast uses a union type, e.g.: union a_union { int i; double d; }; int f() { double d = 3.0; return ((union a_union *)&d)->i; } Does anyone have an example to illustrate this undefined behavior? Note this question is not about what the C99 standard says, or does not say. It is about the actual functioning of gcc, and other existing compilers, today. My simple, naive, attempt fails. For example: #include <stdio.h> union a_union { int i; double d; }; int f1(void) { union a_union t; t.d = 3333333.0; return t.i; // gcc manual: 'type-punning is allowed, provided ...' } int f2(void) { double d = 3333333.0; return ((union a_union *)&d)->i; // gcc manual: 'undefined behavior' } int main(void) { printf("%d\n", f1()); printf("%d\n", f2()); return 0; } works fine, giving on CYGWIN: -2147483648 -2147483648 Also note that taking addresses is obviously wrong (or right, if you are trying to illustrate undefined behavior). For example, just as we know this is wrong: extern void foo(int *, double *); union a_union t; t.d = 3.0; foo(&t.i, &t.d); // UD behavior so is this wrong: extern void foo(int *, double *); double d = 3.0; foo(&((union a_union *)&d)->i, &d); // UD behavior For background discussion about this, see for example: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1422.pdf http://gcc.gnu.org/ml/gcc/2010-01/msg00013.html http://davmac.wordpress.com/2010/02/26/c99-revisited/ http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html http://stackoverflow.com/questions/98650/what-is-the-strict-aliasing-rule http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041 The first link, draft minutes of an ISO meeting seven months ago, notes in section 4.16: Is there anybody that thinks the rules are clear enough? No one is really able to interpret tham.

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found. Models for hierarchical data: slides with good explanations of tradeoffs and example usage Representing hierarchies in MySQL: very good overview of Nested Set in particular Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way on explanation Options Ones I am aware of and general features: Adjacency List: Columns: ID, ParentID Easy to implement. Cheap node moves, inserts, and deletes. Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve). Use Common Table Expressions in those databases that support them to traverse. Nested Set (a.k.a Modified Preorder Tree Traversal) First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties Columns: Left, Right Cheap level, ancestry, descendants Compared to Adjacency List, moves, inserts, deletes more expensive. Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work. Nested Intervals Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information. Bridge Table (a.k.a. Closure Table: some good ideas about how to use triggers for maintaining this approach) Columns: ancestor, descendant Stands apart from table it describes. Can include some nodes in more than one hierarchy. Cheap ancestry and descendants (albeit not in what order) For complete knowledge of a hierarchy needs to be combined with another option. Flat Table A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record. Expensive move and delete Cheap ancestry and descendants Good Use: threaded discussion - forums / blog comments Lineage Column (a.k.a. Materialized Path, Path Enumeration) Column: lineage (e.g. /parent/child/grandchild/etc...) Limit to how deep the hierarchy can be. Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path') Ancestry tricky (database specific queries) Database Specific Notes MySQL Use session variables for Adjacency List Oracle Use CONNECT BY to traverse Adjacency Lists PostgreSQL ltree datatype for Materialized Path SQL Server General summary 2008 offers HierarchyId data type appears to help with Lineage Column approach and expand the depth that can be represented.
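
    To make the Adjacency List option concrete, here is a small assumed Python sketch (column names and row shape are illustrative only) that turns (id, parent_id, name) rows from such a table into a nested structure in memory; the other models differ mainly in how ancestry is stored, not in this kind of client-side assembly.

        def build_tree(rows):
            # rows: iterable of (id, parent_id, name) tuples from an Adjacency List table
            nodes = {rid: {"id": rid, "name": name, "children": []} for rid, _, name in rows}
            roots = []
            for rid, parent_id, _ in rows:
                if parent_id is None:
                    roots.append(nodes[rid])
                else:
                    nodes[parent_id]["children"].append(nodes[rid])
            return roots

        rows = [(1, None, "root"), (2, 1, "child a"), (3, 1, "child b"), (4, 2, "grandchild")]
        print(build_tree(rows))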

    Read the article

  • Trapping MySQL Warnings on Calls Wrapped in Classes -- Python

    - by chernevik
    I can't get Python's try/else blocks to catch MySQL warnings when the execution statements are wrapped in classes. I have a class that has as a MySQL connection object as an attribute, a MySQL cursor object as another, and a method that run queries through that cursor object. The cursor is itself wrapped in a class. These seem to run queries properly, but the MySQL warnings they generate are not caught as exceptions in a try/else block. Why don't the try/else blocks catch the warnings? How would I revise the classes or method calls to catch the warnings? Also, I've looked through the prominent sources and can't find a discussion that helps me understand this. I'd appreciate any reference that explains this. Please see code below. Apologies for verbosity, I'm newbie. #!/usr/bin/python import MySQLdb import sys import copy sys.path.append('../../config') import credentials as c # local module with dbase connection credentials #============================================================================= # CLASSES #------------------------------------------------------------------------ class dbMySQL_Connection: def __init__(self, db_server, db_user, db_passwd): self.conn = MySQLdb.connect(db_server, db_user, db_passwd) def getCursor(self, dict_flag=True): self.dbMySQL_Cursor = dbMySQL_Cursor(self.conn, dict_flag) return self.dbMySQL_Cursor def runQuery(self, qryStr, dict_flag=True): qry_res = runQueryNoCursor(qryStr=qryStr, \ conn=self, \ dict_flag=dict_flag) return qry_res #------------------------------------------------------------------------ class dbMySQL_Cursor: def __init__(self, conn, dict_flag=True): if dict_flag: dbMySQL_Cursor = conn.cursor(MySQLdb.cursors.DictCursor) else: dbMySQL_Cursor = conn.cursor() self.dbMySQL_Cursor = dbMySQL_Cursor def closeCursor(self): self.dbMySQL_Cursor.close() #============================================================================= # QUERY FUNCTIONS #------------------------------------------------------------------------------ def runQueryNoCursor(qryStr, conn, dict_flag=True): dbMySQL_Cursor = conn.getCursor(dict_flag) qry_res =runQueryFnc(qryStr, dbMySQL_Cursor.dbMySQL_Cursor) dbMySQL_Cursor.closeCursor() return qry_res #------------------------------------------------------------------------------ def runQueryFnc(qryStr, dbMySQL_Cursor): qry_res = {} qry_res['rows'] = dbMySQL_Cursor.execute(qryStr) qry_res['result'] = copy.deepcopy(dbMySQL_Cursor.fetchall()) qry_res['messages'] = copy.deepcopy(dbMySQL_Cursor.messages) qry_res['query_str'] = qryStr return qry_res #============================================================================= # USAGES qry = 'DROP DATABASE IF EXISTS database_of_armaments' dbConn = dbMySQL_Connection(**c.creds) def dbConnRunQuery(): # Does not trap an exception; warning displayed to standard error. try: dbConn.runQuery(qry) except: print "dbConn.runQuery() caught an exception." def dbConnCursorExecute(): # Does not trap an exception; warning displayed to standard error. dbConn.getCursor() # try/except block does catches error without this try: dbConn.dbMySQL_Cursor.dbMySQL_Cursor.execute(qry) except Exception, e: print "dbConn.dbMySQL_Cursor.execute() caught an exception." print repr(e) def funcRunQueryNoCursor(): # Does not trap an exception; no warning displayed try: res = runQueryNoCursor(qry, dbConn) print 'Try worked. %s' % res except Exception, e: print "funcRunQueryNoCursor() caught an exception." 
print repr(e) #============================================================================= if __name__ == '__main__': print '\n' print 'EXAMPLE -- dbConnRunQuery()' dbConnRunQuery() print '\n' print 'EXAMPLE -- dbConnCursorExecute()' dbConnCursorExecute() print '\n' print 'EXAMPLE -- funcRunQueryNoCursor()' funcRunQueryNoCursor() print '\n'
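
    One point that may explain the behaviour described above: MySQLdb reports warnings through Python's warnings module rather than by raising them, so a try/except around execute() never fires, no matter how the cursor is wrapped. Below is a hedged sketch of the usual workaround; the connection details are placeholders and it assumes the MySQLdb 1.2-era warning behaviour.

        import warnings
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret")  # placeholders
        cur = conn.cursor()

        # Promote MySQLdb warnings (e.g. dropping a database that does not exist)
        # to exceptions so the surrounding try/except actually catches them.
        warnings.filterwarnings("error", category=MySQLdb.Warning)
        try:
            cur.execute("DROP DATABASE IF EXISTS database_of_armaments")
        except MySQLdb.Warning as e:
            print("caught a MySQL warning: %r" % (e,))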

    Read the article

  • Cloning a selector + all its children in jQuery?

    - by HipHop-opatamus
    I'm having trouble getting the following JQuery script to function properly - its functionality is as follows: 1) Hide the content below each headline 2) Upon clicking a headline, substitute the "#first-post" with the headline + the hidden content below the headline. I can only seem to get the script to clone the headline itself to #first-post, not the headline + the content beneath it. Any idea why? <HTML> <HEAD> <script src="http://code.jquery.com/jquery-latest.js"></script> </HEAD> <script> $(document).ready( function(){ $('.title').siblings().hide(); $('.title').click( function() { $('#first-post').replaceWith($(this).closest(".comment").clone().attr('id','first-post')); $('html, body').animate({scrollTop:0}, 'fast'); return false; }); }); </script> <BODY> <div id="first-post"> <div class="content"><p>This is a test discussion topic</p> </div> </div> <div class="comment"> <h2 class="title"><a href="#1">1st Post</a></h2> <div class="content"> <p>this is 1st reply to the original post</p> </div> <div class="test">1st post second line</div> </div> <div class="comment"> <h2 class="title"><a href="#2">2nd Post</a></h2> <div class="content"> <p>this is 2nd reply to the original post</p> </div> <div class="test">2nd post second line</div> </div> </div> <div class="comment"> <h2 class="title"><a href="#3">3rd Post</a></h2> <div class="content"> <p>this is 3rd reply to the original post</p> </div> <div class="test">3rd post second line</div> </div> </div> </BODY> </HTML>

    Read the article

  • asp:Button is not calling server-side function

    - by Richard Neil Ilagan
    Hi guys, I know that there has been much discussion here about this topic, but none of the threads I got across helped me solve this problem. I'm hoping that mine is somewhat unique, and may actually merit a different solution. I'm instantiating an asp:Button inside a data-bound asp:GridView through template fields. Some of the buttons are supposed to call a server-side function, but for some weird reason, it doesn't. All the buttons do when you click them is fire a postback to the current page, doing nothing, effectively just reloading the page. Below is a fragment of the code: <asp:GridView ID="gv" runat="server" AutoGenerateColumns="false" CssClass="l2 submissions" ShowHeader="false"> <Columns> <asp:TemplateField> <ItemTemplate><asp:Panel ID="swatchpanel" CssClass='<%# Bind("status") %>' runat="server"></asp:Panel></ItemTemplate> <ItemStyle Width="50px" CssClass="sw" /> </asp:TemplateField> <asp:BoundField DataField="description" ReadOnly="true"> </asp:BoundField> <asp:BoundField DataField="owner" ReadOnly="true"> <ItemStyle Font-Italic="true" /> </asp:BoundField> <asp:BoundField DataField="last-modified" ReadOnly="true"> <ItemStyle Width="100px" /> </asp:BoundField> <asp:TemplateField> <ItemTemplate> <asp:Button ID="viewBtn" cssclass='<%# Bind("sid") %>' runat="server" Text="View" OnClick="viewBtnClick" /> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> The viewBtn above should call the viewBtnClick() function on server-side. I do have that function defined, along with a proper signature (object,EventArgs). One thing that may be of note is that this code is actually inside an ASCX, which is loaded in another ASCX, finally loaded into an ASPX. Any help or insight into the matter will be SO appreciated. Thanks! (oh, and please don't mind my trashy HTML/CSS semantics - this is still in a very,very early stage :p)

    Read the article

  • SQL Server architecture guidance

    - by Liam
    Hi, We are designing a new version of our existing product on a new schema. It's an internal web application with possibly 100 concurrent users (max). This will run on a SQL Server 2008 database. One of the discussion items recently is whether we should have a single database or split the database across 2 separate databases for performance reasons. The database could grow anywhere from 50-100GB over 5 years. We are developers and not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple as it depends on the schema, archiving policy, amount of data etc.] Option 1 Single Main Database [This is my preferred option]. The plan would be to have all the tables in a single database and possibly to use file groups and partitioning to separate the data across multiple disks if required. [Use schemas if appropriate]. This should deal with the performance concerns. One of the comments wrt this was that a single server instance would still be processing this data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed. Option 2 Split the database into 2 separate databases DB1 - Customers, Accounts, Customer resources etc DB2 - This would contain the bulk of the data [i.e. vehicle tracking data, financial transaction tables etc]. These tables would typically contain a lot of data. [It could reside on a separate server if required] This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only, transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder for referential integrity to be enforced.] Points for consideration As we are at the design stage, we can at least make proper use of indexes to deal with performance issues, so that's why option 1 is attractive to me, and it's more of a standard approach. For both options we are considering implementing an archiving database. Apologies for the long question. In summary, the question is 1 DB or 2? Thanks in advance, Liam

    Read the article

  • sed - trying to replace first occurrence after a match

    - by wakkaluba
    I am facing a situation that drives me nuts. I am setting up an update server which uses a json file. Don't ask why or how, it sucks and is my only possibility to achieve it. I have been trying and researching for HOURS (many) because I went ballistic and wanted to crack this on my own. But I have to realize I got stuck and need help. So sorry for this chunk but I think it is somewhat important to see... The file is a one liner and repeating the following sequence with changing values (of course). "plugin_name_foo_bar": {"buildDate": "bla", "dependencies": [{"name": "bla", "optional": true, "version": "1.00"}], "developers": [{"developerId": "bla", "email": "[email protected]", "name": "Bla bla2nd"}], "excerpt": "some text {excerpt} !bla.png|thumbnail,border=1! ", "gav": "bla", "labels": ["report", "scm-related"], "name": "plugin_name_foo_bar", "previousTimestamp": "bla", "previousVersion": "1.0", "releaseTimestamp": "bla", "requiredCore": "1", "scm": "github.com", "sha1": "ynnBM2jWo25ZLDdP3ybBOnV/Pio=", "title": "bla", "url": "http://bla.org", "version": "1.0", "wiki": "https://bla.org"}, "Exclusion": {"buildDate": "bla", "dependencies": [], and the next plugin block is glued straight afterwards. What I now want to do is to search for "plugin_foo_bar": {" as this is the unique identifier for a new plugin description block. I want to replace the first sha1 value occuring afterwards. That's where I keep failing. I always grab the first,last or any occurrence in the entire file and not the block :( "title" is the unique identifier after the sha1 value. So I tried to make the .* less greedy but it ain't working out. last attempt was heading towards: sed -i 's/("name": "plugin_name_foo_bar.*sha1": ")([a-zA-Z0-9!@#\$%^&*()\[\]]*)(", "title"\)/\1blablabla\2/1' default.json to find the sha1 value of that plugin but still no joy. I hope someone knows - preferably a simpler approach - before I now continue with trial and error until I have to puke and freakout. I am working with SED on Windows, so Unix approach might help me to figure out how to achieve this in batch but please make it as one-liner if possible. Scripts are a real pain to convert. And I just need SED and no other solution with other tools like AWK. That is absolutely out of discussion. Any help is appreciated :) Cheers Jan

    Read the article

  • How do I 'globally' catch exceptions thrown in object instances.

    - by SleepyBobos
    I am currently writing a WinForms application (C#). I am making use of the Enterprise Library Exception Handling Block, following a fairly standard approach from what I can see, i.e. in the Main method of Program.cs I have wired up an event handler to the Application.ThreadException event etc. This approach works well and handles the application's exceptional circumstances. In one of my business objects I throw various exceptions in the Set accessor of one of the object's properties: set { if (value > MaximumTrim) throw new CustomExceptions.InvalidTrimValue("The value of the minimum trim..."); if (!availableSubMasterWidthSatisfiesAllPatterns(value)) throw new CustomExceptions.InvalidTrimValue("Another message..."); _minimumTrim = value; } My logic for this approach (without turning this into a 'when to throw exceptions' discussion) is simply that the business objects are responsible for checking business rule constraints and throwing an exception that can bubble up and be caught as required. It should be noted that in the UI of my application I do explicitly check the values that the public property is being set to (and take action there, displaying a friendly dialog etc), but by throwing the exception I am also covering the situation where my business object may not be used by a UI, e.g. the property is being set by another business object. Anyway, I think you all get the idea. My issue is that these exceptions are not being caught by the handler wired up to Application.ThreadException and I don't understand why. From other reading I have done, the Application.ThreadException event and its handler "... catches any exception that occurs on the main GUI thread". Are the exceptions being raised in my business object not in this thread? I have not created any new threads. I can get the approach to work if I update the code as follows, explicitly calling the event handler that is wired to Application.ThreadException. This is the approach outlined in the Enterprise Library samples. However, this approach requires me to wrap any exceptions thrown in a try catch, something I was trying to avoid by using a 'global' handler to start with. try { if (value > MaximumTrim) throw new CustomExceptions.InvalidTrimValue("The value of the minimum..."); if (!availableSubMasterWidthSatisfiesAllPatterns(value)) throw new CustomExceptions.InvalidTrimValue("Another message"); _minimumTrim = value; } catch (Exception ex) { Program.ThreadExceptionHandler.ProcessUnhandledException(ex); } I have also investigated wiring a handler up to the AppDomain.UnhandledException event, but this does not catch the exceptions either. It would be good if someone could explain to me why my exceptions are not being caught by my global exception handler in the first code sample. Is there another approach I am missing, or am I stuck with wrapping code in try catch, as shown above, where required?

    Read the article

  • Android: How to properly exit application when inconsistent condition is unavoidable?

    - by Bevor
    First of all I already read about all this discussion that it isn't a good idea to manually exit an Android application. But in my case it seems to be needed. I have an AsyncTask which does a lof of operations in background. That means downloading data, saving it to local storage and preparing it for usage in application. It could happen that there is no internet connection or something different happens. For all that cases I have an Exception handling which returns the result. And if there is an exception, the application is unusable so I need to exit it. My question is, do I have to do some unregistration unloading or unbinding tasks or something when I exit the application by code or is System.exit(0) ok? I do all this in an AsyncTask, see my example: public class InitializationTask extends AsyncTask<Void, Void, InitializationResult> { private ProcessController processController = new ProcessController(); private ProgressDialog progressDialog; private Activity mainActivity; public InitializationTask(Activity mainActivity) { this.mainActivity = mainActivity; } @Override protected void onPreExecute() { super.onPreExecute(); progressDialog = new ProgressDialog(mainActivity); progressDialog.setMessage("Die Daten werden aufbereitet.\nBitte warten..."); progressDialog.setIndeterminate(true); //means that the "loading amount" is not measured. progressDialog.setCancelable(false); progressDialog.show(); }; @Override protected InitializationResult doInBackground(Void... params) { return processController.initializeData(); } @Override protected void onPostExecute(InitializationResult result) { super.onPostExecute(result); progressDialog.dismiss(); if (!result.isValid()) { AlertDialog.Builder dialog = new AlertDialog.Builder(mainActivity); dialog.setTitle("Initialisierungsfehler"); dialog.setMessage(result.getReason()); dialog.setPositiveButton("Ok", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); //TODO cancel application System.exit(0); } }); dialog.show(); } } }

    Read the article

  • How to custom query using ORM in Fuelphp?

    - by viyancs
    I have a problem when I want to query a table using the ORM. For example, I have an article table with the fields id, author, and text. My code looks like this: // Single where $article = Model_Article::find()->where('id', 4); print_r($article); That code will fetch all fields on the article table; it's like select * from article where id = 4 Try Possibility $article = Model_Article::find(null, array('id','title'))->where('id', 3); the response is object(Orm\Query)#89 (14) { ["model":protected]=> string(10) "Model_Article" ["connection":protected]=> NULL ["view":protected]=> NULL ["alias":protected]=> string(2) "t0" ["relations":protected]=> array(0) { } ["joins":protected]=> array(0) { } ["select":protected]=> array(1) { ["t0_c0"]=> string(5) "t0.id" } ["limit":protected]=> NULL ["offset":protected]=> NULL ["rows_limit":protected]=> NULL ["rows_offset":protected]=> NULL ["where":protected]=> array(1) { [0]=> array(2) { [0]=> string(9) "and_where" [1]=> array(3) { [0]=> string(5) "t0.id" [1]=> string(1) "=" [2]=> int(3) } } } ["order_by":protected]=> array(0) { } ["values":protected]=> array(0) { } } which does not return the id or title field. But when I try adding the get_one() method $article = Model_Article::find(null, array('id','title'))->where('id', 3)->get_one(); id is returned, but title and the other fields are not, and I don't know why. Reference ORM Discussion FuelPHP It says the ORM will currently select all columns, with no plans to change that at the moment. My Goal I want to query in the ORM like this select id,owner from article where id = 4 so that it returns only id & owner. How can I get that using the ORM?

    Read the article
