Search Results

Search found 13004 results on 521 pages for 'pretty printing'.


  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes v1. To be more specific, we’ve picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We’ll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn’t. If you’ve got any questions about this (or what we’re working on right now), feel free to ask me in the comments below. I’ve had a few people express an interest in the process we’re going through, and I’m more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here’s a quick flashback to bring you all up to speed. A Brief Retrospective As you may already know, we’re creating a new publishing asset specifically focused on providing great content for web developers. We don’t yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is usefully different. For my part, I’m seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing which just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that’ll get better as we run more trials. Of course, there are a few things we’ve been pondering at this early conceptual stage: Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we’re familiar with) & branch out later? There are challenges with either approach. What publishing "modes" are already being well-handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it. Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and how we manage the different types of content we (will) have. Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to? What kind of users do we want to provide for? This actually deserves a little bit of unpacking… Why are you here? We currently divide readers into (broadly) these categories: Category 1: I know nothing about X, and I’d like to learn about it. Category 2: I know something about X, but I’d like to learn how to do something specific with it. Category 3: Ah man, I have a problem with X, and I need to fix it now. Now that I think about it, I might also include a 4th class of reader: Category 4: I’m looking for something interesting to engage my brain. These are clearly task-based categorizations, and depending on which task you’re performing when you arrive here, you’re going to need different types of content, or will have specific discovery needs.
One of the questions that’s at the back of my mind whenever I consider a new idea is “How many of the categories will this satisfy?” As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it’s not necessary to satisfy every category need to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately. < / Flashback > We don’t have clean answers to most of these considerations – they’re things we’re aware of, and each idea we look at is going to be best suited to a different mix of the options I’ve described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I’ve just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or just have an idea you’d like to share, I’d love to hear from you.

    Read the article

  • How to Hashtag (Without Being #Annoying)

    - by Mike Stiles
    The right tool in the wrong hands can be a dangerous thing. Giving a chimpanzee a chain saw would not be a pretty picture. And putting Twitter hashtags in the hands of social marketers who were never really sure how to use them can be equally unattractive. Boiled down, hashtags are for search and organization of tweets. A notch up from that, they can also be used as part of a marketing strategy. In terms of search, if you’re in the organic apple business, you want anyone who searches “organic” on Twitter to see your posts about your apples. It’s keyword tactics not unlike web site keyword search tactics. So get a clear idea of what keywords are relevant for your tweet. It’s reasonable to include #organic in your tweet. Is it fatal if you don’t hashtag the word? It depends on the person searching. If they search “organic,” your tweet’s going to come up even if you didn’t put the hashtag in front of it. If the searcher enters “#organic,” your tweet needs the hashtag. Err on the side of caution and hashtag it so it comes up no matter how the searcher enters it. You’ll also want to hashtag it for the second big reason people hashtag, organization. You can follow a hashtag. So can the rest of the Twitterverse. If you’re that into organic munchies, you can set up a stream populated only with tweets hashtagged #organic. If you’ve established a hashtag for your brand, like #nobugsprayapples, you (and everyone else) can watch what people are tweeting about your company. So what kind of hashtags should you include? They should be directly related to the core message of your tweet. Ancillary or very loosely-related hashtags = annoying. Hashtagging your brand makes sense. Hashtagging your core area of interest makes sense. Creating a specific event or campaign hashtag you want others to include and spread makes sense (the burden is on you to promote it and get it going). Hashtagging nearly every word in the tweet is highly annoying. Far and away, the majority of hashtagged words in such tweets have no relevance, are not terms that would be searched, and are not terms needed for categorization. It looks desperate and spammy. Two is fine. One is better. And it is possible to tweet with --gasp-- no hashtags! Make your hashtags as short as you can. In fact, if your brand’s name really is #nobugsprayapples, you’re burning up valuable, limited characters and risking the inability of others to retweet with added comments. Also try to narrow your topic hashtag down. You’ll find a lot of relevant users with #organic, but a lot of totally uninterested users with #food. Just as you can join online forums and gain credibility and a reputation by contributing regularly to that forum, you can follow hashtagged topics and gain the same kind of credibility in your area of expertise. Don’t just parachute in for the occasional marketing message. And if you’re constantly retweeting one particular person, stop it. It’s kissing up and it’s obvious. Which brings us to the king of hashtag annoyances, “hashjacking.” This is when you see what terms are hot and include them in your marketing tweet as a hashtag, even though it’s unrelated to your content. Justify it all you want, but #justinbieber has nothing to do with your organic apples. Equally annoying, piggybacking on a popular event’s hashtag to tweet something not connected to the event. You’re only fostering ill will and mistrust toward your account from the people you’ve tricked into seeing your tweet. 
Lastly, don’t @ mention people just to make sure they see your tweet. If the tweet’s not for them or about them, it’s spammy. What I haven’t covered is use of the hashtag for comedy’s sake. You’ll see this a lot, and it’s a matter of personal taste. No one will search these hashtagged terms or need to categorize them; they’re just there for self-expression and laughs. Twitter is, after all, supposed to be fun.  What are some of your biggest Twitter pet peeves? #blogsovernow

    Read the article

  • Is it feasible and useful to auto-generate some code of unit tests?

    - by skiwi
    Earlier today I have come up with an idea, based upon a particular real use case, which I would want to have checked for feasibility and usefulness. This question will feature a fair chunk of Java code, but can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code. The idea Make unit testing less cumbersome by adding in some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better than first creating code and then immediately thereafter the tests. This may even be adapted to fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests. public class PutMonsterOnFieldAction implements PlayerAction { private final int handCardIndex; private final int fieldMonsterIndex; public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) { this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex"); this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex"); } @Override public boolean isActionAllowed(final Player player) { Objects.requireNonNull(player, "player"); Hand hand = player.getHand(); Field field = player.getField(); if (handCardIndex >= hand.getCapacity()) { return false; } if (fieldMonsterIndex >= field.getMonsterCapacity()) { return false; } if (field.hasMonster(fieldMonsterIndex)) { return false; } if (!(hand.get(handCardIndex) instanceof MonsterCard)) { return false; } return true; } @Override public void performAction(final Player player) { Objects.requireNonNull(player); if (!isActionAllowed(player)) { throw new PlayerActionNotAllowedException(); } Hand hand = player.getHand(); Field field = player.getField(); field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex)); } } We can observe the need for the following tests: Constructor test with valid input Constructor test with invalid inputs isActionAllowed test with valid input isActionAllowed test with invalid inputs performAction test with valid input performAction test with invalid inputs My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to ensure a number of conditions and check whether it really returns false; this can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch. The implementation by example User clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that holds. (I haven't added the relevant code as that may clutter the post ultimately) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the capacity private int of Hand needs to be at least 1. It searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument. It uses 1 for this.
Some more work needs to be done to successfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. It has found the hand with the least capacity possible and is able to construct it. Now to invalidate the test it will need to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch) What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees". The tool knows that it needs to change a certain variable to a different value in order to get the correct testcase. Hence it will need to list all possible ways to change it, without using reflection obviously. What this tool will not replace is the need to create tailored unit tests that test all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints. My questions: Is creating such a tool feasible? Would it ever work, or are there some obvious problems? Would such a tool be useful? Is it even useful to automatically generate these testcases at all? Could it be extended to do even more useful things? Does, by chance, such a project already exist and would I be reinventing the wheel? If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it depending on the time. For people searching for more background information about the used Player and Hand classes in my example, please refer to this repository. At the time of writing the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.

    Read the article

  • Are there Negative Impacts of Open Source on a Commercial Environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow but wasn't sure if it was good for this site also, so let me know if it's not and I'll delete it. I love programming for fun but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products and there are a few that are not strategic to us, so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source (we would be the first firm to try this) and the feedback I'm getting from my management team is either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points because I think transparency will only grow our industry and our firm (think of McDonalds/KFC sharing their recipe openly; people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people who have seen the positive effects of sharing and the business people who think we'll be giving up everything (they prefer we sell the parts we want to open source, but in their defense this is standard when divesting something). Our industry is very secretive and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. Yet I've seen so many amazing technologies created in interesting ways just by giving people freedom to take apart code and put it back together. I'm interested in hearing people's thoughts (it doesn't have to be about my specific situation, I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make) because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well) Update: Long story short, although code is involved this is not so much about code as it is more about the idea of open sourcing. We are a mid-sized quant hedge fund. We have some unique strategies but also have the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have but the other stuff that everyone else has we are considering open sourcing (we have put years of work & millions of dollars into these. Our funds are pretty popular and our performance is either in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).
We are also considering, after we move our clients out, letting the software still run and output the resulting portfolios for free as well, so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we put a GPL license on it then as people learn from what we did, we can learn from what they do as well and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. This article has proved pretty popular, so as a follow up I wanted to cover another "Adaptive" feature of your ADF applications, the ability to make multiple different connections from an Application Module, at runtime. Now, I'm sure you'll be aware that if you define your application to use a data-source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data-source after deployment to point to a different database. So that's great, but the reality of that is that this single connection is effectively fixed within the application, right?  Well no, this, it turns out, is a common misconception. To be clear, yes a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections.  In fact this has been possible for a long time using a custom extension point, with code that extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code and no-one likes to write any more code than they need to, so, is there an easier way? Yes indeed.  It is in fact a little-publicized feature that's available in all versions of 11g, the ELEnvInfoProvider. What Does it Do?  The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your ApplicationModule configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can also set it directly in the bc4j.xcfg (see below for an example). Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a datasource, you can instead plug in an EL expression for the connection to use.  So what's the benefit of that? Well it allows you to defer the selection of a connection until the point in time that you instantiate the AM. To define the expression itself you'll need to do a couple of things: First of all you'll need a managed bean of some sort – e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this)   The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.
So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier) <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">   <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">   <AppModuleConfig DeployPlatform="LOCAL"  JDBCName="${connectionManager.connection}" jbo.project="oracle.demo.model.Model" name="TargetAppModuleLocal" ApplicationName="oracle.demo.model.TargetAppModule"> <AM-Pooling jbo.doconnectionpooling="true"/> <Database jbo.locking.mode="optimistic">       <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>    <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/> </AppModuleConfig> </AppModuleConfigBag> </BC4JConfig> Still Don't Quite Get It? So far you might be thinking, well that's fine but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account. However, think about it for a second.  Under what circumstances does a new AM get instantiated? Well at the first reference of the AM within the application yes, but also whenever a Task Flow is entered - if the data control scope for the Task Flow is ISOLATED.  So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently. Hopefully you'll find this feature useful, let me know...

    Read the article

  • What's New in SGD 5.1?

    - by Fat Bloke
    Oracle announced the latest version of Secure Global Desktop (SGD) this week with 3 major themes: Support for Android devices; Support for Desktop Chrome clients;  Support for Oracle Unified Directory. I'll talk about the new features in a moment, but a bit of context first: Oracle SGD - what, how and why?  Oracle Secure Global Desktop is Oracle's secure remote access product which allows users on almost any device to access almost any type of application which is hosted in the data center, from almost any location. And it does this by sitting on the edge of the datacenter, between the user and the applications: This is actually a really smart environment for an increasing number of use cases where: Users need mobility of location AND device (i.e. work from anywhere); IT needs to ensure security of applications and data (of course!); The application requires an end-user environment which can't be guaranteed and IT may not own the client platform (e.g. BYOD, working from home, partners or contractors). Oracle has a specific interest in this of course. As the leading supplier of enterprise applications, many of Oracle's customers, and indeed Oracle itself, fit these criteria. So, as an IT guy rolling out an application to your employees, if one of your apps absolutely needs, say, IE10 with Java 6 update 32, how can you be sure that the user population has this, especially when they're using their own devices? In the SGD model you, the IT guy, can set up, say, a Windows Server running the exact environment required, and then use SGD to publish this app, without needing to worry any further about the device the end user is using. What's new?  So back to SGD 5.1 and what is new there: Android devices Since we introduced our support for iPad tablets in SGD 5.0 we've had a big demand from customers to extend this to Android tablets too, and so we're pleased to announce that 5.1 supports Android 4.x tablets such as Nexus 7 and 10, and the Galaxy Tab. Here's how it works, with screenshots from my Nexus 7: Simply point your browser to the SGD server URL and login; The workspace is the list of apps that the admin has deemed ok for you to run. You click on an application to run it (here's Excel and Oracle E-Business Suite): There's an extended on-screen keyboard (extended because desktop apps need keys that don't appear on a tablet keyboard such as ctrl, Window key, etc) and touch gestures can be mapped to desktop events (such as tap and hold to right click) All in all a pretty nice implementation for Android tablet users. Desktop Chrome Browsers SGD has always been designed around using a browser to access your applications. But traditionally, this has involved using Java to deliver the SGD client component. With HTML5 and Javascript engines becoming so powerful, we thought we'd see how well a pure web client could perform with desktop apps. And the answer was, surprisingly well. So with this release we now offer this additional way of working, which can be enabled by a simple bit of configuration. Here's a Linux desktop running in a tab in Chrome. And if you resize the browser window, the Linux desktop is resized by SGD too. Very cool! Oracle Unified Directory As I mentioned above, a lot of Oracle users already benefit from SGD. And a lot of Oracle customers use Oracle Unified Directory as their Enterprise and Carrier grade user directory.
So it makes a lot of sense that SGD now supports this LDAP directory for both Authentication and as a means to determine which users get which applications, e.g. publish the engineering app to the guys in the Development group, but give everyone E-Business Suite to let them do their expenses. Summary With new devices, and faster 4G networking becoming more prevalent, the pressure for businesses to move to an increasingly mobile enterprise is stronger than ever. SGD is good for users, and even better for IT. By offering the user the ability to work from anywhere, and IT the control and security they need, everyone wins with SGD. To try this for yourself, download SGD 5.1 (look under Desktop Virtualization Products) from the Oracle Software Delivery Cloud or if you're an existing customer, get it from My Oracle Support.  -FB

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx The first way uses 2 ado.net 2.0 features: BulkInsert and BulkCopy. The second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However, as one person pointed out in the posts, it's just going around the issue at the cost of performance (nothing wrong with that in my opinion). First I will talk about the 2 ways in the first tutorial. I am using VS2010 Express, .net 4.0, MVC 2.0, SQL Server 2005. Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there anything that these tutorials left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So the first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a batch size of 500,000. Next, why does it crash when I do this? If I set it to 1000 for the batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server.
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: The time it took to insert 500,000 records with an insert batch size of 1000 was "2 mins and 54 seconds". Of course this is no official time; I sat there with a stop watch (I am sure there are better ways but was too lazy to look up what they were). So I find that kinda slow compared to all my other ones (except the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?) BatchCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? I found that with BatchCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster than the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects.
/// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object has more tables in it. Like say I have a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds. The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting so that will slow it down, and that sort of makes linq to sql nicer as you've got objects, especially if you're first parsing data from an xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert.
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }

    Read the article

  • nhibernate mapping: A collection with cascade="all-delete-orphan" was no longer referenced

    - by Chev
    Hi All I am having some probs with my fluent mappings. I have an entity with a child collection of entities i.e Event and EventItems for example. If I set my cascade mapping of the collection to AllDeleteOrphan I get the following error when saving a new entity to the DB: NHibernate.HibernateException : A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance: Core.Event.EventItems If I set the cascade to All it works fine? Below are my classes and mapping files: public class EventMap : ClassMap<Event> { public EventMap() { Id(x => x.Id, "Id") .UnsavedValue("00000000-0000-0000-0000-000000000000") .GeneratedBy.GuidComb(); Map(x => x.Name); HasMany(x => x.EventItems) .Inverse() .KeyColumn("EventId") .AsBag() .Cascade.AllDeleteOrphan(); } } public class EventItemMap : SubclassMap<EventItem> { public EventItemMap() { Id(x => x.Id, "Id") .UnsavedValue("00000000-0000-0000-0000-000000000000") .GeneratedBy.GuidComb(); References(x => x.Event, "EventId"); } } public class Event : EntityBase { private IList<EventItem> _EventItems; protected Event() { InitMembers(); } public Event(string name) : this() { Name = name; } private void InitMembers() { _EventItems = new List<EventItem>(); } public virtual EventItem CreateEventItem(string name) { EventItem eventItem = new EventItem(this, name); _EventItems.Add(eventItem); return eventItem; } public virtual string Name { get; private set; } public virtual IList<EventItem> EventItems { get { return _EventItems.ToList<EventItem>().AsReadOnly(); } protected set { _EventItems = value; } } } public class EventItem : EntityBase { protected EventItem() { } public EventItem(Event @event, string name):base(name) { Event = @event; } public virtual Event Event { get; private set; } } Pretty stumped here. Any tips greatly appreciated. Chev
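    One direction that is often suggested for this particular error is to map the collection against the _EventItems backing field, so that the read-only copy returned by the EventItems getter never replaces the persistent collection instance NHibernate is tracking. A minimal sketch, assuming Fluent NHibernate's field-access API (Access.CamelCaseField with an underscore prefix) behaves as its documentation describes:
    public class EventMap : ClassMap<Event>
    {
        public EventMap()
        {
            Id(x => x.Id, "Id")
                .UnsavedValue("00000000-0000-0000-0000-000000000000")
                .GeneratedBy.GuidComb();
            Map(x => x.Name);
            HasMany(x => x.EventItems)
                // Assumption: this tells NHibernate to read/write the _EventItems field
                // directly, bypassing the property that returns a new read-only list.
                .Access.CamelCaseField(Prefix.Underscore)
                .Inverse()
                .KeyColumn("EventId")
                .AsBag()
                .Cascade.AllDeleteOrphan();
        }
    }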

    Read the article

  • Facebook Oauth Logout

    - by Derek Troy-West
    I have an application that integrates with Facebook using Oauth 2. I can authorize with FB and query their REST and Graph APIs perfectly well, but when I authorize, an active browser session is created with FB. I can then log out of my application just fine, but the session with FB persists, so if anyone else uses the browser they will see the previous user's FB account (unless the previous user manually logs out of FB also). The steps I take to authorize are: Call [LINK: graph.facebook.com/oauth/authorize?client_id...] This step opens a Facebook login/connect window if the user's browser doesn't already have an active FB session. Once they log in to Facebook they are redirected to my site with a code I can exchange for an oauth token. Call [LINK: graph.facebook.com/oauth/access_token?client_id..] with the code from (1) Now I have an Oauth Token, and the user's browser is logged into my site, and into FB. I call a bunch of APIs to do stuff: i.e. [LINK: graph.facebook.com/me?access_token=..] Let's say my user wants to log out of my site. The FB terms and conditions demand that I perform Single Sign Off, so when the user logs out of my site, they are also logged out of Facebook. There are arguments that this is a bit daft, but I'm happy to comply if there is any way of actually achieving that. I have seen suggestions that: A. I use the Javascript API to logout: FB.Connect.logout(). Well I tried using that, but it didn't work, and I'm not sure exactly how it could, as I don't use the Javascript API in any way on my site. The session isn't maintained or created by the Javascript API so I'm not sure how it's supposed to expire it either. B. Use [LINK: facebook.com/logout.php]. This was suggested by an admin in the Facebook forums some time ago. The example given related to the old way of getting FB sessions (non-oauth) so I don't think I can apply it in my case. C. Use the old REST api expireSession or revokeAuthorization. I tried both of these and while they do expire the Oauth token they don't invalidate the session that the browser is currently using, so it has no effect; the user is not logged out of Facebook. I'm really at a bit of a loose end; the Facebook documentation is patchy, ambiguous and pretty poor. The support on the forums is non-existent; at the moment I can't even log in to the Facebook forum, and aside from that, their own FB Connect integration doesn't even work on the forum itself. Doesn't inspire much confidence. Ta for any help you can offer. Derek ps. Had to change HTTPS to LINK, not enough karma to post links which is probably fair enough.
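    On option B above: one approach that has been reported to work with OAuth-created sessions is redirecting the browser to facebook.com/logout.php with a return URL and the OAuth access token. Whether the endpoint accepts these exact parameters is an assumption to verify against the current Facebook documentation, so treat the following ASP.NET snippet as a sketch only:
    // Assumption: logout.php accepts "next" and "access_token" query parameters for
    // OAuth-created sessions; verify against Facebook's current docs before relying on it.
    string next = HttpUtility.UrlEncode("http://www.example.com/loggedout");   // hypothetical return URL
    string logoutUrl = "https://www.facebook.com/logout.php?next=" + next +
                       "&access_token=" + HttpUtility.UrlEncode(oauthToken);   // token obtained in step (2)
    Response.Redirect(logoutUrl);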

    Read the article

  • .NET Web Service (asmx) Timeout Problem

    - by Barry Fandango
    I'm connecting to a vendor-supplied ASMX web service and sending a set of data over the wire. My first attempt hit the 1 minute timeout that Visual Studio throws in by default in the app.config file when you add a service reference to a project. I increased it to 10 minutes, another timeout. 1 hour, another timeout: Error: System.TimeoutException: The request channel timed out while waiting for a reply after 00:59:59.6874880. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The HTTP request to 'http://servername/servicename.asmx' has exceeded the allotted timeout of 01:00:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.WebException: The operation has timed out at System.Net.HttpWebRequest.GetResponse() [... lengthy stack trace follows] I contacted the vendor. They confirmed the call may take over an hour (don't ask, they are the bane of my existence.) I increased the timeout to 10 hours to be on the safe side. However the web service call continues to time out at 1 hour. The relevant app.config section now looks like this: <basicHttpBinding> <binding name="BindingName" closeTimeout="10:00:00" openTimeout="10:00:00" receiveTimeout="10:00:00" sendTimeout="10:00:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="524288" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2147483647" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> Pretty absurd, but regardless the timeout is still kicking in at 1 hour. Unfortunately every change takes at least an additional hour to test. Is there some internal limit that I'm bumping into - another timeout setting to be changed somewhere? All changes to these settings up to one hour had the expected effect. Thanks for any help you can provide!
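    One variable worth eliminating is whether the binding configured in app.config is actually the one the channel uses at runtime; setting the timeouts programmatically on the client proxy takes the config file out of the equation. A minimal sketch, where "ServiceClient" and "LongRunningCall" are placeholders for whatever names the generated service reference actually uses:
    // "ServiceClient" / "LongRunningCall" are placeholders for the generated proxy and operation.
    var client = new ServiceClient();
    client.Endpoint.Binding.SendTimeout = TimeSpan.FromHours(10);
    client.Endpoint.Binding.ReceiveTimeout = TimeSpan.FromHours(10);
    // OperationTimeout on the channel is a separate setting from the binding timeouts,
    // so it is worth setting explicitly as well in case it is what fires at the 1 hour mark.
    client.InnerChannel.OperationTimeout = TimeSpan.FromHours(10);
    var result = client.LongRunningCall();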

    Read the article

  • Entity Framework SaveChanges error details

    - by Marek Karbarz
    When saving changes with SaveChanges on a data context is there a way to determine which Entity causes an error? For example, sometimes I'll forget to assign a date to a non-nullable date field and get "Invalid Date Range" error, but I get no information about which entity or which field it's caused by (I can usually track it down by painstakingly going through all my objects, but it's very time consuming). Stack trace is pretty useless as it only shows me an error at the SaveChanges call without any additional information as to where exactly it happened. Note that I'm not looking to solve any particular problem I have now, I would just like to know in general if there's a way to tell which entity/field is causing a problem. Quick sample of a stack trace as an example - in this case an error happened because CreatedOn date was not set on IAComment entity, however it's impossible to tell from this error/stack trace [SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.] System.Data.SqlTypes.SqlDateTime.FromTimeSpan(TimeSpan value) +2127345 System.Data.SqlTypes.SqlDateTime.FromDateTime(DateTime value) +232 System.Data.SqlClient.MetaType.FromDateTime(DateTime dateTime, Byte cb) +46 System.Data.SqlClient.TdsParser.WriteValue(Object value, MetaType type, Byte scale, Int32 actualLength, Int32 encodingByteSize, Int32 offset, TdsParserStateObject stateObj) +4997789 System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc) +6248 System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +987 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32 System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141 System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12 System.Data.Common.DbCommand.ExecuteReader(CommandBehavior behavior) +10 System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues) +8084396 System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +267 [UpdateException: An error occurred while updating the entries. See the inner exception for details.] 
System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +389 System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache) +163 System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) +609 IADAL.IAController.Save(IAHeader head) in C:\Projects\IA\IADAL\IAController.cs:61 IA.IAForm.saveForm(Boolean validate) in C:\Projects\IA\IA\IAForm.aspx.cs:198 IA.IAForm.advance_Click(Object sender, EventArgs e) in C:\Projects\IA\IA\IAForm.aspx.cs:287 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +118 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +112 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +5019
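In the ObjectContext-based Entity Framework shown in this stack trace, the UpdateException that wraps the SqlTypeException exposes a StateEntries collection, which at least narrows the failure down to the entities being flushed (though not to the individual property). A minimal sketch of wrapping the save, assuming "context" is the ObjectContext being saved:
    try
    {
        context.SaveChanges();
    }
    catch (UpdateException ex)   // System.Data.UpdateException
    {
        // Each state entry points at an entity (or relationship) involved in the failed update.
        foreach (ObjectStateEntry entry in ex.StateEntries)   // System.Data.Objects
        {
            if (!entry.IsRelationship)
                Console.WriteLine("Failing entity: " + entry.Entity.GetType().Name);
        }
        throw;
    }
For this particular SqlDateTime overflow, scanning the offending entity's DateTime properties for values below SqlDateTime.MinValue would then point at the unset field.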

    Read the article

  • GLFW 3 initialized, yet not?

    - by mSkull
    I'm struggling with creating a window with the GLFW 3 function, glfwCreateWindow. I have set an error callback function, that pretty much just prints out the error number and description, and according to that the GLFW library has not been initialized, even though the glfwInit function just returned success? Here's an excerpt from my code // Error callback function prints out any errors from GLFW to the console static void error_callback( int error, const char *description ) { cout << error << '\t' << description << endl; } bool Base::Init() { // Set error callback /*! * According to the documentation this can be used before glfwInit, * and removing it won't change anything anyway */ glfwSetErrorCallback( error_callback ); // Initialize GLFW /*! * This returns successfully, but... */ if( !glfwInit() ) { cout << "INITIALIZER: Failed to initialize GLFW!" << endl; return false; } else { cout << "INITIALIZER: GLFW Initialized successfully!" << endl; } // Create window /*! * When this is called, or any other glfw functions, I get a * "65537 The GLFW library is not initialized" in the console, through * the error_callback function */ window = glfwCreateWindow( 800, 600, "GLFW Window", NULL, NULL ); if( !window ) { cout << "INITIALIZER: Failed to create window!" << endl; glfwTerminate(); return false; } // Set window to current context glfwMakeContextCurrent( window ); ... return true; } And here's what's printed out in the console INITIALIZER: GLFW Initialized succesfully! 65537 The GLFW library is not initialized INITIALIZER: Failed to create window! I think I'm getting the error because the setup isn't entirely correct, but I've done the best I can with what I could find around the place. I downloaded the Windows 32-bit package from glfw.org and stuck the 2 include files from it into minGW/include/GLFW, the 2 .a files (from the lib-mingw folder) into minGW/lib and the dll, also from the lib-mingw folder, into Windows/System32 In code::blocks I have, from build options - linker settings, linked the 2 .a files from the download. I believe I need to link more things, but I can't figure out what, or where I should get those things from.

    Read the article

  • Subclassed django models with integrated querysets

    - by outofculture
    Like in this question, except I want to be able to have querysets that return a mixed body of objects: >>> Product.objects.all() [<SimpleProduct: ...>, <OtherProduct: ...>, <BlueProduct: ...>, ...] I figured out that I can't just set Product.Meta.abstract to true or otherwise just OR together querysets of differing objects. Fine, but these are all subclasses of a common class, so if I leave their superclass as non-abstract I should be happy, so long as I can get its manager to return objects of the proper class. The query code in django does its thing, and just makes calls to Product(). Sounds easy enough, except it blows up when I override Product.__new__, I'm guessing because of the __metaclass__ in Model... Here's non-django code that behaves pretty much how I want it: class Top(object): _counter = 0 def __init__(self, arg): Top._counter += 1 print "Top#__init__(%s) called %d times" % (arg, Top._counter) class A(Top): def __new__(cls, *args, **kwargs): if cls is A and len(args) > 0: if args[0] is B.fav: return B(*args, **kwargs) elif args[0] is C.fav: return C(*args, **kwargs) else: print "PRETENDING TO BE ABSTRACT" return None # or raise? else: return super(A).__new__(cls, *args, **kwargs) class B(A): fav = 1 class C(A): fav = 2 A(0) # => None A(1) # => <B object> A(2) # => <C object> But that fails if I inherit from django.db.models.Model instead of object: File "/home/martin/beehive/apps/hello_world/models.py", line 50, in <module> A(0) TypeError: unbound method __new__() must be called with A instance as first argument (got ModelBase instance instead) Which is a notably crappy backtrace; I can't step into the frame of my __new__ code in the debugger, either. I have variously tried super(A, cls), Top, super(A, A), and all of the above in combination with passing cls in as the first argument to __new__, all to no avail. Why is this kicking me so hard? Do I have to figure out django's metaclasses to be able to fix this or is there a better way to accomplish my ends?

    Read the article

  • ASP.NET GridView second header row to span main header row

    - by Dana Robinson
    I have an ASP.NET GridView which has columns that look like this: | Foo | Bar | Total1 | Total2 | Total3 | Is it possible to create a header on two rows that looks like this? | | Totals | | Foo | Bar | 1 | 2 | 3 | The data in each row will remain unchanged as this is just to pretty up the header and decrease the horizontal space that the grid takes up. The entire GridView is sortable in case that matters. I don't intend for the added "Totals" spanning column to have any sort functionality. Edit: Based on one of the articles given below, I created a class which inherits from GridView and adds the second header row in. namespace CustomControls { public class TwoHeadedGridView : GridView { protected Table InnerTable { get { if (this.HasControls()) { return (Table)this.Controls[0]; } return null; } } protected override void OnDataBound(EventArgs e) { base.OnDataBound(e); this.CreateSecondHeader(); } private void CreateSecondHeader() { GridViewRow row = new GridViewRow(0, -1, DataControlRowType.Header, DataControlRowState.Normal); TableCell left = new TableHeaderCell(); left.ColumnSpan = 3; row.Cells.Add(left); TableCell totals = new TableHeaderCell(); totals.ColumnSpan = this.Columns.Count - 3; totals.Text = "Totals"; row.Cells.Add(totals); this.InnerTable.Rows.AddAt(0, row); } } } In case you are new to ASP.NET like I am, I should also point out that you need to: 1) Register your class by adding a line like this to your web form: <%@ Register TagPrefix="foo" NameSpace="CustomControls" Assembly="__code"%> 2) Change asp:GridView in your previous markup to foo:TwoHeadedGridView. Don't forget the closing tag. Another edit: You can also do this without creating a custom class. Simply add an event handler for the DataBound event of your grid like this: protected void gvOrganisms_DataBound(object sender, EventArgs e) { GridView grid = sender as GridView; if (grid != null) { GridViewRow row = new GridViewRow(0, -1, DataControlRowType.Header, DataControlRowState.Normal); TableCell left = new TableHeaderCell(); left.ColumnSpan = 3; row.Cells.Add(left); TableCell totals = new TableHeaderCell(); totals.ColumnSpan = grid.Columns.Count - 3; totals.Text = "Totals"; row.Cells.Add(totals); Table t = grid.Controls[0] as Table; if (t != null) { t.Rows.AddAt(0, row); } } } The advantage of the custom control is that you can see the extra header row on the design view of your web form. The event handler method is a bit simpler, though.
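    If you would rather leave the markup alone entirely, the same handler can also be attached from code-behind - a minimal sketch, assuming the grid is the gvOrganisms control used above and the handler is defined in the same page class:

        protected void Page_Init(object sender, EventArgs e)
        {
            // Attach the handler in code instead of using OnDataBound in the markup
            gvOrganisms.DataBound += new EventHandler(gvOrganisms_DataBound);
        }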

    Read the article

  • Integrating POP3 client functionality into a C# application?

    - by flesh
    I have a web application that requires a server-based component to periodically access POP3 email boxes and retrieve emails. The service then needs to process the emails, which will involve: Validating the email against some business rules (does it contain a valid reference in the subject line, which user sent the mail, etc.) Analysing and saving any attachments to disk Take the email body and attachment details and create a new item in the database, or update an existing item where the reference matches the incoming email subject line What is the best way to approach this? I really don't want to have to write a POP3 client from scratch, but I need to be able to customize the processing of emails. Ideally I would be able to plug in some component that does the access and retrieval for me, returning arrays of attachments, body text, subject line, etc. ready for my processing... [ UPDATE: Reviews ] OK, so I have spent a fair amount of time looking into (mainly free) .NET POP3 libraries so I thought I'd provide a short review of some of those mentioned below and a few others: Pop3.net - free - works OK, very basic in terms of functionality provided. This is pretty much just the POP3 commands and some base64 encoding, but it's very straightforward - probably a good introduction Pop3 Wizard - commercial / some open source code - couldn't get this to build, missing DLLs, I wouldn't bother with this C#Mail - free - works well, comes with Mime parser and SMTP client, however the comments are in Japanese (not a big deal) and it didn't work with SSL 'out of the box' - I had to change the SslStream constructor after which it worked no problem OpenPOP - free - hasn't been updated for about 5 years so its current state is .NET 1.0, doesn't support SSL but that was no problem to resolve - I just replaced the existing stream with an SslStream and it worked. Comes with Mime parser. Of the free libraries, I'd go for C#Mail or OpenPOP. I looked at a few commercial libraries: Chillkat, Rebex, RemObjects, JMail.net. Based on features, price and impression of the company I would probably go for Rebex and may in the future if my requirements change or I run into production issues with either of C#Mail or OpenPOP. In case anyone needs it, this is the replacement SslStream constructor that I used to enable SSL with C#Mail and OpenPOP: SslStream stream = new SslStream(clientSocket.GetStream(), false, delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) { return true; });
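    For anyone curious how little the protocol itself involves, here is a rough sketch of talking POP3 over SSL by hand in C# - hypothetical host and credentials, no MIME parsing, no dot-unstuffing and minimal error handling, so treat it as an illustration of what the libraries above wrap rather than production code:

        using System;
        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;

        class Pop3Sketch
        {
            static void Main()
            {
                // Hypothetical server and credentials - replace with real values
                using (TcpClient client = new TcpClient("pop.example.com", 995))
                using (SslStream ssl = new SslStream(client.GetStream(), false,
                       (sender, cert, chain, errors) => true)) // accept any certificate, as in the snippet above
                {
                    ssl.AuthenticateAsClient("pop.example.com");
                    StreamReader reader = new StreamReader(ssl);
                    StreamWriter writer = new StreamWriter(ssl) { AutoFlush = true, NewLine = "\r\n" };

                    Console.WriteLine(reader.ReadLine());                      // +OK greeting
                    writer.WriteLine("USER someone"); Console.WriteLine(reader.ReadLine());
                    writer.WriteLine("PASS secret");  Console.WriteLine(reader.ReadLine());
                    writer.WriteLine("STAT");         Console.WriteLine(reader.ReadLine()); // +OK <count> <size>

                    writer.WriteLine("RETR 1");                                // headers + body of message 1
                    string line;
                    while ((line = reader.ReadLine()) != null && line != ".")  // a lone "." terminates the response
                        Console.WriteLine(line);

                    writer.WriteLine("QUIT");
                }
            }
        }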

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower-end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc) the CPU usage still seems very high. Here are some numbers I ran from a few 5-minute sampling periods: ~60FPS 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [My dev laptop] ~40FPS 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [A crappy desktop] I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details but I'm having issues there as well, so I'm wondering if I'm doing something wrong (never profiled WPF before). WPF Performance Suite: Process Launch Error Unable to attach to process: CCHearts.exe Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach. Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application. Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete. So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. Have been relatively successful throwing darts at this point but I'm beyond that now :) PS: Screensaver is hosted at CodePlex if you want to look at the source and missed the link above. Edit: My RenderOptions darts... // NOTE: Grasping at straws here ;-) RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality); RenderOptions.SetCachingHint(newHeart, CachingHint.Cache); RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased); I threw those a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). Really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.
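    For what it's worth, the reuse idea above would look roughly like the sketch below - a generic pool that hands back finished objects instead of constructing new ones. The reset delegate (clearing animation state, position, opacity) and a HeartVisual.Reset() method are assumptions on my part, not code that exists in the project:

        using System;
        using System.Collections.Generic;

        // Rough sketch of pooling: keep completed visuals in a queue and reuse them.
        class ObjectPool<T> where T : new()
        {
            private readonly Queue<T> _idle = new Queue<T>();
            private readonly Action<T> _reset;

            public ObjectPool(Action<T> reset) { _reset = reset; }

            public T Take()
            {
                // Reuse an idle instance if one is available, otherwise create a new one
                return _idle.Count > 0 ? _idle.Dequeue() : new T();
            }

            public void Return(T item)
            {
                _reset(item);       // e.g. clear animation state, position, opacity
                _idle.Enqueue(item);
            }
        }

    Usage would be along the lines of new ObjectPool<HeartVisual>(h => h.Reset()), taking a heart when one starts falling and returning it once its animation completes.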

    Read the article

  • Return Json causes save file dialog in asp.net mvc

    - by Eran
    Hi, I'm integrating jquery fullcalendar into my application. Here is the code I'm using: in index.aspx: <script type="text/javascript"> $(document).ready(function() { $('#calendar').fullCalendar({ events: "/Scheduler/CalendarData" }); }); </script> <div id="calendar"> </div> Here is the code for Scheduler/CalendarData: public ActionResult CalendarData() { IList<CalendarDTO> tasksList = new List<CalendarDTO>(); tasksList.Add(new CalendarDTO { id = 1, title = "Google search", start = ToUnixTimespan(DateTime.Now), end = ToUnixTimespan(DateTime.Now.AddHours(4)), url = "www.google.com" }); tasksList.Add(new CalendarDTO { id = 1, title = "Bing search", start = ToUnixTimespan(DateTime.Now.AddDays(1)), end = ToUnixTimespan(DateTime.Now.AddDays(1).AddHours(4)), url = "www.bing.com" }); return Json(tasksList,JsonRequestBehavior.AllowGet); } private long ToUnixTimespan(DateTime date) { TimeSpan tspan = date.ToUniversalTime().Subtract( new DateTime(1970, 1, 1, 0, 0, 0)); return (long)Math.Truncate(tspan.TotalSeconds); } public ActionResult Index() { return View("Index"); } I also have the following code inside the head tag in site.master: <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server" /> <link href="<%= Url.Content("~/Content/jquery-ui-1.7.2.custom.css") %>" rel="stylesheet" type="text/css" /> <link href="~Perspectiva/Content/Site.css" rel="stylesheet" type="text/css" /> <link href="~Perspectiva/Content/fullcalendar.css" rel="stylesheet" type="text/css" /> <script src="~Perspectiva/Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="~Perspectiva/Scripts/fullcalendar.js" type="text/javascript"></script> <script src="/Scripts/MicrosoftAjax.debug.js" type="text/javascript"></script> <script src="/Scripts/MicrosoftMvcAjax.debug.js" type="text/javascript"></script> Everything I did was pretty much copied from http://szahariev.blogspot.com/2009/08/jquery-fullcalendar-and-aspnet-mvc.html When navigating to /scheduler/calendardata I get a prompt for saving the JSON data, whose contents are exactly what I created in the CalendarData function. What do I need to do in order to render the page correctly? Thanks in advance, Eran
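    One workaround that is sometimes suggested for the save prompt (I'm not certain it is the right fix) is to return the JSON with a text/html content type, so a browser hitting the URL directly renders it inline instead of offering a download; fullCalendar itself fetches the URL via XMLHttpRequest, where the default application/json content type is fine. A sketch against the same action, where BuildTasksList() is a hypothetical helper wrapping the list-building code above:

        public ActionResult CalendarData()
        {
            // BuildTasksList() is a hypothetical helper containing the tasksList code shown above
            IList<CalendarDTO> tasksList = BuildTasksList();

            // Explicit content type: browsers render text/html inline instead of prompting to save
            return Json(tasksList, "text/html", JsonRequestBehavior.AllowGet);
        }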

    Read the article

  • Qt vs .NET - plz no n00bs who don't know wtf they're talking about [closed]

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions 90% of these people don't know WTF they're talking about. Trying to get a real comparison chart going before we embark on a major fucking project. And yes I'm drunk, and yes I use cocaine. Event Handling In Qt's event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){} blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down, IMO. Basically, the footprints make more sense and you can visualize the project easier without the clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++ and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++. Database Ease of Doing Crap Also, what about datasets and database manipulations? I think .NET wins here but I'm not sure. Threading/Concurrency What do you guys think of the threading? In .NET, all I've ever done is make like a list of master worker threads with locks. I like QConcurrentFramework, you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. Memory Usage Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.
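    For reference, the .NET side of the event-handling comparison is the usual sender/EventArgs pattern. A bare-bones sketch (all names made up for illustration), with the rough Qt equivalent noted in comments:

        using System;

        // Custom EventArgs carrying the same data as the Qt signal valueChanged(int, bool)
        class ValueChangedEventArgs : EventArgs
        {
            public int Percent { get; set; }
            public bool Ok { get; set; }
        }

        class Worker
        {
            public event EventHandler<ValueChangedEventArgs> ValueChanged;

            public void DoWork()
            {
                // Roughly the equivalent of "emit valueChanged(50, true)"
                var handler = ValueChanged;
                if (handler != null)
                    handler(this, new ValueChangedEventArgs { Percent = 50, Ok = true });
            }
        }

        class Program
        {
            static void Main()
            {
                var w = new Worker();
                // The "slot": any method or lambda with the (object sender, EventArgs e) shape
                w.ValueChanged += (sender, e) => Console.WriteLine("{0}% ({1})", e.Percent, e.Ok);
                w.DoWork();
            }
        }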

    Read the article

  • Compare NSArray with NSMutableArray adding delta objects to NSMutableArray

    - by Hooligancat
    I have an NSMutableArray that is populated with objects of strings. For simplicity's sake we'll say that the objects are a person and each person object contains information about that person. Thus I would have an NSMutableArray that is populated with person objects: person.firstName person.lastName person.age person.height And so on. The initial source of data comes from a web server and is populated when my application loads and completes its initialization with the server. Periodically my application polls the server for the latest list of names. Currently I am creating an NSArray of the result set, emptying the NSMutableArray and then re-populating the NSMutableArray with NSArray results before destroying the NSArray object. This seems inefficient to me on a few levels and also presents me with a problem losing table row references, which I can work around, but might be creating more work for myself in doing so. The inefficiency seems to be that I should be able to compare the two arrays and end up with a filtered NSArray. I could then add the filtered set to the NSMutableArray. This would mean that I can simply append new data to the NSMutableArray instead of throwing everything out and re-populating. Conversely I would need to do the same filter in reverse to see if there are records that need removing from the NSMutableArray. Is there any method to do this in a more efficient manner? Have I overlooked something in the docs someplace that refers to a simpler technique? I have a problem when I empty the NSMutableArray and re-populate in that any referencing tables lose their selected row state. I can track it and re-select it, but my theory is that using some form of compare and adding objects and removing objects instead of dealing with the whole array in one block might mean I keep my row reference (assuming the item isn't deleted of course). Any suggestions or help much appreciated. Update: Would it be just as fast to do a fast enumeration over each, comparing each line item as I go? It seems like an expensive operation, but with the last fast enumeration code it might be pretty efficient...

    Read the article

  • iPhone MapKit problems: viewForAnnotation inconsistently setting pinColor?

    - by blackkettle
    Hi, I'm trying to set up a map that displays different pin colors depending on the type/class of the location in question. I know this is a pretty common thing to do, but I'm having trouble getting the viewForAnnotation delegate to consistently update/set the pin color. I have a showThisLocation function that basically cycles through a list of AddressAnnotations and then, based on the annotation class (bus stop, hospital, etc.), I set the pin color: if( myClass == 1){ [defaults setObject:@"1" forKey:@"currPinColor"]; [defaults synchronize]; NSLog(@"Should be %@!", [defaults objectForKey:@"currPinColor"]); } else if( myClass ==2 ){ [defaults setObject:@"2" forKey:@"currPinColor"]; [defaults synchronize]; NSLog(@"Should be %@!", [defaults objectForKey:@"currPinColor"]); } [_mapView addAnnotation:myCurrentAnnotation]; then my viewForAnnotation delegate looks like this, - (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation { if( annotation == mapView.userLocation ){ return nil; } NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults]; MKPinAnnotationView *annView = nil; annView = (MKPinAnnotationView*)[mapView dequeueReusableAnnotationViewWithIdentifier:@"currentloc"]; if( annView == nil ){ annView = [[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"currentloc"]; } annView.pinColor = [defaults integerForKey:@"currPinColor"]; NSLog(@"Pin color: %d", [defaults integerForKey:@"currPinColor"]); annView.animatesDrop=TRUE; annView.canShowCallout = YES; annView.calloutOffset = CGPointMake(-5, 5); return annView; } The problem is that, although the NSLog statements in the "if" block always confirm that the color has been set, the delegate sometimes but not always ends up with the correct color. I've also noticed that what generally happens is that the first search for a new location will set all pins to the last color in the "if" block, but searching for the same location again will set the pins to the correct color. I suspect I am not supposed to use NSUserDefaults in this way, but I also tried to create my own subclass for MKAnnotation which included an additional property "currentPinColor", and while this allowed me to set the "currentPinColor", when I tried to access the "currentPinColor" from the delegate method, the compiler complained that it didn't know anything about "currentPinColor" in connection with MKAnnotation. Fair enough I guess, but then I tried to revise the delegate method, - (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MyCustomMKAnnotation>)annotation instead of the default - (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation at which point the compiler complained that it didn't know anything about the protocol for MyCustomMKAnnotation in this delegate context. What is the proper way to set the delegate method and/or MyCustomMKAnnotation, or what is the appropriate way to achieve consistent pinColor settings? I'm just about out of ideas for things to try here.

    Read the article

  • No endpoint mapping found for..., using SpringWS, JaxB Marshaller

    - by Saky
    I get this error: No endpoint mapping found for [SaajSoapMessage {http://mycompany/coolservice/specs}ChangePerson] Following is my ws config file: <bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootAnnotationMethodEndpointMapping"> <description>An endpoint mapping strategy that looks for @Endpoint and @PayloadRoot annotations.</description> </bean> <bean class="org.springframework.ws.server.endpoint.adapter.MarshallingMethodEndpointAdapter"> <description>Enables the MessageDispatchServlet to invoke methods requiring OXM marshalling.</description> <constructor-arg ref="marshaller"/> </bean> <bean id="marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller"> <property name="contextPaths"> <list> <value>org.company.xml.persons</value> <value>org.company.xml.person_allextensions</value> <value>generated</value> </list> </property> </bean> <bean id="persons" class="com.easy95.springws.wsdl.wsdl11.MultiPrefixWSDL11Definition"> <property name="schemaCollection" ref="schemaCollection"/> <property name="portTypeName" value="persons"/> <property name="locationUri" value="/ws/personnelService/"/> <property name="targetNamespace" value="http://mycompany/coolservice/specs/definitions"/> </bean> <bean id="schemaCollection" class="org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection"> <property name="xsds"> <list> <value>/DataContract/Person-AllExtensions.xsd</value> <value>/DataContract/Person.xsd</value> </list> </property> <property name="inline" value="true"/> </bean> I have then the following files: public interface MarshallingPersonService { public final static String NAMESPACE = "http://mycompany/coolservice/specs"; public final static String CHANGE_PERSON = "ChangePerson"; public RespondPersonType changeEquipment(ChangePersonType request); } and @Endpoint public class PersonEndPoint implements MarshallingPersonService { @PayloadRoot(localPart=CHANGE_PERSON, namespace=NAMESPACE) public RespondPersonType changePerson(ChangePersonType request) { System.out.println("Received a request, is request null? " + (request == null ? "yes" : "no")); return null; } } I am pretty much new to WebServices, and not very comfortable with annotations. I am following a tutorial on setting up jaxb marshaller in springws. I would rather use xml mappings than annotations, although for now I am getting the error message.

    Read the article

  • WPF Converter and NotifyOnTargetUpdated exclusive in a binding?

    - by Mathieu Garstecki
    Hi, I have a problem with a databinding in WPF. When I try to use a value converter and set the NotifyOnTargetUpdated property to True, I get an XamlParseException with the following message: 'System.Windows.Data.BindingExpression' value cannot be assigned to property 'Contenu' of object 'View.UserControls.ShadowedText'. Value cannot be null. Parameter name: textToFormat Error at object 'System.Windows.Data.Binding' in markup file 'View.UserControls;component/saletotal.xaml' Line 363 Position 95. The binding is pretty standard: <my:ShadowedText Contenu="{Binding Path=Total, Converter={StaticResource CurrencyToStringConverter}, NotifyOnTargetUpdated=True}" TargetUpdated="MontantTotal_TargetUpdated"> </my:ShadowedText> (Styling properties removed for conciseness) The converter exists in the resources and works correctly when NotifyOnTargetUpdated=True is removed. Similarly, the TargetUpdated event is called and implemented correctly, and works when the converter is removed. Note: This binding is defined in a ControlTemplate, though I don't think that is relevant to the problem. Can anybody explain to me what is happening? Am I defining the binding wrong? Are those features mutually exclusive (and in this case, can you explain why it is so)? Thanks in advance. More info: Here is the content of the TargetUpdated handler: private void MontantTotal_TargetUpdated(object sender, DataTransferEventArgs e) { ShadowedText textBlock = (ShadowedText)e.TargetObject; double textSize = textBlock.Taille; double delta = 5; double defaultTaille = 56; double maxWidth = textBlock.MaxWidth; while (true) { FormattedText newFormat = new FormattedText(textBlock.Contenu, CultureInfo.CurrentCulture, FlowDirection.LeftToRight, new Typeface("Calibri"), textSize, (SolidColorBrush) Resources["RougeVif"]); if (newFormat.Width < textBlock.MaxWidth && textSize <= defaultTaille) { if ((Math.Round(newFormat.Width) + delta) >= maxWidth || textSize == defaultTaille) { break; } textSize++; } else { if ((Math.Round(newFormat.Width) - delta) <= maxWidth && textSize <= defaultTaille) { break; } textSize--; } } textBlock.Taille = textSize; } The role of the handler is to resize the control based on the length of the content. It is quite ugly but I want to have the functional part working before refactoring.

    Read the article

  • UITableView : crash when adding a section footer view in empty section

    - by Supernico
    Hi everyone, This is the first time I've asked a question here, but I have to say this site has been a tremendous help for me over the last couple of months (iphone-dev-wise), and I thank you for that. However, I didn't find any solution for this problem I'm having: I have a UITableView with 2 sections, and no rows when the app is launched for the first time. The user can fill the sections later on as he wishes (the content is not relevant here). The UITableView looks good when filled with rows, but looks pretty ugly when there are none (the 2 section headers are stuck together with no white space in between). That is why I'd like to add a nice "No row" view in between when there is no row. I used viewForFooterInSection: - (CGFloat)tableView:(UITableView *)tableView heightForFooterInSection:(NSInteger)section { if(section == 0) { if([firstSectionArray count] == 0) return 44; else return 0; } return 0; } - (UIView *)tableView:(UITableView *)tableView viewForFooterInSection:(NSInteger)section{ if(section == 0) { UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(200, 10, 50, 44)]; label.backgroundColor = [UIColor clearColor]; label.textColor = [UIColor colorWithWhite:0.6 alpha:1.0]; label.textAlignment = UITextAlignmentCenter; label.lineBreakMode = UILineBreakModeWordWrap; label.numberOfLines = 0; label.text = @"No row"; return [label autorelease]; } return nil; } - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 2; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if(section == 0) { return [firstSectionArray count]; } return [secondSectionArray count]; } This works great: the footer view appears only when there is no row in section 0. But my app crashes when I enter edit mode and delete the last row in section 0: Assertion failure in -[UIViewAnimation initWithView:indexPath:endRect:endAlpha:startFraction:endFraction:curve:animateFromCurrentPosition:shouldDeleteAfterAnimation:editing:] Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Cell animation stop fraction must be greater than start fraction' This does not happen when there are several rows in section 0. It only happens when there is only one row left. Here's the code for edit mode: - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { // If row is deleted, remove it from the list. if (editingStyle == UITableViewCellEditingStyleDelete) { // Find the book at the deleted row, and remove from application delegate's array. if(indexPath.section == 0) { [firstSectionArray removeObjectAtIndex:indexPath.row]; } else { [secondSectionArray removeObjectAtIndex:indexPath.row]; } // Animate the deletion from the table. [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationBottom]; [tableView reloadData]; } } Does anyone have any idea why this is happening? Thanks

    Read the article

  • Cucumber error under bundler

    - by Senthil
    I've confirmed cucumber tests work fine without bundler, but with bundler they break. This is the error I get: You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.each (NoMethodError) /home/senthil/rails/othersprograms/nmicho-teambox/vendor/plugins/cucumber/lib/cucumber/formatter/console.rb:148:in `record_tag_occurrences' /home/senthil/rails/othersprograms/nmicho-teambox/vendor/plugins/cucumber/lib/cucumber/formatter/pretty.rb:71:in `before_feature_element' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:190:in `__send__' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:190:in `send_to_all' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:188:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:188:in `send_to_all' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:179:in `broadcast' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:50:in `visit_feature_element' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/feature.rb:26:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/feature.rb:25:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/feature.rb:25:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:20:in `visit_feature' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:180:in `broadcast' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:19:in `visit_feature' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:29:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:17:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:17:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:28:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:14:in `visit_features' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:180:in `broadcast' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:13:in `visit_features' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/cli/main.rb:61:in `execute!' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/cli/main.rb:20:in `execute' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/cucumber:8 /var/lib/gems/1.8/bin/cucumber:19:in `load' /var/lib/gems/1.8/bin/cucumber:19 I've searched all over Google and found nothing. Oh, BTW, the above output is from when I was trying to run cucumber tests in the Teambox app. I've downloaded it fresh and it still breaks. But if I use their old versions where they didn't have bundler installed, it works fine. Can anyone help? Thanks

    Read the article

  • How can I do something ~after~ an event has fired in C#?

    - by Siracuse
    I'm using the following project to handle global keyboard and mouse hooking in my C# application. This project is basically a wrapper around the Win API call SetWindowsHookEx using either the WH_MOUSE_LL or WH_KEYBOARD_LL constants. It also manages certain state and generally makes this kind of hooking pretty pain-free. I'm using this for mouse gesture recognition software I'm working on. Basically, I have it set up so it detects when a global hotkey is pressed down (say CTRL), then the user moves the mouse in the shape of a pre-defined gesture and then releases the global hotkey. The event for the KeyDown is processed and tells my program to start recording the mouse locations until it receives the KeyUp event. This is working fine and it allows an easy way for users to enter a mouse-gesture mode. Once the KeyUp event fires and it detects the appropriate gesture, it is supposed to send certain keystrokes to the active window that the user has defined for that particular gesture they just drew. I'm using the SendKeys.Send/SendWait methods to send output to the current window. My problem is this: When the user releases their global hotkey (say CTRL), it fires the KeyUp event. My program takes its recorded mouse points and detects the relevant gesture and attempts to send the correct input via SendKeys. However, because all of this is in the KeyUp event, that global hotkey hasn't finished being processed. So, for example, if I defined a gesture to send the key "A" when it is detected, and my global hotkey is CTRL, when the gesture is detected SendKeys will send "A" while CTRL is still "down". So, instead of just sending A, I'm getting CTRL-A. So, in this example, instead of physically sending the single character "A" it is selecting-all via the CTRL-A shortcut. Even though the user has released CTRL (the global hotkey), it is still being considered down by the system. Once my KeyUp event fires, how can I have my program wait some period of time or for some event so I can be sure that the global hotkey is truly no longer being registered by the system, and only then send the correct input via SendKeys?
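    One possible approach (an assumption on my part, not a known-good fix) is to poll GetAsyncKeyState until the physical CTRL key is actually up before calling SendKeys, ideally from a worker thread or timer so the hook callback itself isn't blocked. A rough sketch:

        using System.Runtime.InteropServices;
        using System.Threading;
        using System.Windows.Forms;

        static class GestureSender
        {
            [DllImport("user32.dll")]
            private static extern short GetAsyncKeyState(int vKey);

            private const int VK_CONTROL = 0x11;

            // Rough sketch: block until the physical CTRL key is released, then send the keys.
            public static void SendWhenModifierReleased(string keys)
            {
                // High-order bit set means the key is currently down
                while ((GetAsyncKeyState(VK_CONTROL) & 0x8000) != 0)
                    Thread.Sleep(10);

                SendKeys.SendWait(keys);
            }
        }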

    Read the article

< Previous Page | 471 472 473 474 475 476 477 478 479 480 481 482  | Next Page >