Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At Red Gate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    28                19.0               8115                 8115
        MDB           114               77.6               1449                 1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data

    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are known only by the developer who instrumented the application in the first place. In other words, only the original developer can interpret the data - product managers cannot interpret the data unaided.

    Most of the data is from uninterested users

    About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, and how many JUST HAPPENED to use the MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature

    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at Red Gate, we would eliminate the most important and most complex features - the ones people actually buy the software for - leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"

    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless to collect it.
    People get obsessed with significance levels

    The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other sources of error and misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity

    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data

    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    13                9.0                1115                 1115
        MDB           5                 4.2                449                  449

    Trial users:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    15                10.0               7000                 7000
        MDB           114               77.6               1000                 1000

    How do you interpret this data? It's one of:

    - Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
    - Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    - People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    - We're marketing the tool wrong. The large number of MDB users represent uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage

    Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also more likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:

    - Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    - Minor optimizations. Is the text-box big enough for average user input?
    - Performance data. How long does our app take to start? How many databases does the average user have on their server?

    As you can see, questions about who-the-user-is, rather than what-the-user-does, are easier to answer and action.

    Conclusion

    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
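    For illustration, the kind of 'established user' segmentation described above might look something like the following. This is a hypothetical sketch only: the record shape, the field names, and the thresholds (10+ sessions or a paid licence) are all invented for the example, not Red Gate's actual criteria.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical shape of one collected metrics record.
        class UsageRecord
        {
            public string UserId;
            public string StorageOption;   // "SQL Server" or "MDB"
            public int Sessions;
            public bool HasPaidLicence;
        }

        static class Segmentation
        {
            // Counts distinct 'established' users per storage option, where
            // 'established' is (arbitrarily) 10+ sessions or a paid licence.
            public static void Report(IEnumerable<UsageRecord> records)
            {
                var established = records
                    .Where(r => r.Sessions >= 10 || r.HasPaidLicence)
                    .GroupBy(r => r.StorageOption)
                    .Select(g => new
                    {
                        Option = g.Key,
                        Users = g.Select(r => r.UserId).Distinct().Count()
                    });

                foreach (var row in established)
                    Console.WriteLine(string.Format("{0}: {1} established users", row.Option, row.Users));
            }
        }

    Of course, as the post argues, the hard part is not writing the query but deciding whether the criteria behind it mean anything.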

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience - and how to publish once-written information in both blogs. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic.

    My two blogs

    I have two working blogs:

    - this one here
    - a technology and programming blog for the local market

    My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career, and they have very good web statistics too.

    My local blog

    My local blog is about programming, web and technology. It has a far wider target audience than this blog here has. For example, in my local blog I also blog about local events, cool new concept phones, different sites providing interesting services, etc. But local guys can also find there my postings about how to solve one or another programming problem, and postings about Microsoft technologies I am playing with. So far my local blog has a lot of readers for such a small country as Estonia. This blog has brought me a lot of cool contacts, and I have had a lot of interesting discussions there about different technical topics.

    Why I started this blog

    Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to target more than one technical topic to find enough readers. At the same time you are still interested in your main topics, and you want to reach more people who share the same interests as you. Practically, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with? Every second of my time I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.

    Defining target audiences

    One thing you should always do when you have more than one blog is define your target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it the simple way and just as effectively. Here is how I defined the target audiences of my blogs:

    - local blog - the reader of my local blog is an IT professional, software developer, technology innovator or just someone who is interested in technology;
    - this blog - the reader of this blog is an experienced professional software developer who works on Microsoft technologies, or a software developer who is open-minded and open to new technologies and interesting solutions to development problems.

    You can see how the local blog - due to a small market with fewer people - has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and especially at software development. On the practical side these decisions have also worked out well, I think, because it is very hard to build up a popular general-IT blog. On the global level it is better to target some specific niche and find readers who are professionals in your favorite topics.
    Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.

    Publishing content to different blogs

    My local blog and this blog have some overlapping topics, like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish the same thing on my local blog too? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences then you may need to modify your posting before publishing it. The questions you have to answer are:

    - Is the target audience interested in this topic?
    - Is the target audience expecting a more specific and deeper handling of this topic, or are they expecting a more general handling of the topic?
    - Is the problem you are discussing relevant for the target audience or not?

    You have to answer these questions and after that make your decision. If you need to modify your original posting, then take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

    Cross-posting and referencing

    It is tempting to save the time that preparing a blog post takes: when you are done with a posting on one blog, it may seem like a good idea to make a short posting on the other blog and add a reference to the first one, where the topic is discussed at more length. Well, don't do it - all your readers expect good-quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific handling of a topic. In this case, feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, the Estonian language is a complex one, and online translation tools make very poor translations from Estonian. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits both blogs well, I will write my posting on one blog and then answer the previous three questions before posting the same thing on the other blog.

    Conclusion

    Growing out of your local market is not anything mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding these new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high-quality content to your audience. It is something you will learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, then is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time, you will get over all the obstacles pretty soon. Just don't forget one thing - content is king, and your readers expect high quality from you.

    Read the article

  • How to begin? Windows 8 Development

    - by Dennis Vroegop
    Ok. I convinced you in my last post to do some Win8 development. You want a piece of that cake, or whatever your reasons may be. Good! Welcome to the club! Now let me ask you a question: what are you going to write?

    Ah. That's the big one, isn't it? What indeed? If you have been creating applications for computers before, you're in for quite a shock. The way people perceive apps on a tablet is quite different from what we know as applications. There's a reason we call them apps instead of applications! Yes, technically they are applications, but we don't call them apps only because it sounds cool. The abbreviated form of the word applications is itself a pointer. Apps are small. Apps are focused. Apps are more lightweight. Apps do one thing, but they do that one thing extremely well.

    In the 'old' days we wrote huge systems. We built ecosystems of services, screens, databases and more to create a system that provides value for the user. Think about it: what application do you use most at work? Can you, in one sentence, describe what it is or what it does, and yet still distinctively describe its purpose? I doubt you can. Let's have a look at Outlook. We all know it, and we all love or hate it. But what is it? A mail program? No, there's so much more there: calendar, contacts, RSS feeds and so on. Some call it a 'collaboration' application, but that's not really true either. After all, why should a collaboration application give me my schedule for the day? I think the best way to describe Outlook is "client for Exchange", although that isn't accurate either. Anyway: Outlook is a great application, but it's not an 'app', and is therefore not very suitable for WinRT.

    Ok. Disclaimer here: yes, you can write big applications for WinRT. Some will. But that's not what 99.9% of the developers will do. So I am stating here that big applications are not meant for WinRT. If 0.01% of the developers think that this is nonsense, then they are welcome to go ahead, but for the majority here this is not what we're talking about. So: apps are small, lightweight and good at what they do, but only at that. If you're a phone developer you already know that: phone apps on any platform fit the description I gave above.

    If you've ever worked in a large corporation before, you might have seen one of these: the Mission Statement. It's supposed to be a one-liner that sums up what the company is supposed to do. Funny enough: although this doesn't work for large companies, it does work for defining your app. A mission statement for an app describes what it does. If it doesn't fit in the mission statement, then your app is going to get too big and will fail. A statement like this should be in the following style: "<your app name> is the best app to <describe single task>".

    Fill in the blanks, write it and go! Mmm… not really. There are some things there we need to think about. But the statement is a very, very important one. If you cannot fit your app in that line, you're preparing to fail. Your app will become too big, its purpose will be unclear, and it will be hard to use. People won't download it, and those who do will give it a bad rating, preventing that huge success you've been dreaming about. Stick to the statement!

    Ok, let's give it a try: "PlanesAreCool" is the best app to do planespotting in the field. You might have seen these people along the runways of airports: taking photographs of airplanes and noting down their numbers and arrival and departure times. We are going to help them out with our great app!
    If you look at the statement, can you guess what it does? I bet you can. If you find out it isn't clear enough, or if it's too broad, refine it. This is probably the most important step in the development of your app, so give it enough time!

    So. We've got the statement. Print it out, stick it to the wall and look at it. What does it tell you? If you see this, what do you think the app does? Write that down. Sit down with some friends and talk about it. What do they expect from an app like this? Write that down as well. Brainstorm. Make a list of features. This is mine:

    - Note planes
    - Look up aircraft carriers
    - Add pictures of that plane
    - Look up airfields
    - Notify friends of new spots
    - Look up details of a type of plane
    - Plot a graph with arrival and departure times
    - Share new spots on social media
    - Look up history of a particular aircraft
    - Compare your spots with friends
    - Write down arrival times
    - Write down departure times
    - Write down wind conditions
    - Write down the runway they take
    - Look up weather conditions for next spotting day
    - Invite friends to join you for a day of spotting

    Now, I must make it clear that I am not a planespotter, nor do I know what one does. So if the above list makes no sense, I apologize. There is a lesson: write apps for stuff you know about….

    First of all, let's look at our statement and then go through the list of features. Remove everything that has nothing to do with that statement! If you end up with an empty list, try again with both steps.

    - Note planes
    - Look up aircraft carriers
    - Add pictures of that plane
    - Notify friends of new spots
    - Look up details of a type of plane
    - Plot a graph with arrival and departure times
    - Share new spots on social media
    - Look up history of a particular aircraft
    - Compare your spots with friends
    - Write down arrival times
    - Write down departure times
    - Write down wind conditions
    - Write down the runway they take

    That's better. The things I removed could be pretty useful to a planespotter and could be fun to write. But do they match the statement? I said that the app is for spotting in the field, so "Look up airfields" doesn't belong there: I know where I am, so why look it up? And the same goes for inviting friends, or looking up the weather conditions for tomorrow. I am at the airfield right now, looking through my binoculars at the planes. I know the weather now, and I don't care about tomorrow. If you feel the items you've crossed out are valuable, then why not write another app? One that says: "SpotNoter" is the best app for preparing a day of spotting with my friends. That's a different app! Remember: Win8 apps are small and very good at doing ONE thing, and one thing only!

    If you have made that list, it's time to prepare the navigation of your app. The navigation is how users see your app and how they use it. We'll do that next time!

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today's CIO job description: "Align IT infrastructure and solutions with business goals and objectives; AND while doing so, reduce costs; BUT ALSO be innovative, and ensure the architectures are adaptable and agile, as we need to act today on the changes that we may request tomorrow."

    Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large insurance company. This is a role that we see all too infrequently in many of our customers, and it's a shame. The purpose of this role was to build a bridge - a relationship - between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and, more importantly, that objectives were commonly understood; hence services and projects were delivered on time, on budget, and actually solved the business problems.

    In reality, IT and the business are already married, but the relationship is most often defined as 'supplier' of IT rather than 'trusted partner'. To deliver business value, they need to understand how to work together effectively to attain this next level of partnership. The business cannot compete if they do not get a new product to market ahead of the competition or, for example, act in a timely manner to address a new industry problem such as a legislative change. An even better example is when an application or service fails and the business takes a hit from bad publicity, becoming a trending topic on social media and losing direct revenue from online channels. For this reason alone, business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester's recent study, which found that 'many IT respondents consider themselves to be trusted partners of the business, but their efforts are impaired by the inadequacy of tools and organizations'.

    IT, Meet the Business; Business, Meet IT

    So what is going on? We talk about aligning the business with IT, but the reality is that it's difficult to do. Like any relationship, each side has different goals and needs, and language can be a barrier: business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way? Well, now we can, with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2!

    Enterprise Manager's Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria. No longer does IT need to be focused solely on speeds and feeds, performance and throughput - now IT can understand IT's impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar. Similarly, the line of business can now understand which IT services are most critical for the KPIs they care about. There is a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won't rehash them here. What I want to talk about is what you do with these products. What's next after we meet? Where do you start?

    Step 1: Identify the Signature Experience.
    This is THE business process (or set of processes) that is core to the business: the one that drives the economic engine, the process a customer recognises the company brand for - the reputation, the customer experience - the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, the rest gets easy, as Enterprise Manager 12c makes it easy.

    Step 2: Map the Signature Experience to the underlying IT.

    Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end; think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business to define meaningful key performance indicators (KPIs) and thresholds to enable you to report and act upon them.

    Step 3: Observe the data over time.

    You now have meaningful insight into every step enabling your signature experience, and you understand the implication of that experience on your underlying IT. Watch it for a few months, see what happens, then reconvene with your business stakeholders and set clear and measurable targets which can re-define service levels.

    Step 4: Change the information about which you and the business communicate.

    It's amazing what happens when you and the business speak the same language. You'll be able to make more informed business and IT decisions. From here, IT can identify where and how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification, etc. IT SLAs no longer need to be focused on metrics such as % availability, but can be structured around business process requirements.

    The power of this way of thinking doesn't end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take it a step further and appraise your staff on the business competencies that are linked to the service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify, as the 'cost per day of outage' can be more easily calculated; the business will be able to translate the cost to the business into the cost/value of the underlying IT that supports it.

    Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT, delivering ultimate service and satisfaction. Just engage with the business up front, make the signature experience visible, and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned, to enable you to implement this new way of working.

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl+, (yes, that's Control plus the comma key). This pops up the Navigate To dialog. As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy, or at least substring, matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40), and we wanted to provide this functionality too. This post provides some notes on the things I discovered, mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of the Visual COBOL projects in the solution.

    First of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx. It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider.

    Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET Framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF, you could do worse than look at some of the editor examples on the VSX page: http://archive.msdn.microsoft.com/vsx. I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.

    The Navigate To interfaces

    Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces:

    INavigateToItemProviderFactory
    This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute:

        [Export(typeof(INavigateToItemProviderFactory))]

    INavigateToItemProvider
    Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.

    INavigateToItemDisplayFactory
    Your INavigateToItemProvider hands back NavigateToItems to the Navigate To framework.
    But to give you good control over what appears in the Navigate To dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.

    INavigateToItemDisplay
    Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.

    Carrying out the search

    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier, because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE - so you need to be aware of that if you want to interact with editors or other parts of the IDE UI.

    When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. Here's what happens next: your INavigateToItemProvider will get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request - you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message - that's your cue to abandon any uncompleted searches and dispose of any resources you might be using as part of your search.

    While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement for how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.

    Displaying the results

    The Navigate To framework invokes INavigateToItemDisplayFactory.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement a NavigateTo() method to display a symbol in the editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.
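    Before looking at the item display properties in more detail, here is a minimal consolidated sketch of the search-side plumbing described above. Treat it as illustrative only: the SymbolStore type is an invented stand-in for your own symbol dictionary, and you should verify the exact NavigateToItem constructor overload and MatchKind values against the MSDN reference, as I am writing them from memory.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.Threading;
        using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

        [Export(typeof(INavigateToItemProviderFactory))]
        public class CobolNavigateToProviderFactory : INavigateToItemProviderFactory
        {
            public bool TryCreateNavigateToItemProvider(IServiceProvider serviceProvider,
                                                        out INavigateToItemProvider provider)
            {
                provider = new CobolNavigateToProvider();
                return true;
            }
        }

        public class CobolNavigateToProvider : INavigateToItemProvider
        {
            private readonly INavigateToItemDisplayFactory _displayFactory = new CobolItemDisplayFactory();
            private CancellationTokenSource _cancellation;

            public void StartSearch(INavigateToCallback callback, string searchValue)
            {
                StopSearch(); // a new keystroke means the previous search is stale
                _cancellation = new CancellationTokenSource();
                CancellationToken token = _cancellation.Token;

                // Return immediately; do the real matching on a worker thread.
                ThreadPool.QueueUserWorkItem(state =>
                {
                    IList<string> symbols = SymbolStore.AllSymbolNames(); // invented stand-in
                    for (int i = 0; i < symbols.Count; i++)
                    {
                        if (token.IsCancellationRequested)
                            return; // abandoned: a newer StartSearch() has arrived

                        if (symbols[i].IndexOf(searchValue, StringComparison.OrdinalIgnoreCase) >= 0)
                        {
                            // Check the exact constructor signature against MSDN; the
                            // tag is where you stash lookup information for the display.
                            callback.AddItem(new NavigateToItem(
                                symbols[i], "COBOL symbol", "COBOL", symbols[i],
                                null, MatchKind.Regular, _displayFactory));
                        }

                        if (i % 16 == 0)
                            callback.ReportProgress(i, symbols.Count);
                    }
                    callback.Done();
                });
            }

            public void StopSearch()
            {
                if (_cancellation != null) _cancellation.Cancel();
            }

            public void Dispose()
            {
                StopSearch();
            }
        }

        public class CobolItemDisplayFactory : INavigateToItemDisplayFactory
        {
            public INavigateToItemDisplay CreateItemDisplay(NavigateToItem item)
            {
                // Build the display object from whatever you stashed in item.Tag.
                throw new NotImplementedException("Display side discussed below.");
            }
        }

        // Invented stand-in for the provider's own symbol dictionary.
        public static class SymbolStore
        {
            public static IList<string> AllSymbolNames() { return new List<string>(); }
        }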
    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:

    Additional information
    The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". For example, for a symbol textBookAuthor defined in the type Book_WebRole.Default, Book_WebRole.Default is the additional information: the namespace-qualified type name the symbol appears in. For procedural COBOL code, we display the Program Id as the additional information.

    DescriptionItems
    You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the categories on the left and the descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).

    Summary

    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft has added to Visual Studio, but at the moment there still don't seem to be any examples of how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.

    Read the article

  • SSIS - Range lookups

    - by Repieter
    When developing an ETL solution in SSIS, we sometimes need to do range lookups. Several solutions for this can be found on the internet, but now we have built another solution which I would like to share, since it's pretty easy to implement and the performance is fast.

    You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed. (Apologies for the layout of this blog, I don't do this too often :))

    To give a little bit more information about the example, this is basically what it does: we load a fact table and do an SCD type 2 lookup operation on the Product dimension. This is done with a script component.

    First we query the data warehouse to create the lookup dataset. The query that is used for that is:

        SELECT
            [ProductKey]
            ,[ProductAlternateKey]
            ,[StartDate]
            ,ISNULL([EndDate], '9999-01-01') AS EndDate
        FROM [DimProduct]

    The output of this query is stored in a DataTable:

        string lookupQuery = @"
            SELECT
                [ProductKey]
                ,[ProductAlternateKey]
                ,[StartDate]
                ,ISNULL([EndDate], '9999-01-01') AS EndDate
            FROM [DimProduct]";

        OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection);
        OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand);

        _dataTable = new DataTable();
        adapter.Fill(_dataTable);

    Now that the dimension data is stored in the DataTable, we use the following method to do the actual lookup:

        public int RangeLookup(string businessKey, DateTime lookupDate)
        {
            // set default return value (Unknown)
            int result = -1;

            DataRow[] filteredRows;
            filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey));

            for (int i = 0; i < filteredRows.Length; i++)
            {
                // check if the lookup date is found between the start date and end date of any of the records
                if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
                {
                    // ADO.NET surfaces database NULLs as DBNull.Value, not null
                    result = (filteredRows[i][0] == DBNull.Value) ? -1 : (int)filteredRows[i][0];
                    break;
                }
            }

            filteredRows = null;

            return result;
        }

    This method is executed for every row that passes the script component. This is implemented in the ProcessInputRow method:

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Perform the lookup operation on the current row and put the value in the surrogate key attribute
            Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate);
        }

    Now what actually happens?!

    1. Every record passes the business key and the order date to the RangeLookup method.
    2. The DataTable is then filtered on the business key of the current record. The output is stored in a DataRow[] object.
    3. We loop over the DataRow[] object to see where the order date meets the following expression: (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
    4. When the expression returns true (so where the date is between the StartDate and the EndDate), the surrogate key of the dimension record is returned.

    We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.
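    One possible refinement, sketched here rather than taken from the sample package: if the dimension contains many rows, calling _dataTable.Select() for every fact row re-scans the whole table. Grouping the rows once into a Dictionary keyed on the business key (for instance right after adapter.Fill) turns each lookup into a hash probe over only that key's versions:

        // Inside the script component class (System, System.Collections.Generic
        // and System.Data are already referenced there). Hypothetical refinement.
        private Dictionary<string, List<DataRow>> _rowsByKey;

        private void BuildIndex()
        {
            _rowsByKey = new Dictionary<string, List<DataRow>>();
            foreach (DataRow row in _dataTable.Rows)
            {
                string key = (string)row["ProductAlternateKey"];
                List<DataRow> versions;
                if (!_rowsByKey.TryGetValue(key, out versions))
                {
                    versions = new List<DataRow>();
                    _rowsByKey.Add(key, versions);
                }
                versions.Add(row);
            }
        }

        public int RangeLookup(string businessKey, DateTime lookupDate)
        {
            List<DataRow> versions;
            if (_rowsByKey.TryGetValue(businessKey, out versions))
            {
                foreach (DataRow row in versions)
                {
                    // Same half-open interval check as above: StartDate <= date < EndDate
                    if (lookupDate >= (DateTime)row["StartDate"] && lookupDate < (DateTime)row["EndDate"])
                        return (row["ProductKey"] == DBNull.Value) ? -1 : (int)row["ProductKey"];
                }
            }
            return -1; // Unknown member
        }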

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select "Include Prereleases" in the NuGet package manager, or execute the following from the package manager console to install it:

        PM> Install-Package EntityFramework -Pre

    This week's alpha release includes a bunch of great improvements in the following areas:

    - Async language support is now available for queries and updates when running on .NET 4.5.
    - Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
    - Multi-tenant migrations allow the same database to be used by multiple contexts, with full Code First Migrations support for independently evolving the model backing each context.
    - Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
    - All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
    - Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

    Below are some additional details about a few of the improvements above:

    Async Support

    .NET 4.5 introduced the Task-Based Asynchronous Pattern, which uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications, as database calls made through EF can now be processed asynchronously - avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

        public class HomeController : Controller
        {
            LocationContext db = new LocationContext();

            public async Task<ActionResult> Index()
            {
                var locations = await db.Locations.ToListAsync();

                return View(locations);
            }
        }

    Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code, it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also, a walkthrough is available showing how to use async in an ASP.NET MVC application.

    Custom Conventions

    When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with "ID" and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in "Key" instead of "Id".
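    To make that "Key" example concrete, here is a sketch of what such a convention could look like with the custom conventions API. This is based on the shape the API took in EF6; the details were still evolving at alpha 2, so treat the exact calls as indicative, and note that the Product class is just an invented example:

        using System.Data.Entity;

        public class Product
        {
            public int ProductKey { get; set; }   // picked up as the primary key by the convention below
            public string Name { get; set; }
        }

        public class StoreContext : DbContext
        {
            public DbSet<Product> Products { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Hypothetical rule: any property whose name ends in "Key"
                // is configured as the primary key of its entity.
                modelBuilder.Properties()
                            .Where(p => p.Name.EndsWith("Key"))
                            .Configure(p => p.IsKey());
            }
        }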
    Custom conventions allow the default conventions to be overridden, or new conventions to be added, so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method, which is overridden on your derived DbContext class:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Properties<decimal>()
                .Configure(x => x.HasPrecision(5));
        }

    But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property, simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom Code First conventions is available here.

    Community Involvement

    I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions, and these are included in EF6 alpha 2. Two examples of community contributions are:

    - AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.
    - UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Configurations
                .AddFromAssembly(typeof(LocationContext).Assembly);
        }

    This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and the Code First configuration classes, and is also a very convenient shortcut for large models.

    Other upcoming features coming in EF 6

    Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures, common in cloud environments and with databases such as Windows Azure SQL Database. Another often-requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

    Summary

    EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1.

    Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema, depending upon the credentials and authorization of the user. This helps in prototyping your application, where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. This is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature allows your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to occur into DDL scripts, which can then be further tuned by a DBA.

    The following set of properties in persistence.xml, or specified during EntityManagerFactory creation, controls the behaviour of schema generation:

    javax.persistence.schema-generation-action
        Purpose: Controls the action to be taken by the persistence provider.
        Values: "none", "create", "drop-and-create", "drop"

    javax.persistence.schema-generation-target
        Purpose: Controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both.
        Values: "database", "scripts", "database-and-scripts"

    javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target
        Purpose: Controls target locations for the writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated.
        Values: java.io.Writer (e.g. MyWriter.class) or URL strings

    javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source
        Purpose: Specifies locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider.
        Values: java.io.Reader (e.g. MyReader.class) or URL strings

    javax.persistence.sql-load-script-source
        Purpose: Specifies the location of a SQL bulk load script.
        Values: java.io.Reader (e.g. MyReader.class) or URL string

    javax.persistence.schema-generation-connection
        Purpose: JDBC connection to be used for schema generation.

    javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version
        Purpose: Needed if scripts are to be generated and there is no connection to the target database. Values are those obtained from JDBC DatabaseMetaData.

    javax.persistence.create-database-schemas
        Purpose: Whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc.
        Values: "true", "false"

    Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. The following entity class:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            . . .
            @ManyToOne
            private Department dept;
        }

    is generated in the database with the following attributes:

    - It maps to the EMPLOYEE table in the default schema.
    - The "id" field is mapped to the ID column as the primary key.
    - "name" is mapped to the NAME column with a default of VARCHAR(255). The length of this field can be easily tuned using @Column.
    - @ManyToOne is mapped to the DEPT_ID foreign key column. This can be customized using @JoinColumn.

    In addition to these properties, a couple of new annotations are added in JPA 2.1:

    @Index - An index for the primary key is generated by default in a database. This new annotation allows you to define additional indexes, over a single column or multiple columns, for better performance. It is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example:

        @Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})
        @Entity
        public class Employee {
            . . .
        }

    The generated table will have a default index on the primary key. In addition, two new indexes are defined: one on the NAME column (default ascending) and one on the foreign key that maps to the department, in descending order.

    @ForeignKey - This is used to define a foreign key constraint, or to otherwise override or disable the persistence provider's default foreign key definition. It can be specified as part of @JoinColumn(s), @MapKeyJoinColumn(s), and @PrimaryKeyJoinColumn(s). For example:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            @ManyToOne
            @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))
            private Manager manager;
            . . .
        }

    In this entity, the employee's manager is mapped by the MANAGER_ID column, which references the MANAGER table. The value of foreignKeyDefinition is a database-specific string.

    A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. The JPA 2.1 Expert Group has released Early Draft 2 of the specification. Sections 9.4 and 11.2 provide all the details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.

    Read the article

  • Setting up Rails to work with sqlserver

    - by FortunateDuke
    Ok, I followed the steps for setting up Ruby and Rails on my Vista machine and I am having a problem connecting to the database.

    Contents of database.yml:

        development:
          adapter: sqlserver
          database: APPS_SETUP
          Host: WindowsVT06\SQLEXPRESS
          Username: se
          Password: paswd

    Run rake db:migrate from the myapp directory:

        rake aborted!
        no such file to load -- deprecated

    ADO: I have dbi 0.4.0 installed and have created the ADO folder in C:\Ruby\lib\ruby\site_ruby\1.8\DBD\ADO. I got the ado.rb from dbi 0.2.2. What else should I be looking at to fix the issue connecting to the database? Please don't tell me to use MySql or Sqlite or Postgres.

    UPDATE: I have installed the activerecord-sqlserver-adapter gem from --source=http://gems.rubyonrails.org. Still not working. I have verified that I can connect to the database by logging into SQL Management Studio with the credentials.

    rake db:migrate --trace:

        PS C:\Inetpub\wwwroot\myapp> rake db:migrate --trace
        (in C:/Inetpub/wwwroot/myapp)
        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        no such file to load -- deprecated
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:355:in `new_constants_in'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/site_ruby/1.8/dbi.rb:48
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:355:in `new_constants_in'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/requires.rb:7:in `require_library_or_gem'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/requires.rb:5:in `require_library_or_gem'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-sqlserver-adapter-1.0.0.9250/lib/active_record/connection_adapters/sqlserver_adapter.rb:29:in `sqlserver_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `send'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `connection='
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:260:in `retrieve_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:78:in `connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:408:in `initialize'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:373:in `new'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:373:in `up'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:356:in `migrate'
        C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.1.1/lib/tasks/databases.rake:99
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:621:in `call'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:621:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:616:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:616:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:582:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:575:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:568:in `invoke'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2031:in `invoke_task'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2048:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2003:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:1982:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2048:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:1979:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/bin/rake:31
        C:/Ruby/bin/rake:19:in `load'
        C:/Ruby/bin/rake:19
        PS C:\Inetpub\wwwroot\myapp>

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
    I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration. On the local machine: The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5. The site has been tested successfully, using both IIS and the VS Web Development Server. The IIS site config has all authentication methods disabled except Windows Authentication. The local machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account. All browsers tested also work using an opaque local IP address - so the browsers themselves don't seem to care whether the site appears "local" or "remote". I've added a display line to the web page which shows the currently-logged-in user and it shows exactly what I would expect (whichever local user I logged in with). On the remote machine: The server is running Windows Server 2008 R2, IIS 7.5. Loading the web page results in an immediate 401.2 error: You are not authorized to view this page due to invalid authentication headers. No challenge prompt ever appears. The IIS site config has all authentication methods disabled except Windows Authentication. The remote machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address. If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401. No subcode, with the text: Access is denied due to invalid credentials. The Windows Authentication IIS role feature is installed. The WindowsAuthentication Module is added (at the Server level). The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication. The site does load if I turn off Windows Authentication and enable Anonymous (obviously). I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors and the results don't seem to tell me anything useful, all I see is the following warning: MODULE_SET_RESPONSE_ERROR_STATUS ModuleName IIS Web Core Notification 2 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 2 ErrorCode 2147942405 ConfigExceptionInfo Notification AUTHENTICATE_REQUEST ErrorCode Access is denied. (0x80070005) ...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials). 
The trace does indicate that the WindowsAuthentication module is correctly loaded because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events - [un]fortunately, no interesting errors or warnings here). Can anyone tell me what I might be missing here? Quick Update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is: Localhost: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:42:34 GMT Content-Length: 6399 Proxy-Support: Session-Based-Authentication Remote: HTTP/1.1 401 Unauthorized Content-Type: text/html Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:43:13 GMT Content-Length: 1293 Aside from a few seemingly-inconsequential differences like cache-control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So, I guess that narrows the question down to: Why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
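    Given that the server never emits the WWW-Authenticate headers, one low-level thing worth confirming is that the providers list actually made it into applicationHost.config at the level the site reads it from. A sketch using appcmd, where "Default Web Site" is a placeholder for the real site name:

      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication
      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /enabled:"True" /commit:apphost
      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /+"providers.[value='Negotiate']" /commit:apphost
      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /+"providers.[value='NTLM']" /commit:apphost

    If the list command already shows enabled="true" with both providers and the headers still never appear, the usual remaining suspects are a feature-delegation lock or an overriding web.config further down the tree.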

    Read the article

  • apache tomcat sermyadmin deployment error

    - by lepricon123
    I am getting the following error when I try to start the application: Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log INFO: Set web app root system property: 'sermyadmin-production-2.0.1' = [/usr/local/tomcat6/webapps/sermyadmin/] Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log INFO: Initializing log4j from [/usr/local/tomcat6/webapps/sermyadmin/WEB-INF/classes/log4j.properties] Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log INFO: Initializing Spring root WebApplicationContext Mar 23, 2010 7:51:23 PM org.apache.catalina.core.StandardContext listenerStart SEVERE: Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener org.springframework.beans.factory.access.BootstrapException: Error executing bootstraps; nested exception is org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query at org.codehaus.groovy.grails.web.context.GrailsContextLoader.createWebApplicationContext(GrailsContextLoader.java:66) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829) at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053) at org.apache.catalina.core.StandardHost.start(StandardHost.java:719) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:578) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Caused by: org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query at
org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829) at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053) at org.apache.catalina.core.StandardHost.start(StandardHost.java:719) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:578) ... 2 more Caused by: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query at BootStrap$_closure1.doCall(BootStrap.groovy:7) ... 20 more Caused by: org.hibernate.exception.SQLGrammarException: could not execute query ... 21 more Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'opensips.jsec_role' doesn't exist at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
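    The root cause at the bottom of the chain is concrete: MySQL reports that the table opensips.jsec_role does not exist, which usually means serMyAdmin's schema was never loaded into the opensips database. A hedged first check from the MySQL client; credentials and the script name are placeholders, so substitute whatever schema script your serMyAdmin release ships:

      mysql -u root -p -e "SHOW TABLES FROM opensips LIKE 'jsec%';"
      mysql -u root -p opensips < sermyadmin-schema.sql   # hypothetical file name; load the schema the package provides

    Once jsec_role and its sibling tables exist, the Grails bootstrap query at BootStrap.groovy line 7 should get past this point.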

    Read the article

  • Uninitialized constant Encoding with sqlite3-ruby on windows

    - by Ben Scheirman
    On a new machine, installed ruby with the 1-click installer for windows. Installed rails 2.3.2 and all associated gems, then I installed the sqlite3 binaries (into the c:\ruby\bin folder). Lastly I did gem install sqlite3-ruby -v=1.2.3 (which is apparently the latest version that works with windows) This error happens when I run rake db:migrate or when any ActiveRecord object is touched at runtime. The error looks like this: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate rake aborted! **uninitialized constant Encoding** <---- Any help resolving this error would be greatly appreciated! Trace: C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:443:in `load_missing_constant' C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:80:in `const_missing' C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:92:in `const_missing' C:/Ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.3/lib/sqlite3/encoding.rb:9:in `find' C:/Ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.3/lib/sqlite3/database.rb:69:in `initialize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:435:in `initialize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:400:in `new' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:400:in `up' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:383:in `migrate' C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/databases.rake:116 C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' 
C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 C:/Ruby/bin/rake:19:in `load' C:/Ruby/bin/rake:19
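    The giveaway is buried in the trace: the failing frames come from gems/sqlite3-0.0.3, not from the freshly installed sqlite3-ruby 1.2.3, so a stub gem named plain "sqlite3" is shadowing the real driver (and its encoding.rb references the Ruby 1.9 Encoding constant, which does not exist on 1.8). A sketch of the cleanup; the config.gem line follows Rails 2.3 conventions and the version pin mirrors the question:

      gem uninstall sqlite3        # remove the 0.0.3 stub that is being loaded instead
      gem list sqlite3-ruby        # confirm 1.2.3 is still present

    and in config/environment.rb, pin the driver explicitly:

      config.gem "sqlite3-ruby", :lib => "sqlite3", :version => "1.2.3"

    After that, rake db:migrate should load the C-based driver and the Encoding lookup inside the stub gem never runs.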

    Read the article

  • Apache Tomcat Server failure

    - by Kenneth Ordona
    I'm trying to set up Apache Tomcat 6 with SSL, and once I edited the server.xml file to include the following definitions the server started to fail as soon as I hit startup.bat:

      <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
      <Connector protocol="org.apache.coyote.http11.Http11Protocol"
                 port="8445" maxThreads="200"
                 scheme="https" secure="true" SSLEnabled="true"
                 keystoreFile="${user.home}/.tomcat" keystorePass="pnnlpw"
                 clientAuth="false" sslProtocol="TLS"/>

    The logs that I have are as follows: Jul 05, 2012 1:52:15 PM org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files\Java\jdk1.7.0_05\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;. Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup. org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina load WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException;
systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup. org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.start(Catalina.java:582) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Jul 05, 2012 1:52:15 PM 
org.apache.catalina.startup.Catalina load WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.start(Catalina.java:582) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina start SEVERE: Cannot start server. Server instance is not configured. Does anyone have an idea why this is happening? I believe it has to do with the configuration of my connector. I'm pretty new to this so any help would be much appreciated.
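    The stack trace is all downstream noise; the actionable part is the SAXParseException pointing at conf/server.xml line 91, column 2. A comment opened as <-- instead of <!--, or a Connector tag missing its closing >, produces exactly this "well-formed character data or markup" complaint. A quick way to pin it down without restarting Tomcat each time, assuming a libxml2 install is available on the box:

      xmllint --noout C:\tomcat6\conf\server.xml

    xmllint prints the first malformed line and column, so the comment or attribute quoting at line 91 can be fixed and the file re-checked until it parses cleanly. The APR warning at the top of the log is unrelated and harmless.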

    Read the article

  • Rails Rake Error with XAMPP mysql database

    - by edu222
    I have installed XAMPP on my Win7 machine and I have the Apache server/MySQL running on there. I set up Rails to work with XAMPP as described here: XAMPP and RAILS. This tutorial advises you to add this code to the XAMPP httpd.conf:

      Listen 3000
      LoadModule rewrite_module modules/mod_rewrite.so
      #################################
      # RUBY SETUP
      #################################
      <VirtualHost *:3000>
        ServerName rails
        DocumentRoot "c:/xampp/htdocs/FirstProject/public"
        <Directory "c:/xampp/htdocs/FirstProject/public/">
          Options ExecCGI FollowSymLinks
          AllowOverride all
          Allow from all
          Order allow,deny
          AddHandler cgi-script .cgi
          AddHandler fastcgi-script .fcgi
        </Directory>
      </VirtualHost>
      #################################
      # RUBY SETUP
      #################################

    XAMPP runs on the default localhost and MySQL remains unchanged, without a password. I created a Rails app with a MySQL database like this:

      rails -d mysql C:/xampp/htdocs/FirstProject

    Then I started ruby script/server from within the FirstProject location. localhost:3000/ shows the classic Rails welcome. I then ran a basic scaffold command:

      ruby script/generate scaffold FirstProject name:string email:string

    When I run the rake db:migrate command I get the following error: C:\xampp\htdocs\FirstProject>rake db:migrate --trace (in C:/xampp/htdocs/FirstProject) ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate rake aborted! undefined method `init' for Mysql:Class C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:70:in `mysql_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate' C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116 C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 C:/Ruby/bin/rake:19:in `load' C:/Ruby/bin/rake:19 Any idea how to fix this? Thanks in advance
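    "undefined method `init' for Mysql:Class" under Rails 2.3 on Windows is the signature of Rails falling back to its bundled pure-Ruby MySQL shim instead of a real native driver. The widely reported two-step remedy, sketched here with paths that assume a default C:\Ruby install:

      gem install mysql      # native mysql gem for Ruby 1.8

    then place a MySQL 5.0.x client library where Ruby can load it, for example:

      copy libmySQL.dll C:\Ruby\bin

    XAMPP bundles a newer MySQL whose libmysql.dll is known not to cooperate with the old mysql gem, so the 5.0.x DLL usually has to come from a separate MySQL 5.0 download. Open a fresh console and re-run rake db:migrate afterwards.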

    Read the article

  • Android custom ListView unable to click on items

    - by MattC
    So I have a custom ListView object. The list items have two textviews stacked on top of each other, plus a horizontal progress bar that I want to remain hidden until I actually do something. To the far right is a checkbox that I only want to display when the user needs to download updates to their database(s). When I disable the checkbox by setting the visibility to Visibility.GONE, I am able to click on the list items. When the checkbox is visible, I am unable to click on anything in the list except the checkboxes. I've done some searching but haven't found anything relevant to my current situation. I found this question but I'm using an overridden ArrayAdapter since I'm using ArrayLists to contain the list of databases internally. Do I just need to get the LinearLayout view and add an onClickListener like Tom did? I'm not sure. Here's the listview row layout XML: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="?android:attr/listPreferredItemHeight" android:padding="6dip"> <LinearLayout android:orientation="vertical" android:layout_width="0dip" android:layout_weight="1" android:layout_height="fill_parent"> <TextView android:id="@+id/UpdateNameText" android:layout_width="wrap_content" android:layout_height="0dip" android:layout_weight="1" android:textSize="18sp" android:gravity="center_vertical" /> <TextView android:layout_width="fill_parent" android:layout_height="0dip" android:layout_weight="1" android:id="@+id/UpdateStatusText" android:singleLine="true" android:ellipsize="marquee" /> <ProgressBar android:id="@+id/UpdateProgress" android:layout_width="fill_parent" android:layout_height="wrap_content" android:indeterminateOnly="false" android:progressDrawable="@android:drawable/progress_horizontal" android:indeterminateDrawable="@android:drawable/progress_indeterminate_horizontal" android:minHeight="10dip" android:maxHeight="10dip" /> </LinearLayout> <CheckBox android:text="" android:id="@+id/UpdateCheckBox" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </LinearLayout> And here's the class that extends the ListActivity. Obviously it's still in development so forgive the things that are missing or might be left laying around: import java.util.List; import android.app.ListActivity; import android.content.Context; import android.os.Bundle; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.CheckBox; import android.widget.ListView; import android.widget.ProgressBar; import android.widget.TextView; import com.xxxx.android.R; import com.xxxx.android.DAO.AccountManager; import com.xxxx.android.model.UpdateItem; public class UpdateActivity extends ListActivity { AccountManager lookupDb; boolean allSelected; UpdateListAdapter list; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); lookupDb = new AccountManager(this); lookupDb.loadUpdates(); setContentView(R.layout.update); allSelected = false; list = new UpdateListAdapter(this, R.layout.update_row, lookupDb.getUpdateItems()); setListAdapter(list);
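    The symptom described (rows unclickable whenever the CheckBox is visible) is the classic focus-stealing behaviour: a focusable child inside a list row swallows the item clicks. Two attribute changes usually fix it without any extra click listeners; this is a sketch against the row layout quoted above:

      <!-- on the row's root LinearLayout: keep focus on the list item itself -->
      android:descendantFocusability="blocksDescendants"

      <!-- and/or on the CheckBox: stop it from grabbing focus while leaving it clickable -->
      <CheckBox
          android:id="@+id/UpdateCheckBox"
          android:layout_width="wrap_content"
          android:layout_height="wrap_content"
          android:focusable="false"
          android:focusableInTouchMode="false" />

    With that in place, onListItemClick fires for row taps while the CheckBox's own listener still fires for checkbox taps.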

    Read the article

  • migrating simple rails database to mysql

    - by joseph-misiti
    I am interested in creating a Rails app with a MySQL database. I am new to Rails and am just trying to start with something simple:

      rails -d mysql MyMoviesSQL
      cd MyMoviesSQL
      script/generate scaffold Movies title:string rating:integer
      rake db:migrate

    I am seeing the following error:

      rake aborted! NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'

    If I do a trace: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate rake aborted! NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:599:in `configure_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:594:in `connect' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate' /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116 /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19

    Here are my versions: rails - 2.3.5, ruby - 1.8.6. gem list: *** LOCAL GEMS *** actionmailer (2.3.5, 1.3.6) actionpack (2.3.5, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.5, 1.15.6) activeresource (2.3.5) activesupport (2.3.5, 1.4.4) acts_as_ferret (0.4.1) capistrano (2.0.0) cgi_multipart_eof_fix (2.5.0) daemons (1.0.9) dbi (0.4.3) deprecated (2.0.1) dnssd (0.6.0) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.4) gem_plugin (0.2.3) highline (1.2.9) hpricot (0.6) libxml-ruby (0.9.5, 0.3.8.4) mongrel (1.1.4) needle (1.3.0) net-sftp (1.1.0) net-ssh (1.1.2) rack (1.0.1) rails (2.3.5) rake (0.8.7, 0.7.3) RedCloth (3.0.4) ruby-openid (1.1.4) ruby-yadis (0.3.4) rubygems-update (1.3.6) rubynode (0.1.3) sqlite3-ruby (1.2.1) termios (0.9.4) Also, if I need to add a patch to Fixnum, can someone please tell me which file to add the patch to? Thanks for your help.
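    The failing call is ActiveRecord 2.3.5's MySQL adapter using Fixnum#ord, a method that only exists from Ruby 1.8.7 onward (where Integer#ord simply returns self); on the poster's Ruby 1.8.6 it is missing. So either upgrade to 1.8.7, or backport the one-liner. A sketch of the monkey-patch, dropped into config/environment.rb (or a file under config/initializers/) of the Rails 2.3 app:

      # Backport of Ruby 1.8.7's Integer#ord for 1.8.6, guarded so it
      # becomes a no-op once the interpreter is upgraded.
      unless 0.respond_to?(:ord)
        class Fixnum
          def ord
            self   # an integer's ordinal value is itself
          end
        end
      end

    Upgrading Ruby is still the cleaner fix, since other 1.8.7-isms in the gem stack will trip the same way.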

    Read the article

  • "power limit notification" clobbering on 12G Dell servers with RHEL6

    - by Andrew B
    Server: Poweredge r620 OS: RHEL 6.4 Kernel: 2.6.32-358.18.1.el6.x86_64 I'm experiencing application alarms in my production environment. Critical CPU hungry processes are being starved of resources and causing a processing backlog. The problem is happening on all the 12th Generation Dell servers (r620s) in a recently deployed cluster. As near as I can tell, instances of this happening are matching up to peak CPU utilization, accompanied by massive amounts of "power limit notification" spam in dmesg. An excerpt of one of these events: Nov 7 10:15:15 someserver [.crit] CPU12: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU6: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU14: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Package power limit notification (total events = 11) Nov 7 10:15:15 someserver [.crit] CPU6: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU14: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU20: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Package power limit notification (total events = 12) Nov 7 10:15:15 someserver [.crit] CPU10: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU20: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU10: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU15: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU3: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU1: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU5: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU15: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU3: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU1: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU5: Package power limit 
notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU19: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Package power limit notification (total events = 377) Nov 7 10:15:15 someserver [.crit] CPU9: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU21: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU23: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU11: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU19: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU9: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU21: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU23: Package power limit notification (total events = 374) A little Google Fu reveals that this is typically associated with the CPU running hot, or voltage regulation kicking in. I don't think that's what is happening though. Temperature sensors for all servers in the cluster are running fine, Power Cap Policy is disabled in the iDRAC, and my System Profile is set to "Performance" on all of these servers:

    # omreport chassis biossetup | grep -A10 'System Profile'
    System Profile Settings
    ------------------------------------------
    System Profile                         : Performance
    CPU Power Management                   : Maximum Performance
    Memory Frequency                       : Maximum Performance
    Turbo Boost                            : Enabled
    C1E                                    : Disabled
    C States                               : Disabled
    Monitor/Mwait                          : Enabled
    Memory Patrol Scrub                    : Standard
    Memory Refresh Rate                    : 1x
    Memory Operating Voltage               : Auto
    Collaborative CPU Performance Control  : Disabled

A Dell mailing list post describes the symptoms almost perfectly. Dell suggested that the author try using the Performance profile, but that didn't help. He ended up applying some settings in Dell's guide for configuring a server for low latency environments, and one of those settings (or a combination thereof) seems to have fixed the problem. Kernel.org bug #36182 notes that power-limit interrupt debugging was enabled by default, which is causing performance degradation in scenarios where CPU voltage regulation is kicking in. A RHN KB article (RHN login required) mentions a problem impacting PE r620 and r720 servers not running the Performance profile, and recommends an update to a kernel released two weeks ago. ...Except we are running the Performance profile... Everything I can find online is running me in circles here. What the heck is going on?
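    One way to confirm that these notifications, rather than the workload itself, line up with the processing backlog is to watch the thermal/power interrupt counters climb while an alarm is firing. A hedged diagnostic; the counter labels vary slightly by kernel, with TRM being the thermal event line in RHEL 6's /proc/interrupts:

      # refresh twice a second and highlight the counters that are moving
      watch -n 0.5 -d 'grep -E "TRM|THR" /proc/interrupts'

    If those counters spike by thousands during an episode, the CPUs really are being interrupt-stormed and the BIOS/kernel angle is the right one to keep pulling on; if they barely move, the log spam may be cosmetic and the starvation has another cause.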

    Read the article

  • .NET Framework generates strange DCOM error

    - by Anders Oestergaard Jensen
    Hello, I am creating a simple application that enables merging of key-value pair fields in a Word and/or Excel document. Until now, the application has worked just fine. I am using the latest version of .NET Framework 4.0 (since it provides a nice wrapper API for Interop). My sample merging method looks like this:

      public byte[] ProcessWordDocument(string path, List<KeyValuePair<string, string>> kvs)
      {
          logger.InfoFormat("ProcessWordDocument: path = {0}", path);
          var localWordapp = new Word.Application();
          localWordapp.Visible = false;
          Word.Document doc = null;
          try
          {
              doc = localWordapp.Documents.Open(path, ReadOnly: false);
              logger.Debug("Executing Find->Replace...");
              foreach (Word.Range r in doc.StoryRanges)
              {
                  foreach (KeyValuePair<string, string> kv in kvs)
                  {
                      r.Find.Execute(Replace: Word.WdReplace.wdReplaceAll,
                                     FindText: kv.Key,
                                     ReplaceWith: kv.Value,
                                     Wrap: Word.WdFindWrap.wdFindContinue);
                  }
              }
              logger.Debug("Done! Saving document and cleaning up");
              doc.Save();
              doc.Close();
              System.Runtime.InteropServices.Marshal.ReleaseComObject(doc);
              localWordapp.Quit();
              System.Runtime.InteropServices.Marshal.ReleaseComObject(localWordapp);
              logger.Debug("Done.");
              return System.IO.File.ReadAllBytes(path);
          }
          catch (Exception ex)
          {
              // Logging...
              // doc.Close();
              if (doc != null)
              {
                  doc.Close();
                  System.Runtime.InteropServices.Marshal.ReleaseComObject(doc);
              }
              localWordapp.Quit();
              System.Runtime.InteropServices.Marshal.ReleaseComObject(localWordapp);
              throw;
          }
      }

    The above C# snippet has worked just fine (compiled and deployed onto a Windows Server 2008 x64 box with the latest updates installed). But now, suddenly, I get the following strange error: System.Runtime.InteropServices.COMException (0x80080005): Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80080005 Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)). at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck) at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache) at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean skipCheckThis, Boolean fillCache) at System.Activator.CreateInstance(Type type, Boolean nonPublic) at Meeho.Integration.OfficeHelper.ProcessWordDocument(String path, List`1 kvs) in C:\meeho\src\webservices\Meeho.Integration\OfficeHelper.cs:line 30 at Meeho.IntegrationService.ConvertDocument(Byte[] template, String ext, String[] fields, String[] values) in C:\meeho\src\webservices\MeehoService\IntegrationService.asmx.cs:line 49 -- I googled the COM error, but it returned nothing of particular value. I even gave the right permissions to the COM DLLs using mmc -32, where I located the Word and Excel DCOM applications respectively and set the execution rights for the Administrator. I could not, however, locate the DLLs by the exact COM CLSID given above. Very frustrating. Please, please, please help me, as the application has currently been pulled out of production.
    Anders

    EDIT: output from the Windows event log:

      Faulting application name: WINWORD.EXE, version: 12.0.6514.5000, time stamp: 0x4a89d533
      Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
      Exception code: 0xc0000005
      Fault offset: 0x00000000
      Faulting process id: 0x720
      Faulting application start time: 0x01cac571c4f82a7b
      Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\WINWORD.EXE
      Faulting module path: unknown
      Report Id: 041dd5f9-3165-11df-b96a-0025643cefe6
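    CO_E_SERVER_EXEC_FAILURE (0x80080005) when COM tries to launch WINWORD.EXE from a service context, together with the 0xc0000005 crash in the event log, is very commonly the "no desktop for the system profile" problem on 64-bit servers running 32-bit Office. A widely reported workaround, offered here as something to test rather than a guaranteed fix, is simply creating the missing Desktop folders:

      mkdir "C:\Windows\SysWOW64\config\systemprofile\Desktop"
      mkdir "C:\Windows\System32\config\systemprofile\Desktop"

    Worth noting alongside: Microsoft explicitly does not support server-side automation of Office, so if this keeps recurring, migrating the merge to the Open XML SDK (no Word process at all) is the durable route.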

    Read the article

  • Add Service Reference is generating Message Contracts

    - by JohnIdol
    OK, this has been haunting me for a while; I can't find much on Google and I am starting to lose hope, so I am turning to the SO community. When I import a given service using "Add Service Reference" in Visual Studio 2008 (SP1), all the Request/Response messages are being unnecessarily wrapped into Message Contracts (named as "operationName" + "Request"/"Response" + "1" at the end). The code generator says: // CODEGEN: Generating message contract since the operation XXX is neither RPC nor document wrapped. The guys who are generating the WSDL from a Java service say they are specifying DOCUMENT-LITERAL/WRAPPED. Any help/pointer/clue would be highly appreciated. Update: this is a sample of my WSDL for one of the operations that look suspicious. Note the mismatch on the message element attribute for the request, compared to the response.

      <!-- imports namespaces and defines elements -->
      <wsdl:types>
        <xsd:schema targetNamespace="http://WHATEVER/" xmlns:xsd_1="http://WHATEVER_1/" xmlns:xsd_2="http://WHATEVER_2/">
          <xsd:import namespace="http://WHATEVER_1/" schemaLocation="WHATEVER_1.xsd"/>
          <xsd:import namespace="http://WHATEVER_2/" schemaLocation="WHATEVER_2.xsd"/>
          <xsd:element name="myOperationResponse" type="xsd_1:MyOperationResponse"/>
          <xsd:element name="myOperation" type="xsd_1:MyOperationRequest"/>
        </xsd:schema>
      </wsdl:types>

      <!-- declares messages - NOTE the mismatch on the request element attribute compared to response -->
      <wsdl:message name="myOperationRequest">
        <wsdl:part element="tns:myOperation" name="request"/>
      </wsdl:message>
      <wsdl:message name="myOperationResponse">
        <wsdl:part element="tns:myOperationResponse" name="response"/>
      </wsdl:message>

      <!-- operations -->
      <wsdl:portType name="MyService">
        <wsdl:operation name="myOperation">
          <wsdl:input message="tns:myOperationRequest"/>
          <wsdl:output message="tns:myOperationResponse"/>
          <wsdl:fault message="tns:myOperationFault" name="myOperationFault"/>
          <wsdl:fault message="tns:myOperationFault1" name="myOperationFault1"/>
        </wsdl:operation>
      </wsdl:portType>

    Update 2: I pulled all the types that I had in my imported namespace (they were in a separate XSD) into the WSDL, as I suspected the import could be triggering the message contract generation. To my surprise that was not the case, and having all the types defined in the WSDL did not change anything. Then, out of desperation, I started constructing WSDLs from scratch, and by playing with the maxOccurs attributes of elements contained in a sequence I was able to reproduce the undesired message contract generation behavior. Here's a sample of an element:

      <xsd:element name="myElement">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element minOccurs="0" maxOccurs="1" name="arg1" type="xsd:string"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>

    Playing with maxOccurs on elements that are used as messages (all requests and responses, basically), the following happens:

    maxOccurs = "1" does not trigger the wrapping
    maxOccurs > 1 triggers the wrapping
    maxOccurs = "unbounded" triggers the wrapping

    I was not able to reproduce this on my production WSDL yet because the nesting of the types goes very deep, and it's going to take me time to inspect it thoroughly. In the meantime I am hoping it might ring a bell - any help highly appreciated.
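    For WCF's importer to treat an operation as document/literal wrapped (and therefore skip MessageContract generation), the request wrapper element must carry the same name as the operation and the response wrapper the operation name plus "Response"; by the widely used WS-I convention, each message should also have a single part named "parameters". In the quoted WSDL the wrapper names already line up, so a hedged first experiment is renaming the parts (names here mirror the sanitized WSDL above):

      <wsdl:message name="myOperationRequest">
        <wsdl:part element="tns:myOperation" name="parameters"/>
      </wsdl:message>
      <wsdl:message name="myOperationResponse">
        <wsdl:part element="tns:myOperationResponse" name="parameters"/>
      </wsdl:message>

    If renaming the parts does not help, the maxOccurs observations in the second update suggest the importer is rejecting the wrapper for a structural reason, and the array-valued members inside the wrapper types are the next thing to isolate.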

    Read the article

  • facebook iframe stream.Publish cannot close dialog or skip

    - by fooyee
    I am pulling my hair out over this :( I've spent 10 hrs on it but nothing has come of it. I read this thread http://forum.developers.facebook.com/viewtopic.php?pid=198128 but it didn't help much. I'm running a local dev App Engine server (localhost:8080) iframe app, so I have a couple of problems.

    1) On Safari 4.0.4, the publish story dialog comes up nicely with all images/data/action_links. Upon posting a story (or skipping), the dialog goes blank and wouldn't close.

    2) I tested the same code on Firefox 3.5.8; the dialog comes up with all images/data/action_links, but then the whole thing freezes. Clicking anywhere on the dialog doesn't help at all. If I'm patient enough and click "Publish", I have to wait about 10 seconds before the dialog says "story is published". Then it freezes. (Clicking on Skip doesn't make a difference.) By the way, there is no "button clicking" effect, i.e. the buttons don't look like they "sink down" upon clicking. I checked the Firefox memory using the command "top" in the terminal; it all seems okay, no spike in CPU processes (I could open other Firefox tabs and work on them).

    My futile attempts at solving the problem...

    1) So I thought, hmm, could this be a local dev (localhost) problem? I uploaded the code to the production server and the same thing happens.

    2) I tried an older Firefox (3.1) and the same problem persisted (the freezing).

    3) I noticed that I kind of used 2 different FB features (Connect and XFBML). The Connect feature I used in the PostStory function; the XFBML feature I used before the closing body tag. So I thought, hmm... I tried replacing the FB_RequireFeatures["Connect"] feature with FB_RequireFeatures["XFBML"]. Nothing changed; I still can't close the story dialog.

    4) Is there a possibility that I didn't connect to xd_receiver.htm properly? My xd_receiver.htm is stored in my folder /media/fbconnect. In my app.yaml:

      handler:
      - url: /fbconnect
        static_dir: media/fbconnect

    so I thought a connection has to be established with xd_receiver.htm. Any way I can test that? Here are all the codes:

      <script type="text/javascript">
      //post story function
      function PostStory() {
          //init facebook
          FB_RequireFeatures(["Connect"], function() {
              FB.Facebook.init('my_app_key', "/fbconnect/xd_receiver.htm");
              FB.ensureInit(function() {
                  var message = 'the message';
                  var attachment = {
                      'name': 'a simple app to send gifts',
                      'href': 'http://apps.facebook.com/my_app_name',
                      'caption': '{*actor*} sent u something',
                      'description': 'some description',
                      "media": [{ "type": "image", "src": "http://bit.ly/105QYr", "href": "http://bit.ly/105QYr"}]
                  };
                  //action links can only be seen AFTER the feed is published
                  var action_links = [{ 'text': 'Send him/her a gift back!', 'href': 'http://somelink.com'}];
                  FB.Connect.streamPublish(message, attachment, action_links, null, "Share the gift with your friends", callback, false, null);
              });
          });
          function callback(post_id, exception) {
              //alert('Wall Post Complete');
          }
      }
      </script>

    Just before the end of the closing body tag, I have this:

      <script type="text/javascript">
      function callFBInit() {
          FB_RequireFeatures(["XFBML"], function() {
              FB.Facebook.init("my_app_key", "/fbconnect/xd_receiver.htm");
          });
      }
      callFBInit();

    btw, my xd_receiver.htm contains:

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
      <title>cross-domain receiver page</title>
      </head>
      <body>
      <script src="http://static.ak.facebook.com/js/api_lib/v0.4/xdcommreceiver.debug.js" type="text/javascript"></script>
      </body>
      </html>

    hope you guys can help out. thx

    Read the article

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (Oracle) and use NHibernate for O/R mapping. I would like to reduce communication with the database as much as possible, since the network here is quite slow, so I read about second-level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple, it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!)

    Velocity: uses Microsoft Velocity, which is a highly scalable in-memory application cache for all kinds of data.
    Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications.
    SysCache: uses System.Web.Caching.Cache as the cache provider. This means that you can rely on the ASP.NET caching feature to understand how it works.
    SysCache2: similar to NHibernate.Caches.SysCache, uses the ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes.
    MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table.
    SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info.

    My considerations so far were:

    Velocity seems quite heavyweight and overkill (the files alone take 467 KB of disk space; I haven't measured the RAM it takes so far because I didn't manage to make it run, see below).
    Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below).
    SysCache seems to be for ASP.NET, not for WinForms.
    MemCache and SharedCache seem to be for distributed scenarios.

    Which one would you suggest I use? There would also be a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing." Besides the question of which fits best into my situation, I also faced problems applying them: Velocity complained that ""dcacheClient" tag not specified in the application configuration file. Specify valid tag in configuration file," although I created an app.config file for the assembly and pasted the example from this article. Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and he needs further details to help me.
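    Despite the System.Web namespace, SysCache's backing store (HttpRuntime.Cache) is available to any .NET process, WinForms included, which makes it the lightest of the listed options to try first for a desktop app. A sketch of the relevant hibernate configuration, assuming the NHibernate.Caches.SysCache assembly is referenced:

      <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache</property>
      <property name="cache.use_second_level_cache">true</property>
      <property name="cache.use_query_cache">true</property>

    plus a per-entity cache element in each mapping worth caching, e.g. <cache usage="read-write"/> inside the <class> element. Note that the second-level cache only pays off for id lookups and explicitly cached queries; it will not make the first execution of anything faster, which matches the stated requirement.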


  • android database leak found IllegalStateException

    - by saravanan
    04-20 16:53:39.010: ERROR/Database(419): Leak found
    04-20 16:53:39.010: ERROR/Database(419): java.lang.IllegalStateException: mPrograms size 1
    04-20 16:53:39.010: ERROR/Database(419):     at android.database.sqlite.SQLiteDatabase.finalize(SQLiteDatabase.java:1668)
    04-20 16:53:39.010: ERROR/Database(419):     at dalvik.system.NativeStart.run(Native Method)
    04-20 16:53:39.010: ERROR/Database(419): Caused by: java.lang.IllegalStateException: /data/data/com.example.search/databases/rlite.db SQLiteDatabase created and never closed
    04-20 16:53:39.010: ERROR/Database(419):     at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1694)
    04-20 16:53:39.010: ERROR/Database(419):     at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:738)
    04-20 16:53:39.010: ERROR/Database(419):     at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:760)
    04-20 16:53:39.010: ERROR/Database(419):     at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:753)
    04-20 16:53:39.010: ERROR/Database(419):     at android.app.ApplicationContext.openOrCreateDatabase(ApplicationContext.java:473)
    04-20 16:53:39.010: ERROR/Database(419):     at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:193)
    04-20 16:53:39.010: ERROR/Database(419):     at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:98)
    04-20 16:53:39.010: ERROR/Database(419):     at com.example.search.Database.<init>(Database.java:33)
    04-20 16:53:39.010: ERROR/Database(419):     at com.example.search.JobDetails.applyJob(JobDetails.java:120)
    04-20 16:53:39.010: ERROR/Database(419):     at com.example.search.JobDetails.jobdetailsAction(JobDetails.java:98)
    04-20 16:53:39.010: ERROR/Database(419):     at java.lang.reflect.Method.invokeNative(Native Method)
    04-20 16:53:39.010: ERROR/Database(419):     at java.lang.reflect.Method.invoke(Method.java:521)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.View$1.onClick(View.java:2026)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.View.performClick(View.java:2364)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.View.onTouchEvent(View.java:4179)
    04-20 16:53:39.010: ERROR/Database(419):     at android.widget.TextView.onTouchEvent(TextView.java:6540)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.View.dispatchTouchEvent(View.java:3709)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
    04-20 16:53:39.010: ERROR/Database(419):     at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1659)
    04-20 16:53:39.010: ERROR/Database(419):     at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1107)
    04-20 16:53:39.010: ERROR/Database(419):     at android.app.Activity.dispatchTouchEvent(Activity.java:2061)
    04-20 16:53:39.010: ERROR/Database(419):     at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1643)
    04-20 16:53:39.010: ERROR/Database(419):     at android.view.ViewRoot.handleMessage(ViewRoot.java:1691)
    04-20 16:53:39.010: ERROR/Database(419):     at android.os.Handler.dispatchMessage(Handler.java:99)
    04-20 16:53:39.010: ERROR/Database(419):     at android.os.Looper.loop(Looper.java:123)
    04-20 16:53:39.010: ERROR/Database(419):     at android.app.ActivityThread.main(ActivityThread.java:4363)
    04-20 16:53:39.010: ERROR/Database(419):     at java.lang.reflect.Method.invokeNative(Native Method)
    04-20 16:53:39.010: ERROR/Database(419):     at java.lang.reflect.Method.invoke(Method.java:521)
    04-20 16:53:39.010: ERROR/Database(419):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
    04-20 16:53:39.010: ERROR/Database(419):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
    04-20 16:53:39.010: ERROR/Database(419):     at dalvik.system.NativeStart.main(Native Method)

    I get this error in the log when I read from the database. Please help me fix it.
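    The "Caused by" line states the problem directly: a SQLiteDatabase opened in Database's constructor (reached through getWritableDatabase() in JobDetails.applyJob) is never closed, so its finalizer reports the leak. A minimal sketch of the usual fix, with hypothetical class, column, and table names mirroring the ones in the trace:

    import android.app.Activity;
    import android.content.ContentValues;
    import android.content.Context;
    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteOpenHelper;

    public class JobDetails extends Activity {
        public void applyJob(long jobId) {
            SQLiteOpenHelper helper = new DatabaseHelper(this);
            SQLiteDatabase db = helper.getWritableDatabase();
            try {
                ContentValues values = new ContentValues();
                values.put("job_id", jobId);
                db.insert("applications", null, values);
            } finally {
                // Closing the helper also closes the database it handed out;
                // this is the call the leak report says is missing.
                helper.close();
            }
        }
    }

    class DatabaseHelper extends SQLiteOpenHelper {
        DatabaseHelper(Context ctx) {
            super(ctx, "rlite.db", null, 1);
        }
        @Override public void onCreate(SQLiteDatabase db) { /* create tables here */ }
        @Override public void onUpgrade(SQLiteDatabase db, int oldV, int newV) { /* migrate here */ }
    }

    Alternatively, keep one helper for the lifetime of the Activity and close it in onDestroy(); what matters is that every getWritableDatabase() is eventually paired with a close().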


  • Accessing Oracle DB through SQL Server using OPENROWSET

    - by Ken Paul
    I'm trying to access a large Oracle database through SQL Server using OPENROWSET in client-side JavaScript, and not having much luck. Here are the particulars:

    - A SQL Server view that accesses the Oracle database using OPENROWSET works perfectly, so I know I have valid connection string parameters. However, the new requirement is for extremely dynamic Oracle queries that depend on client-side selections, and I haven't been able to get dynamic (or even parameterized) Oracle queries to work from SQL Server views or stored procedures.
    - Client-side access to the SQL Server database works perfectly with dynamic and parameterized queries.
    - I cannot count on clients having any Oracle client software. Therefore, access to the Oracle database has to be through the SQL Server database, using views, stored procedures, or dynamic queries using OPENROWSET.
    - Because the SQL Server database is on a shared server, I'm not allowed to use globally linked databases.

    My idea was to define a function that would take my own version of a parameterized Oracle query, make the parameter substitutions, wrap the query in an OPENROWSET, and execute it in SQL Server, returning the resulting recordset. Here's sample code:

    // db is a global variable containing an ADODB.Connection opened to the SQL Server DB
    // rs is a global variable containing an ADODB.Recordset
    ...
    ss = "SELECT myfield FROM mytable WHERE {param0} ORDER BY myfield;";
    OracleQuery(ss, ["somefield='" + somevalue + "'"]);
    ...
    function OracleQuery(sql, params) {
        var s = sql;
        var i;
        for (i = 0; i < params.length; i++)
            s = s.replace("{param" + i + "}", params[i]);
        var e = "SELECT * FROM OPENROWSET('MSDAORA','(connect-string-values)';" +
                "'user';'pass','" + s.split("'").join("''") + "') q";
        try {
            rs.Open("EXEC ('" + e.split("'").join("''") + "')", db);
        } catch (eobj) {
            alert("SQL ERROR: " + eobj.description + "\nSQL: " + e);
        }
    }

    The SQL error that I'm getting is:

    Ad hoc access to OLE DB provider 'MSDAORA' has been denied. You must access this provider through a linked server.

    which makes no sense to me. The Microsoft explanation for this error relates to a registry setting (DisallowAdhocAccess). This is set correctly on my PC, but surely this relates to the DB server and not the client PC, and I would expect that the setting there is correct since the view mentioned above works.

    One alternative that I've tried is to eliminate the enclosing EXEC in the Open statement:

    rs.Open(e, db);

    but this generates the same error. I also tried putting the OPENROWSET in a stored procedure. This works perfectly when executed from within SQL Server Management Studio, but fails with the same error message when the stored procedure is called from JavaScript. Is what I'm trying to do possible? If so, can you recommend how to fix my code? Or is a completely different approach necessary? Any hints or related information will be welcome. Thanks in advance.
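    For what it's worth, that error is governed on the SQL Server instance, not on the client PC: DisallowAdHocAccess is a per-provider setting on the server, and there is also the instance-wide "Ad Hoc Distributed Queries" option. The sketch below shows what an administrator would run on the server, assuming SQL Server 2005 or later and sysadmin rights (which is exactly what a shared host is unlikely to grant); the last call is the commonly cited, though undocumented, way to script the provider checkbox:

    -- Instance-wide switch for OPENROWSET / OPENDATASOURCE.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;

    -- Per-provider switch; equivalent to clearing "Disallow adhoc access"
    -- on the MSDAORA provider in Management Studio.
    EXEC master.dbo.sp_MSset_oledb_prop N'MSDAORA', N'DisallowAdHocAccess', 0;

    If the host will not change either setting, the error text is literal: a linked server is effectively the only remaining route to the Oracle database.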


  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. With the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity.

    Updating these sites is accomplished using SVN, through a post-commit script that auto-updates them (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.

    So I've done the easy bit: I've managed to get SVN working with these maintainers, so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions.

    Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites. An example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up with both read and write access for this group, and it works great. The post-commit trigger updates the site and then sets 777 on each file within the webfront.

    I then changed this to try and factor in group permissions instead of a straight 777. Each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:

    drwxrwxr-x  .
    drwxrwxr-x  ..
    drwxrwxr-x  site1 site1  my_test_folder
    -rw-rw-r--  site1 site1  my_test_file

    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update can correctly access these files. Herein lies the problem. When Bob adds a file or folder to /home/site1, say bobs_file, it then looks like this:

    drwxrwxr-x  .
    drwxrwxr-x  ..
    drwxr-xr-x  Bob   dev    bobs_folder
    drwxrwxr-x  site1 site1  my_test_folder
    -rw-rw-r--  Bob   dev    bobs_file
    -rw-rw-r--  site1 site1  my_test_file

    How can I get it so that, with the set of user permissions Bob has available, the owner and group owner of that file are changed to reflect site1/site1? As Bob belongs to dev I can set the permissions correctly with chmod, but chgrp keeps throwing back operation errors.

    That was long-winded, but it gives an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:

    - two people updating multiple user accounts, given the structure of WHM;
    - keeping the master user/group permissions of files and folders as the original site account, not the account of whoever updated them;
    - keeping the security of SVN+SSH over plain SVN;
    - not running all of this as root.

    I hope this made sense, and thanks in advance :)
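    One conventional answer to exactly this problem, offered as a sketch rather than WHM-specific advice, is to stop fixing ownership after the fact and make the group inherit instead: a setgid bit on the site directories plus a group-friendly umask for the maintainers. The paths are the ones from the question:

    # Run once with sufficient privileges: hand the webroot to site1:site1
    # and set the setgid bit on every directory, so files created inside
    # inherit the site1 group automatically, with no chgrp needed afterwards.
    chown -R site1:site1 /home/site1/public_html
    find /home/site1/public_html -type d -exec chmod 2775 {} +

    # In Joe's and Bob's shell profiles (or the post-commit environment),
    # a umask of 002 makes new files 664 and new directories 775 by default.
    umask 002

    The owner column will still show whoever created the file: only root can chown a file away to another user, and an unprivileged user can chgrp only files they own, and only to groups they belong to, which would explain the operation errors Bob sees.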


  • Django - no module named app

    - by Koran
    Hi, I have been trying to get an application written in Django working, but it is not working at all. I have been working on it for some time, and it works perfectly on the dev server; I am just unable to get it running in the production environment (Apache). My project name is apstat and the app name is basic. I try to access it as:

    http://hostname/apstat

    But it shows the following error:

    MOD_PYTHON ERROR

    ProcessId:      6002
    Interpreter:    'domU-12-31-39-06-DD-F4.compute-1.internal'
    ServerName:     'domU-12-31-39-06-DD-F4.compute-1.internal'
    DocumentRoot:   '/home/ubuntu/server/'
    URI:            '/apstat/'
    Location:       '/apstat'
    Directory:      None
    Filename:       '/home/ubuntu/server/apstat/'
    PathInfo:       ''
    Phase:          'PythonHandler'
    Handler:        'django.core.handlers.modpython'

    Traceback (most recent call last):
      File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch
        default=default_handler, arg=req, silent=hlist.silent)
      File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1229, in _process_target
        result = _execute_target(config, req, object, arg)
      File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1128, in _execute_target
        result = object(arg)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 228, in handler
        return ModPythonHandler()(req)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 201, in __call__
        response = self.get_response(request)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 134, in get_response
        return self.handle_uncaught_exception(request, resolver, exc_info)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 154, in handle_uncaught_exception
        return debug.technical_500_response(request, *exc_info)
      File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 40, in technical_500_response
        html = reporter.get_traceback_html()
      File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 114, in get_traceback_html
        return t.render(c)
      File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 178, in render
        return self.nodelist.render(context)
      File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 779, in render
        bits.append(self.render_node(node, context))
      File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 81, in render_node
        raise wrapped
    TemplateSyntaxError: Caught an exception while rendering: No module named basic

    Original Traceback (most recent call last):
      File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 71, in render_node
        result = node.render(context)
      File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 87, in render
        output = force_unicode(self.filter_expression.resolve(context))
      File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 572, in resolve
        new_obj = func(obj, *arg_vals)
      File "/usr/lib/pymodules/python2.6/django/template/defaultfilters.py", line 687, in date
        return format(value, arg)
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 269, in format
        return df.format(format_string)
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format
        pieces.append(force_unicode(getattr(self, piece)()))
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 175, in r
        return self.format('D, j M Y H:i:s O')
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format
        pieces.append(force_unicode(getattr(self, piece)()))
      File "/usr/lib/pymodules/python2.6/django/utils/encoding.py", line 71, in force_unicode
        s = unicode(s)
      File "/usr/lib/pymodules/python2.6/django/utils/functional.py", line 201, in __unicode_cast
        return self.__func(*self.__args, **self.__kw)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/__init__.py", line 62, in ugettext
        return real_ugettext(message)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 286, in ugettext
        return do_translate(message, 'ugettext')
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 276, in do_translate
        _default = translation(settings.LANGUAGE_CODE)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 194, in translation
        default_translation = _fetch(settings.LANGUAGE_CODE)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 180, in _fetch
        app = import_module(appname)
      File "/usr/lib/pymodules/python2.6/django/utils/importlib.py", line 35, in import_module
        __import__(name)
    ImportError: No module named basic

    My settings.py contains:

    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.sites',
        'apstat.basic',
        'django.contrib.admin',
    )

    If I remove apstat.basic, it goes through, but that is not a solution. Is it something I am doing in Apache? My Apache settings are:

    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /home/ubuntu/server/
        <Directory />
            Options None
            AllowOverride None
        </Directory>
        <Directory /home/ubuntu/server/apstat>
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        <Location "/apstat">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE apstat.settings
            PythonOption django.root /home/ubuntu/server/
            PythonDebug On
            PythonPath "['/home/ubuntu/server/'] + sys.path"
        </Location>
    </VirtualHost>

    I have now sat on this for more than a day. If someone can help me out, it would be very nice.
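    Two standard things to check for this exact symptom under mod_python, offered as a guess rather than a confirmed fix: first, that /home/ubuntu/server/apstat/basic/ contains an __init__.py and is readable by the Apache user (the dev server runs as you, Apache does not); second, that the project directory itself is on PythonPath, so the app resolves whether it is referenced as apstat.basic or as plain basic. A sketch of the amended Location block:

    <Location "/apstat">
        SetHandler python-program
        PythonHandler django.core.handlers.modpython
        SetEnv DJANGO_SETTINGS_MODULE apstat.settings
        PythonOption django.root /apstat
        PythonDebug On
        # The parent dir makes 'apstat.settings' importable; the project dir
        # makes the bare app name 'basic' importable as well.
        PythonPath "['/home/ubuntu/server/', '/home/ubuntu/server/apstat'] + sys.path"
    </Location>

    Note also that PythonOption django.root is meant to be the URL prefix being stripped (here presumably /apstat), not a filesystem path.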

