Search Results

Search found 9611 results on 385 pages for 'cool hand luke uk'.

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (eg. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object.

    On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.)

    There is a complicating factor in that our data schema is constantly changing rapidly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client).

    So my question is whether it is sensible to store objects in the form in which they are usually going to be used, or whether it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.
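
    To make the trade-off concrete, this is roughly how the two options look to me in Java (just a sketch - I am assuming Jackson for the JSON handling here, and the class and field names are invented):

        import com.fasterxml.jackson.databind.JsonNode;
        import com.fasterxml.jackson.databind.ObjectMapper;

        // Sketch only: "Order", "amountPayable" and "shippingAddress" are made-up names.
        public class OrderService {
            private static final ObjectMapper MAPPER = new ObjectMapper();

            // Option A: pass the stored document through verbatim (the 9-in-10 case).
            public String getOrderForClient(String rawJson) {
                return rawJson; // no mapping layer, but the implicit schema leaks everywhere
            }

            // Option B: pull the fields the server cares about into a business object,
            // so server-side logic only changes when this class changes.
            public static final class Order {
                private final double amountPayable;
                private final String shippingAddress;

                Order(JsonNode doc) {
                    this.amountPayable = doc.path("amountPayable").asDouble();
                    this.shippingAddress = doc.path("shippingAddress").asText();
                }

                boolean qualifiesForFreeShipping() { // business logic lives with the object
                    return amountPayable > 100.0;
                }
            }

            public Order readOrder(String rawJson) throws Exception {
                return new Order(MAPPER.readTree(rawJson));
            }
        }

    Option A is obviously less code; option B is what I would only reach for in the 1-in-10 cases where the server actually needs to understand the structure - which is really the heart of my question.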

    Read the article

  • Squibbly: LibreOffice Integration Framework for the Java Desktop

    - by Geertjan
    Squibbly is a new framework for Java desktop applications that need to integrate with LibreOffice, or more generally, need office features as part of a Java desktop solution that could include, for example, JavaFX components. Here's what it looks like, right now, on Ubuntu 13.04.

    Why is the framework called Squibbly? Because I needed a unique-ish name, because "squibble" sounds a bit like "scribble" (which is what one does with text documents, etc), and because of the many absurd definitions in the Urban Dictionary for the apparently real word "squibble", e.g., "A name for someone who is squibblish in nature." And, another e.g., "A squibble is a small squabble. A squabble is a little skirmish." But the real reason is the first definition (and definitely not the fourth definition): "Taking a small portion of another persons something, such as a small hit off of a pipe, a bite of food, a sip of a drink, or drag of a cigarette." In other words, I took (or "squibbled") a small portion of LibreOffice, i.e., OfficeBean, and integrated it into a NetBeans Platform application. Now anyone can add new features to it, to do anything they need, such as create a legislative software system as Propylon has done with their own solution on the NetBeans Platform.

    For me, the starting point was Chuk Munn Lee's similar solution from some years ago. However, he uses reflection a lot in that solution, because he didn't want to bundle the related JARs with the application. I understand that benefit, but I find it even more beneficial to not require the user to specify the location of the LibreOffice installation, since all the necessary JARs and native libraries (currently 32-bit Linux only, by the way) are bundled with the application. Plus, hundreds of lines of reflection code, as in Chuk's solution, are not fun to work with at all. Switching between applications is done like this.

    It's a work in progress, a proof of concept only. Just the result of a few hours of work to get the basic integration to work. Several problems remain, some of them potentially unsolvable, starting with these, but others will be added here as I identify them:

    - Window management problems. I'd like to let the user have multiple LibreOffice applications and documents open at the same time, each in a new TopComponent. However, I haven't figured out how to do that. Right now, each application is opened into the same TopComponent, replacing the currently open application. I don't know the OfficeBean API well enough, e.g., should a single OfficeBean be shared among multiple TopComponents or should each of them have their own instance of it?
    - Focus problems. When putting the application behind other applications and then switching back to the application, typing text becomes impossible. When closing a TopComponent and reopening it, the content is lost completely. Somehow the loss of focus, and then the return of focus, disables something. No idea how to fix that.

    The project is checked into this location, which isn't public yet, so you can't access it yet. Once it's publicly available, it would be great to get some code contributions and tweaks, etc. https://java.net/projects/squibbly

    Here's the source structure, showing especially how the OfficeBean JARs and native libraries (currently for Linux 32-bit only) fit in. Ultimately, it would be cool to integrate or share code with http://joeffice.com!

    Read the article

  • Pinterest and the Rising Power of Imagery

    - by Mike Stiles
    If images keep you glued to a screen, you’re hardly alone. Countless social users are letting their eyes do the walking, waiting for that special photo to grab their attention. And perhaps more than any other social network, Pinterest has been giving those eyes plenty of room to walk.

    Pinterest came along in 2010. Its play was that users could simply create topic boards and pin pictures to the appropriate boards for sharing. Yes there are some words, captions mostly, but not many. The speed of its growth raised eyebrows. Traffic quadrupled in the last quarter of 2011, with 7.51 million unique visitors in December alone. It now gets 1.9 billion monthly page views. And it was sticky. In the US, the average time a user spends strolling through boards and photos on Pinterest is 15 minutes, 50 seconds.

    Proving the concept of browsing a catalogue is not dead, it became a top 5 referrer for several apparel retailers like Land’s End, Nordstrom, and Bergdorfs. Now a survey of online shoppers by BizRate Insights says that Pinterest is responsible for more purchases online than Facebook. Over 70% of its users are going there specifically to keep up with trends and get shopping ideas. And when they buy, the average order value is $179. Pinterest is also scoring better in terms of user engagement. 66% of pinners regularly follow and repin retailers, whereas 17% of Facebook fans turn to that platform for purchase ideas. (Facebook still wins when it comes to reach and driving traffic to 3rd-party sites by the way).

    Social posting best practices have consistently shown that posts with photos are rewarded with higher engagement levels. You may be downright Shakespearean in your writing, but what makes images in the digital world so much more powerful than prose?

    1. They transcend language barriers.
    2. They’re fun and addictive to look at.
    3. They can be consumed in fractions of a second, important considering how fast users move through their social content (admit it, you do too).
    4. They’re efficient gateways. A good picture might get them to the headline. A good headline might then get them to the written content.
    5. The audience for them surpasses demographic limitations.
    6. They can effectively communicate and trigger an emotion.
    7. With mobile use soaring, photos are created on those devices and easily consumed and shared on them. Pinterest’s iPad app hit #1 in the Apple store in 1 day. Even as far back as 2009, over 2.5 billion devices with cameras were on the streets generating in just 1 year, 10% of the number of photos taken…ever.

    But let’s say you’re not a retailer. What if you’re a B2B whose products or services aren’t visual? Should you worry about your presence on Pinterest? As with all things, you need a keen awareness of who your audience is, where they reside online, and what they want to do there. If it doesn’t make sense to put a tent stake in Pinterest, fine. But ignore the power of pictures at your own peril. If not visually, how are you going to attention-grab social users scrolling down their News Feeds at top speed? You’re competing with every other cool image out there from countless content sources. Bore us and we’ll fly right past you.

    Read the article

  • Oracle OpenWorld is on the Horizon

    - by Matthew Haavisto
    Oracle OpenWorld 2012 is only a few months away, and we're excited about our slate of sessions and other activities scheduled this year. PeopleSoft sessions are perennially among the best attended and well-received sessions, and we plan to keep that trend going. We have a full complement of sessions planned, from updates for some of your yearly favorites to lots of completely new topics. Some of the long-standing favorites include the PeopleSoft Technology Roadmap, PeopleTools Tips and Techniques and a number of candid panel discussions. Coverage of the latest trends includes new sessions on PeopleSoft on mobile platforms, PeopleSoft's new user experience and interaction model, advances in reporting and analytics and employing virtualization to reduce costs. The PeopleTools team is also working closely with Applications groups this year to demonstrate to users how advances in PeopleTools will have a direct and beneficial impact on the latest applications releases. There are plenty of sessions for developers and administrators as well, from sessions on enhancing, integrating, maintaining, and securing your applications, to tuning for performance. We'll also update you on the latest platform roadmap.

    In addition to these conventional sessions, we will of course be manning the demo pods, where you'll be able to see the latest functionality first hand. We plan to engage in lots of direct customer interaction. One of the highlights each year for our team as well as attendees is the session in which a panel of senior PeopleTools leaders talks candidly and engages in open Q&A with customers about our products. This is definitely a discussion worth joining in on. Keep your eyes on this blog in the coming weeks for details on many of the sessions we have planned. We look forward to seeing you at OpenWorld 2012!

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service, and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router. One of the things which was a pain was the number of routing rules I needed to create because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing, but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router.

    I decided to put together a little spreadsheet so that I can generate part of the configuration I would need to put in the configuration file rather than have to type this by hand. To show how this works, download the spreadsheet from the following URL: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx

    In the spreadsheet you will see that the squares in green are the ones which you need to amend. In the below picture you can see that you specify a prefix and suffix for the filter name, the core namespace from the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out what the full SOAP Action would be and the name you will use for that filter in your WCF Routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint to forward the message to the required destination.

    Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF Routing configuration if you had a large number of routing rules. If you had additional methods in other services you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.
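
    If you prefer code to Excel, the same generation logic is only a few lines. Below is a rough sketch (written in Java purely as an illustration) that prints both kinds of snippet; the namespace, endpoint name and method names are placeholders, and the element and attribute names are based on the standard System.ServiceModel.Routing configuration, so check them against your own config file.

        import java.util.List;

        // Rough equivalent of the spreadsheet: given a SOAP action prefix, a target
        // endpoint and the method names, emit the <filter> entries and the matching
        // routing table entries. All values below are placeholders.
        public class RoutingConfigGenerator {
            public static void main(String[] args) {
                String actionPrefix = "http://tempuri.org/IMyService"; // assumed namespace + contract
                String endpointName = "MyServiceClient";               // assumed WCF client endpoint
                List<String> methods = List.of("GetCustomer", "SaveCustomer", "DeleteCustomer");

                // Routing filters section
                for (String method : methods) {
                    System.out.printf(
                        "<filter name=\"Filter_%s\" filterType=\"Action\" filterData=\"%s/%s\"/>%n",
                        method, actionPrefix, method);
                }
                // Routing table section
                for (String method : methods) {
                    System.out.printf(
                        "<add filterName=\"Filter_%s\" endpointName=\"%s\"/>%n",
                        method, endpointName);
                }
            }
        }

    Running it prints one filter per method plus the corresponding routing table entries, which you can paste into the routing section of the configuration file in the same way as the spreadsheet output.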

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: Attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
            public class Character extends Sprite {
                public var _strength:int;
                // etc.
                public function Character(_class:Object) {
                    _strength = _class._strength;
                    // etc.
                }
            }
        }

        // MonsterClasses.as
        package {
            public final class MonsterClasses extends Object {
                public static const Monster1:Object = {
                    _strength:50,
                    // etc.
                }
                // etc.
            }
        }

        // Some other class in which characters/monsters are created.
        // Create a new instance of Character
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure if it would be efficient or even make sense considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method?

    As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. Seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?
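
    For comparison, here is the same data-driven idea sketched outside of AS3 (in Java; the names and stats are invented), just to show the shape of the first option: a single Character class plus a lookup table of stat templates, rather than 150-odd subclasses.

        import java.util.Map;

        // Language-neutral sketch of the data-driven option. All names and numbers
        // are placeholders; in AS3 the templates would be the static Object
        // constants shown above.
        public class Characters {

            // Immutable stat template, one per monster type.
            record StatTemplate(int strength, int agility, int hitPoints) {}

            static final Map<String, StatTemplate> MONSTERS = Map.of(
                "Monster1", new StatTemplate(50, 12, 200),
                "Monster2", new StatTemplate(30, 20, 120));

            static class Character {
                int strength, agility, hitPoints;

                Character(StatTemplate template) { // copy template values into the live instance
                    strength = template.strength();
                    agility = template.agility();
                    hitPoints = template.hitPoints();
                }
            }

            public static void main(String[] args) {
                // Only the requested template is touched; the other entries are just data.
                Character myMonster = new Character(MONSTERS.get("Monster1"));
                System.out.println(myMonster.strength); // 50
            }
        }

    The point is that there is still only one Character class, and the individual monster types stay as plain data that could later be moved to JSON/XML (or left in code) without changing the class hierarchy.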

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code is available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about the VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, but you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well search no more, this is exactly what the Inmeta Visual Studio Gallery Service does for you :-)

    Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service

    - Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optional) a virtual directory where you want to install the service. Note: If you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer.
    - Open web.config in a text editor and locate the <applicationSettings> element. Edit the following setting values:
      FeedTitle: This is the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
      BaseURI: When Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be on the following format: http://SERVER/[VDIR]/gallery/extension/
      VSIXAbsolutePath: This is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder.
    - Save web.config to finish the installation.
    - Open a browser and enter the URL to the service. It should show an empty Feed page.

    Adding the Private Gallery in Visual Studio 2012

    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows:

    - Go to Tools –> Options and select Environment –> Extensions and Updates.
    - Press Add to add a new gallery.
    - Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
    - Press OK to save the settings.

    Deploying an Extension

    This one is easy: Just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery. I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!

    Read the article

  • Practices for domain models in Javascript (with frameworks)

    - by AndyBursh
    This is a question I've to-and-fro'd with for a while, and searched for and found nothing on: what are the accepted practices surrounding duplicating domain models in JavaScript for a web application, when using a framework like Backbone or Knockout?

    Given a web application of a non-trivial size with a set of domain models on the server side, should we duplicate these models in the web application (see the example at the bottom)? Or should we use the dynamic nature to load these models from the server? To my mind, the arguments for duplicating the models are in easing validation of fields, ensuring that fields that are expected to be present are in fact present, etc. My approach is to treat the client-side code like an almost separate application, doing trivial things itself and only relying on the server for data and complex operations (which require data the client side doesn't have). I think treating the client-side code like this is akin to the separation between entities from an ORM and the models used with the view in the UI layer: they may have the same fields and relate to the same domain concept, but they're distinct things.

    On the other hand, it seems to me that duplicating these server-side models is a clear violation of DRY and likely to lead to differing results on the client and server side (where one piece gets updated but the other doesn't). To avoid this violation of DRY we can simply use JavaScript's dynamism to get the field names and data from the server as and when they're needed.

    So: are there any accepted guidelines around when (and when not) to repeat yourself in these situations? Or is this a purely subjective thing, based on the project and developer(s)?

    Example

    Server-side model:

        class M {
            int A
            DateTime B
            int C
            int D = (A*C)
            double SomeComplexCalculation = ServiceLayer.Call();
        }

    Client-side model:

        function M(){
            this.A = ko.observable();
            this.B = ko.observable();
            this.C = ko.observable();
            this.D = function() { return A() * C(); }
            this.SomeComplexCalculation = ko.observable();
            return this;
        }

        M.GetComplexValue = function(){
            this.SomeComplexCalculation(Ajax.CallBackToServer());
        };

    I realise this question is quite similar to this one, but I think this is more about almost wholly untying the web application from the server, where that question is about doing this only in the case of complex calculation.

    Read the article

  • How do you turn on the customizable gnome-panel features (like gnome-applets) in Precise?

    - by chriv
    I resurrected a broken laptop today. I took out the HDD, put it in a USB 3.0 enclosure, and created a VM that would use it. It was running Lucid. I took a screenshot of the desktop before I started "do-release-upgrade", because from experience, I will never have my GUI back the way I want it again. I know how to install gnome-panel to get back the "Gnome Classic" session option. I know how to put my minimize, maximize, and close buttons back in the upper-right hand corner of windows (where they belong). I know how to use gdm instead of lightdm. Unity gets worse in every version (and the other desktop OS is going to be even worse with Metro). Here's what I don't know (in order of importance):

    1. How do you make the panels in gnome (gnome-panel, to be precise) customizable again (like they were in older versions of Ubuntu)?
    2. How do you install applets in the panels now (right-click is now ignored)?
    3. How can you customize all of the window elements (like you could in older versions of Ubuntu)?

    I can't remember much about Maverick, Natty, or Oneiric (except their names), so I don't know exactly when I lost these capabilities.

    Edit: (no screenshot) My StackExchange reputation (on other StackExchange sites) doesn't carry over to this site, so I can't post the screenshot. Take a look at the panels in the screenshot. They are nice, compact, and VERY functional (disk mounter applet, frequently used shortcuts, workspaces, show desktop, kill window, and trash icons, etc.). Notice how small the fonts are (and how little real estate they waste). You can't notice the compact title bars, fonts, and window icons in this screenshot (since I redacted the rest of the desktop), but it's the same story there. Please help. I don't want to learn another distro, but Ubuntu gets less customizable with every "upgrade." Screenshot (not an inline image, since I don't have the reputation yet)... i.stack.imgur.com/puoUT.png

    Read the article

  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in Wordpress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients bandwidth requirements are low). Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it. It seems like all of the popular options (Heroku, etc) have nice automation with git and are set up in the "Rails Way". I get that (sort of). But it's terribly expensive... a single dyno, a helper, and the cheapest database (which they say is mainly suitable for testing) that isn't limited to 5MB runs $51. This is for ONE app!!! Throw in a "production" DB and you're over $200. This is like... the same prices as getting a server somewhere, right? Meanwhile, going back to what I guess is a "traditional" hosting environment with Hostgator, their server only has Ruby 1.8.7 and Rails 2.3.5... No Rails 3. AND, no Passenger (not that I really understand the difference in CGI or mod_rails or whatever, but they say Passenger is the simplest). So I'm to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static html and/or PHP stuff, right? So what now? How do I get all of this under one simple (and affordable) roof? Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but entails learning server admin stuff and security... And it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) may be inadequate for large-scale apps that use a lot of bandwidth... But what about for those of us who are building real (but small and low bandwidth) apps (with Rails) and who want to deploy them simply, cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical; just trying to learn here. So: 1) I'd appreciate if someone could give me a good rundown of how to understand deployment in Rails vs. PHP. 2) I'd appreciate if someone could address my issue with running a hosting/web business around reseller hosting (Hostgator) while also being able to host Rails apps. Can it be done? And how can a company like Hostgator completely ignore what's current in Rails/Ruby? Thanks.

    Read the article

  • F# and the useful infinite Sequence (I think)

    - by MarkPearl
    So I have seen a few posts done by other F# fans on solving Project Euler problems. They looked really interesting, and I thought with my limited knowledge of F# I would attempt a few, and the first one I had a look at was problem 5, which said:

    “2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest number that is evenly divisible by all of the numbers from 1 to 20?”

    So I jumped into coding it and straight away got stuck – the C# programmer in me wants to do a loop, starting at one and dividing every candidate by each of the numbers from 1 to 20 to see if they all divide, and once a match is found, there is your solution. Obviously not the most elegant way, but a good old brute force approach. However I am pretty sure this would not be the F# way…

    So after a bit of research I found the Sequences and how useful they were. Sequences seemed like the beginning of an approach to solve my problem. In my head I thought: create a sequence, and then start at the beginning of it and move through it till you find a value that is divisible by 1 to 20. Sounds reasonable? So the question is begged - how would you create a sequence that you are sure will be large enough to hold the solution to the problem? Well… You can’t know! Some more googling and I found what I would call infinite sequences – something that looks like this:

        let nums = 1 |> Seq.unfold (fun i -> Some (i, i + 1))

    My interpretation of this would be as follows… create a sequence, and whenever it is called add 1 to its size (I would appreciate someone helping me on wording this right functionally). Something that I don’t understand fully yet is the forward pipe operator (|>) which I think plays a key role in this code. With this in hand I was able to code a basic optimized solution to this problem. I’m going to go over it some more before I post the full code just in case!

    Read the article

  • Response: Agile's Second Chasm

    - by Malcolm Anderson
    William Pietri over at Agile Focus has written an interesting article entitled "Agile’s Second Chasm (and how we fell in)", in which he talks about how agile development has fallen into a common trap where large companies are now spending a lot of money hiring agile (Scrum) consultants just so that they can say they are agile, but all the while avoiding any change that is required by Scrum. It echoes the question that I've been asking for a while: "Can a Fortune 500 company actually do agile development?" I'm starting to think that the answer is "usually not".

    William asks 3 questions at the end of his article that I will answer here.

    1) Have I seen agile development brought in and then preemptively customized (read: made into ScrummerFall)? Yes. Scrum is hard and disruptive. It's a spotlight on company dysfunction. In a low-trust environment like most Fortune 500 companies, Scrum will be subverted by anyone who has ever seen "transparency" translate into someone being laid off.

    2) If I had to do it all over again, would I change anything? No, this is a natural progression, but the agile principles are powerful enough that the companies that don't adopt them will no longer be competitive and will start to fail.

    3) Is this situation solvable? I think it is. I think that one of the issues is that you often see companies implementing Scrum, but avoiding the agile engineering practices. I believe that you cannot do one without the other. Scrum keeps the ship sailing in smooth deep waters. The agile engineering practices keep the engine running smoothly and cleanly. If you implement agile engineering practices without Scrum, you run the risk of ending up with a great running piece of software that is useful to no one. On the other hand, if you implement cargo-cult Scrum without the agile engineering practices, you end up (especially in a Fortune 500 company) being steered in the right direction, but with your development practices coming to a dead halt because you have code that can not keep up with the changes in requirements.

    If you are trying to do Scrum, make sure that you hire some agile engineering coaches, or else you may find your development engines grinding to a dead halt in the middle of the open ocean.

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines.

    - Some lines might need to be dropped (according to criteria that are not fixed).
    - The lines that are not dropped must be parsed into some predefined records.
    - Records that are not valid must be dropped.
    - Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept.
    - The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on).
    - The records of each group should be numbered (1, 2, 3, 4, ...). For each group the numbering starts from 1.
    - The records must then be saved somewhere / consumed in the same order as they were produced.

    I have to implement this in Java or C++. My first idea was to define functions / methods like:

    - One method to get all the lines from the file.
    - One method to filter out the unwanted lines.
    - One method to parse the filtered lines into valid records.
    - One method to remove duplicate records.
    - One method to group records and number them.

    The problem is that the data I am going to read can be too big and might not fit into main memory: so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once, because once a record has been consumed all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of.

    With the little knowledge I have of Haskell I immediately thought about some kind of lazy evaluation, in which instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is which design pattern or other technique can allow me to implement this lazy processing of streams in one of these languages.
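
    To make that concrete, here is roughly the shape I have in mind in Java (just a sketch: LineRecord, parse() and the grouping field are placeholders for my real types). Files.lines() is backed by a lazily reading stream, so only the data needed for the current record ever sits in memory.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Objects;
        import java.util.stream.Stream;

        // Sketch only: LineRecord, parse() and the grouping key are placeholders.
        // Files.lines() reads lazily, so no full list of lines or records is ever built.
        public class LazyPipeline {

            record LineRecord(String groupKey, String payload) {}

            // Parse a line into a record; return null for lines that are not valid records.
            static LineRecord parse(String line) {
                String[] parts = line.split(";");   // placeholder format
                return parts.length == 2 ? new LineRecord(parts[0], parts[1]) : null;
            }

            public static void main(String[] args) throws IOException {
                LineRecord[] previous = {null};   // last record kept (for consecutive duplicates)
                String[] currentGroup = {null};   // current group key
                int[] counter = {0};              // per-group counter

                try (Stream<String> lines = Files.lines(Path.of("input.txt"))) {
                    lines.filter(line -> !line.isBlank())        // drop unwanted lines (criteria vary)
                         .map(LazyPipeline::parse)               // parse into records
                         .filter(Objects::nonNull)               // drop invalid records
                         .filter(r -> !r.equals(previous[0]))    // keep one of each consecutive run
                         .peek(r -> previous[0] = r)
                         .forEachOrdered(r -> {                  // number within each group and consume
                             if (!r.groupKey().equals(currentGroup[0])) {
                                 currentGroup[0] = r.groupKey();
                                 counter[0] = 0;
                             }
                             counter[0]++;
                             System.out.println(counter[0] + " " + r);  // "save somewhere / consume"
                         });
                }
                // The stateful lambdas are tolerable here because the stream is sequential and ordered.
            }
        }

    The consecutive-duplicate filter and the per-group counter keep only a constant amount of state, which is the effect I was hoping to get from a lazy-evaluation style without writing the whole thing in Haskell.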

    Read the article

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property to return the normal of that PolyLine instance. I have a ConvexPolySprite class that has a List LineSegments. I have a CircleSprite class that has a Center Property and a Radius Property. I am using a static class for the collision detection method. I am testing it on a single line segment: Vector2(200,0) = Vector2(300, 200). The problem is it detects the collision anywhere along the path of the line out into space. I cannot figure out why. Thanks in advance.

        public class PolyLine
        {
            //-------------------------------------------------------------------------
            // Class Properties

            /// <summary>
            /// Property for the upper left-hand corner of the owner of this instance
            /// </summary>
            public Vector2 ParentPosition { get; set; }

            /// <summary>
            /// Relative start point of the line segment
            /// </summary>
            public Vector2 RelativeStartPoint { get; set; }

            /// <summary>
            /// Relative end point of the line segment
            /// </summary>
            public Vector2 RelativeEndPoint { get; set; }

            /// <summary>
            /// Property that gets the absolute position of the starting point of the line segment
            /// </summary>
            public Vector2 AbsoluteStartPoint
            {
                get { return ParentPosition + RelativeStartPoint; }
            }//end of AbsoluteStartPoint

            /// <summary>
            /// Gets the absolute position of the end point of the line segment
            /// </summary>
            public Vector2 AbsoluteEndPoint
            {
                get { return ParentPosition + RelativeEndPoint; }
            }//end of AbsoluteEndPoint

            public Vector2 NormalizedLeftNormal
            {
                get
                {
                    Vector2 P = AbsoluteEndPoint - AbsoluteStartPoint;
                    P.Normalize();
                    float x = P.X;
                    float y = P.Y;
                    return new Vector2(-y, x);
                }
            }//end of NormalizedLeftNormal

            //-------------------------------------------------------------------------
            // Class Constructors

            /// <summary>
            /// Sole ctor
            /// </summary>
            /// <param name="parentPosition"></param>
            /// <param name="relStart"></param>
            /// <param name="relEnd"></param>
            public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
            {
                ParentPosition = parentPosition;
                RelativeEndPoint = relEnd;
                RelativeStartPoint = relStart;
            }//end of ctor
        }//end of PolyLine class

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint,
                                       poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius;

            if (distance <= 0)
            {
                return false;
            }
            else
            {
                return true;
            }
        }//end of collided

    Read the article

  • Why do C++ people love multithreading when it comes to performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here:

    - an async approach basically based on signals, or just an async approach as called by many papers and languages like the new C# 5.0 for example, plus a "companion thread" that manages the policy of your pipeline
    - a concurrent approach or multi-threading approach

    I will just say that I'm thinking about the hardware here and the worst case scenario, and I have tested these 2 paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about concurrency 90% of the time when they want to speed things up or make good use of their resources.

    I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't offer a memory controller inside the CPU; the memory is managed entirely by the motherboard. Well, in this case performance is horrible with a multi-threaded application: even a relatively low number of threads like 3-4-5 can be a problem, and the application is unresponsive and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang, it's responsive and there is much better scaling going on. I have also discovered that a context switch in the threading world is not that cheap in a real world scenario; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed.

    On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as with the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that the newer CPUs have more than 2 cores is no bargain for me.

    From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it also introduces a lot of issues ranging from the use of my data structures to the joining of multiple threads. Also, both paradigms do not offer any guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view.

    Given the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? Also, why not consider the worst case scenario of a computer where the context switch is probably more expensive than the computation itself?

    Read the article

  • Problem with DualBooting Ubuntu 13.10 and Win7

    - by VinArrow
    This is my first post here on AskUbuntu, though it's not my first time using it. I wanted to install Ubuntu 13.10 on my PC to have all my work stuff there and leave Win7 for gaming. So I did my research on how to dual boot when you already have Win7 installed; here are the steps I took:

    1. Used Disk Management on Win7 and shrunk that partition, leaving 80GB free for Ubuntu.
    2. Made a bootable pendrive following the instructions on Ubuntu's website.
    3. During the installation steps there was supposed to be an "Install alongside Win7" option, but there wasn't, so I chose "Something else". Everything was fine and I was able to install Ubuntu no problem on my unallocated 80GB partition (76GB Ubuntu + 4GB swap).
    4. There was a prompt for me to restart my PC and so I did, expecting to see the dual boot screen (GRUB, right?).

    Now, when I restarted my PC, GRUB never showed up and it booted straight to Windows. Then I did some more research and found out that that could happen. I tried three things:

    1. Plugging in the bootable pendrive again and selecting "Try Ubuntu without installing". Then I followed some instructions found here (How can I repair grub? (How to get Ubuntu back after installing Windows?)) and I could chroot into my Ubuntu install just fine. Repaired grub as instructed on that link, restarted the PC and booted straight into Win7 again.
    2. Again, used the bootable pendrive to "Try Ubuntu..." and used the Boot-Repair tool (recommended repair). Again, booted straight into Win7.
    3. Lastly, I installed easyBCD on my Win7 and made a new entry for Ubuntu (Linux/BSD). When I rebooted the PC, there was the option to choose between Win7 and Linux; I chose Linux and it didn't work, taking me straight to a command-line-like environment that read "Minimum bash like scripting" or something, as if I didn't have a Linux OS installed.

    So, I thought I'd try and repair my Ubuntu install. And during the "Installation method" step there was the choice to install alongside Ubuntu 13.10! And that right there drove me crazy. Here is a screenshot of gparted showing how things are set up now: http://imageshack.us/f/801/77u3.png/ Notice on the left-hand side how I can access my installation files just fine. sdb1 - Win7 reserved space, sdb2 - Win7 OS, sdb3 - 76GB Ubuntu install, sdb5 - 4GB swap area.

    Does anyone know why my Ubuntu 13.10 is not being recognized, and what should I do to get it working? Thanks, and sorry for the long read and bad English! (BIOS = legacy)

    Read the article

  • The Minimalist Approach to Content Governance - Manage Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    Most people would probably agree that creating content is the enjoyable part of the content life cycle. Management, on the other hand, is generally not. This is why we thankfully have an opportunity to leverage meta data, security and other settings that have been applied or inherited in the prior parts of our governance process. In the interests of keeping this process pragmatic, there is little day to day activity that needs to happen here. Most of the activity that happens post creation will occur in the final "Retire" phase, in which content may be archived or removed. The Manage Phase will focus on updating content and the meta data associated with it - specifically around ownership. Oftentimes the largest issues with content ownership occur when a content creator leaves an organization or changes roles within an organization.

    1. Update Content Ownership (as needed and on a quarterly basis)

    Why - Without updating content to reflect ownership changes it will be impossible to continue to meaningfully govern the content.

    How - Run reports against the meta data (creator and department to which the creator belongs) on content items for a particular user that may have left the organization or changed job roles. If the content is without an owner, connect with the department marked as responsible. The content's ownership should be reassigned, or, if the content is no longer needed by that department for some reason, it can be archived and/or deleted. With a minimal investment it is possible to create reports that use an LDAP or Active Directory system to contrast all noted content owners versus the users that exist in the directory. The delta will indicate which content will need new ownership assigned.

    Impact - This implicitly keeps your repository and search collection clean. A very large percent of content that ends up no longer being useful to an organization falls into this category. This management phase (automated if possible) should be completed every quarter or as needed. The impact of actually following through with this phase is substantial and will provide end users with a better experience when browsing and searching for content.
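
    As a rough illustration of the kind of report described above - assuming the directory is reachable over LDAP via JNDI and the repository's meta data report can be reduced to a set of owner IDs (both of these are assumptions, and the server name, base DN and attribute names below are placeholders) - the delta is essentially a set difference:

        import java.util.HashSet;
        import java.util.Hashtable;
        import java.util.Set;
        import javax.naming.NamingEnumeration;
        import javax.naming.directory.InitialDirContext;
        import javax.naming.directory.SearchControls;
        import javax.naming.directory.SearchResult;

        // Rough sketch of the quarterly ownership report: collect user IDs from the
        // directory, subtract them from the set of content owners recorded in meta data,
        // and whatever is left needs a new owner. Connection details are placeholders.
        public class OrphanedContentReport {

            static Set<String> directoryUsers() throws Exception {
                Hashtable<String, String> env = new Hashtable<>();
                env.put("java.naming.factory.initial", "com.sun.jndi.ldap.LdapCtxFactory");
                env.put("java.naming.provider.url", "ldap://directory.example.com:389");

                InitialDirContext ctx = new InitialDirContext(env);
                SearchControls controls = new SearchControls();
                controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

                Set<String> users = new HashSet<>();
                NamingEnumeration<SearchResult> results =
                    ctx.search("ou=people,dc=example,dc=com", "(objectClass=person)", controls);
                while (results.hasMore()) {
                    // "uid" is an assumed attribute name; Active Directory would differ
                    users.add((String) results.next().getAttributes().get("uid").get());
                }
                ctx.close();
                return users;
            }

            // contentOwners would come from the repository's meta data report.
            static Set<String> ownersNeedingReassignment(Set<String> contentOwners) throws Exception {
                Set<String> orphaned = new HashSet<>(contentOwners);
                orphaned.removeAll(directoryUsers());   // the delta: owners no longer in the directory
                return orphaned;
            }
        }

    Whatever ownersNeedingReassignment returns maps to content whose owner no longer exists in the directory, which is exactly the set that should be routed to the responsible department for reassignment or archiving.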

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment.

    As a simple example, the poll() system call takes a timeout argument in units of msec:

        System Calls                                          poll(2)
        NAME
            poll - input/output multiplexing
        SYNOPSIS
            int poll(struct pollfd fds[], nfds_t nfds, int timeout);

    In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec.

    In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mention CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list:

        nanosleep, pthread_mutex_timedlock, pthread_mutex_reltimedlock_np,
        pthread_rwlock_timedrdlock, pthread_rwlock_reltimedrdlock_np,
        pthread_rwlock_timedwrlock, pthread_rwlock_reltimedwrlock_np,
        mq_timedreceive, mq_reltimedreceive_np, mq_timedsend, mq_reltimedsend_np,
        sem_timedwait, sem_reltimedwait_np, poll, select, pselect,
        _lwp_cond_timedwait, _lwp_cond_reltimedwait, semtimedop, sigtimedwait,
        aiowait, aio_waitn, aio_suspend, port_get, port_getn, cond_timedwait,
        cond_reltimedwait, setitimer (ITIMER_REAL), misc rpc calls, misc ldap calls

    This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU, but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread.

    Here are some exceptions for which the default resolution is still 10 msec.

    - The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick.
    - The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules.
    - A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick.

    Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.

    Read the article

  • Protobuf design patterns

    - by Monster Truck
    I am evaluating Google Protocol Buffers for a Java based service (but am expecting language agnostic patterns). I have two questions: The first is a broad general question: What patterns are we seeing people use? Said patterns being related to class organization (e.g., messages per .proto file, packaging, and distribution) and message definition (e.g., repeated fields vs. repeated encapsulated fields*) etc. There is very little information of this sort on the Google Protobuf Help pages and public blogs while there is a ton of information for established protocols such as XML. I also have specific questions over the following two different patterns: Represent messages in .proto files, package them as a separate jar, and ship it to target consumers of the service --which is basically the default approach I guess. Do the same but also include hand crafted wrappers (not sub-classes!) around each message that implement a contract supporting at least these two methods (T is the wrapper class, V is the message class (using generics but simplified syntax for brevity): public V toProtobufMessage() { V.Builder builder = V.newBuilder(); for (Item item : getItemList()) { builder.addItem(item); } return builder.setAmountPayable(getAmountPayable()). setShippingAddress(getShippingAddress()). build(); } public static T fromProtobufMessage(V message_) { return new T(message_.getShippingAddress(), message_.getItemList(), message_.getAmountPayable()); } One advantage I see with (2) is that I can hide away the complexities introduced by V.newBuilder().addField().build() and add some meaningful methods such as isOpenForTrade() or isAddressInFreeDeliveryZone() etc. in my wrappers. The second advantage I see with (2) is that my clients deal with immutable objects (something I can enforce in the wrapper class). One disadvantage I see with (2) is that I duplicate code and have to sync up my wrapper classes with .proto files. Does anyone have better techniques or further critiques on any of the two approaches? *By encapsulating a repeated field I mean messages such as this one: message ItemList { repeated item = 1; } message CustomerInvoice { required ShippingAddress address = 1; required ItemList = 2; required double amountPayable = 3; } instead of messages such as this one: message CustomerInvoice { required ShippingAddress address = 1; repeated Item item = 2; required double amountPayable = 3; } I like the latter but am happy to hear arguments against it.

    Read the article

  • Recommended learning path?

    - by stairmast0r
    First, my current standing: I know C++ at an... advanced beginner level? I've gone through a book, I know the syntax well enough, I know a fair amount of standard library functions, and I've programmed some simple console stuff with it. I'd probably be able to program more with it if I knew how to structure a program, but I just can't seem to wrap my head around the whole concept of structuring something remotely complex. I've messed around with Java for a day or two, and the syntax was extremely easy to get the hang of, except that I didn't really know any functions. I'm plenty willing to learn, and to work hard to do so, but I don't really know where to go from here.

    Now, at the risk of sounding cliche, what I'd like to become is someone like the great three of id: Carmack, Romero, and Abrash. To be considered a genius. I believe anything can be learned, and nothing mentally limits anyone except lack of desire to learn. But I don't know how to learn this. They learned by doing, and making do with what resources they had. On the other hand, I have access to almost any books I want, access to the internet, and access to a more than capable computer and software.

    Should I learn more languages? Assembly? LISP? BASIC? Haskell? Should I dive straight into advanced topics like OpenGL? Or should I wait until I feel I've come closer to mastering the simpler things, like console programs, first? Should I follow tutorials? Should I follow books? Should I just dive into writing something and follow a reference manual as I go? What order should I do all this in? How should I do it?

    I want to completely master this; to be considered a genius. The most perfect career I can imagine is to start the next id. I have the drive to do it, I just don't know where to begin...

    Read the article

  • squid and ftp connections

    - by Kstro21
    I have a squid proxy server for both http and ftp connections. I'm trying to use FileZilla to open an FTP connection, but it always fails with an error saying:

        Status: Connection with proxy established, performing handshake...
        Response: Proxy reply: HTTP/1.0 403 Forbidden
        Error: Proxy handshake failed: ECONNRESET - Connection reset by peer
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    I sniffed the traffic, and FileZilla is trying to connect to a different port and the proxy denies it. Look, this is a portion of the sniff result:

        CONNECT 201.150.36.227:61179 HTTP/1.1
        Host: 201.150.36.227:61179
        User-Agent: FileZilla

    Every time it is a different port, so there is no way I can allow it in the squid config. Also, I set FileZilla to use an active connection, same result; passive connection, same result again. So, I'm out of bullets, and I need your help. Maybe a setting in FileZilla or in squid can do the job, so give a hand here. This is the full log from FileZilla:

        Status: Connecting to uhma.mx through proxy
        Status: Connecting to 172.19.216.13:3128...
        Status: Connection with proxy established, performing handshake...
        Response: Proxy reply: HTTP/1.0 200 Connection established
        Status: Connection established, waiting for welcome message...
        Response: 220 ProFTPD 1.3.3a Server (a3 FTP CUATRO) [201.150.36.227]
        Command: USER uhmamx
        Response: 331 Password required for uhmamx
        Command: PASS *******
        Response: 230 User uhmamx logged in
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is the current directory
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (201,150,36,227,238,251).
        Command: MLSD
        Status: Connecting to 172.19.216.13:3128...
        Status: Connection with proxy established, performing handshake...
        Response: Proxy reply: HTTP/1.0 403 Forbidden
        Error: Proxy handshake failed: ECONNRESET - Connection reset by peer
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    Read the article

  • Outlook 2007 unable to connect to exchange server

    - by mattwarren
    I came into the office a week ago and Outlook has refused to connect ever since; it just says "Disconnected" in the bottom right-hand corner. I've tried restarting it, rebooting Windows, etc. I'm the only one in our office who is having this problem, so it's not a general problem with the server. Things I've tried:

    - Pinging the server via IP address and host name: both work fine
    - Connecting via OWA: this works, using the same machine name
    - Connecting to Exchange via HTTP ("Outlook Anywhere"): doesn't work
    - None of the suggestions in this question helped: http://serverfault.com/questions/21755/can-ping-exchange-server-cant-connect-outlook-to-it
    - Disabling the Windows firewall on my laptop: no effect

    There are no items in the event viewer that indicate anything is up, and no permissions have changed on the server since it last worked. Update: I've just tried logging onto a completely different PC, using my domain user name and password. When I set up Outlook on there it also fails, so the problem isn't specific to one PC or Outlook install; it's something about my particular user name, and yet OWA still works. What else can I do to diagnose this? Any suggestions?
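
    Since the failure follows the account rather than the machine, and OWA still works, one server-side thing worth checking is whether MAPI access has been switched off for that one mailbox. A short Exchange Management Shell check, sketched on the assumption that this is Exchange 2007 or later and that "mattwarren" stands in for the affected mailbox alias:

        # does the mailbox still accept MAPI (i.e. Outlook) connections?
        Get-CASMailbox mattwarren | Format-List MAPIEnabled, OWAEnabled

        # attempt a server-side MAPI logon to the same mailbox
        Test-MapiConnectivity -Identity mattwarren

    If MAPIEnabled comes back False, then Set-CASMailbox mattwarren -MAPIEnabled $true would explain, and fix, exactly this pattern: OWA fine, Outlook stuck on "Disconnected" from every PC.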

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and the AWS Tools for Windows PowerShell, but they don't seem to include a zone file import option the way the AWS Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of prerequisites to get going, which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported I'll be looking at running a script to validate that what has been created in Route 53 matches the existing nameserver, prior to updating each domain's nameserver records. I was planning on whipping something up using the new PS 4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post, but ServerFault won't allow me to post more than 2 links as a new member; for the same reason I also can't comment on Vasili's example in the other linked thread.)
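
    For what it's worth, the Python cli53 is published on PyPI, so on Windows the setup usually reduces to installing Python plus pip and letting pip pull in the dependencies. A rough sketch in PowerShell; the zone name, file name, and exact import flags below are assumptions, so check cli53 import --help for the release you end up with:

        # install Python 2.7 and pip first, then:
        pip install cli53

        # boto (the AWS library cli53 uses) reads credentials from the environment
        $env:AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY"
        $env:AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_KEY"

        # import a BIND zone file into the hosted zone for example.com
        cli53 import example.com --file example.com.zone

    Once one zone imports cleanly, wrapping the last command in a loop over the zone files and then diffing the results with Resolve-DnsName, as planned, seems like a reasonable validation step.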

    Read the article

  • Timer_EntityBody, Timer_ConnectionIdle and Connection Closed Unexpectly

    - by ihsany
    We have a Windows application that connects to a web service (an XML web service hosted on a Windows 2008 Server, IIS 7.5, no antivirus) and fetches some data for the client. But sometimes (around 5%-10% of the requests) it gives an error when trying to connect to the web service. Here is the client application error log:

        Exception: System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly.
           at System.Web.Services.Protocols.WebClientAsyncResult.WaitForResponse()
           at System.Web.Services.Protocols.WebClientProtocol.EndSend(IAsyncResult asyncResult, Object& internalAsyncState, Stream& responseStream)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.EndInvoke(IAsyncResult asyncResult)
           at APPClient.APPFPService.WEBService.EndAddMoney(IAsyncResult asyncResult)
           at APPClient.BLL.ServiceAgent.AddMoneyCallback(IAsyncResult ar)

    On the other hand, on the web server I checked the HTTP error (HTTPERR) logs and I see a long file like this:

        2014-06-05 14:02:04 65.82.178.73 53798 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:07:24 76.109.81.223 58985 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:07:39 76.109.81.223 2803 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:08:59 76.109.81.223 52656 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
        2014-06-05 14:09:05 65.82.178.73 53904 SERVER.IP.ADDRESS 80 HTTP/1.1 POST /webservice/webservice.asmx - 2 Timer_EntityBody SYPService
        2014-06-05 14:10:55 50.186.180.191 50648 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -

    Here is a similar situation, but it did not help me. UPDATE: When I checked the IIS logs, I see entries like these:

        cs-method  cs-uri-stem                  sc-status  sc-win32-status  time-taken  cs-version
        POST       /webservice/webservice.asmx  400        64               46          HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               134675      HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               37549       HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               109         HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               31          HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               0           HTTP/1.1
        POST       /webservice/webservice.asmx  400        64               15          HTTP/1.1

        sc-win32-status 64: The specified network name is no longer available.
        sc-status 400: Bad request

    Also, some requests take around 130 seconds while others take less than 1 second. This is a Windows application which connects to a web service to process some data; there is no query that takes around 130 seconds on the database.
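
    Timer_ConnectionIdle entries are usually harmless (idle keep-alive connections being closed), but Timer_EntityBody means http.sys gave up waiting for the request body, and the ~130-second time-taken values sit suspiciously close to IIS's default 2-minute connection timeout. If the slow requests are legitimate (clients on poor links, for example), one mitigation to try is raising that timeout; the 5-minute value below is an arbitrary assumption, and this only hides, rather than explains, why some clients need over two minutes to send a small SOAP body:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/webLimits /connectionTimeout:"00:05:00" /commit:apphost

    On the client side it is also worth confirming that the generated proxy's Timeout property comfortably exceeds the slowest observed request, so the client is not tearing down connections mid-call.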

    Read the article

  • How to avoid lftp Certificate verification error?

    - by pattulus
    I'm trying to get my Pelican blog working. It uses lftp to transfer the actual blog to one's server, but I always get an error:

        mirror: Fatal error: Certificate verification: subjectAltName does not match ‘blogname.com’

    I think lftp is checking the SSL certificate, and the Pelican quick setup just forgot to account for the fact that I don't have SSL on my FTP. This is the code in Pelican's Makefile:

        ftp_upload: $(OUTPUTDIR)/index.html
            lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit"

    which renders in the terminal as:

        lftp ftp://[email protected] -e "mirror -R /Volumes/HD/Users/me/Test/output /myblog_directory ; quit"

    What I've managed so far is to skip the SSL check by changing the Makefile to:

        lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no" "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit"

    Due to my incorrect implementation I do get logged in correctly (lftp [email protected]:~>), but the one-line feature doesn't work anymore and I have to enter the mirror command by hand:

        mirror -R /Volumes/HD/Users/me/Test/output/ /myblog_directory

    This works without an error or timeout. The question is how to do this with a one-liner. In addition I tried:

        set ssl:verify-certificate/ftp.myblog.com no

    and this trick to disable certificate verification in lftp:

        $ cat ~/.lftp/rc
        set ssl:verify-certificate no

    However, it seems there is no "rc" folder in my lftp directory, so this approach had no chance to work.
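
    For the record, lftp's -e option takes a single command string, and commands can be chained inside it with semicolons, so the one-liner being asked for would look something like the sketch below (the variables are the ones from Pelican's Makefile; note also that ~/.lftp/rc is a plain file you create yourself, not a folder):

        # skip certificate verification, mirror, then quit, all in one -e string
        lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ssl:verify-certificate no; mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR); quit"

        # or, if the server should not use TLS at all:
        lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no; mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR); quit"

        # persistent alternative: create the rc file once and leave the Makefile untouched
        echo "set ssl:verify-certificate no" >> ~/.lftp/rc

    Splitting the settings across two quoted strings, as in the modified Makefile above, only passes the first one to -e, which is why the mirror step had to be typed by hand.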

    Read the article
