Search Results

Search found 9983 results on 400 pages for 'fuzzy c means'.

Page 19/400 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Future proof Tweeting with PHP

    - by YsoL8
    Hello, I'm looking to implement a system for tweeting directly from my site backend, which is written in PHP 5. I have a script from the internet that I can adapt, but I'm concerned that when Twitter switches to OAuth only, I'll be out in the cold. Basically, I'm hoping someone can point me toward a script/tutorial that will let me do the following: (1) access Twitter via the OAuth system, (2) post tweets and receive error codes, and (3) let me define an application/site name (I'm a bit fuzzy on whether Twitter allows this). Ideally I need all 3 points explained in detail. Thanks
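
    A minimal sketch of the OAuth posting part, assuming Abraham Williams' "twitteroauth" library (one commonly used PHP library for this at the time) and placeholder keys. The application/site name shown next to a tweet comes from the application registered with Twitter, not from this code.

    ```php
    <?php
    // Sketch only: assumes the twitteroauth library is available locally and
    // that the consumer/access keys below are replaced with real ones.
    require_once 'twitteroauth/twitteroauth.php';

    $consumerKey       = 'YOUR_CONSUMER_KEY';
    $consumerSecret    = 'YOUR_CONSUMER_SECRET';
    $accessToken       = 'YOUR_ACCESS_TOKEN';
    $accessTokenSecret = 'YOUR_ACCESS_TOKEN_SECRET';

    $connection = new TwitterOAuth($consumerKey, $consumerSecret,
                                   $accessToken, $accessTokenSecret);

    // Post a tweet; the decoded response carries Twitter's error details on failure.
    $response = $connection->post('statuses/update',
                                  array('status' => 'Posted from my site backend'));

    if (isset($response->error)) {
        // Log or surface whatever error message Twitter returned.
        error_log('Tweet failed: ' . $response->error);
    }
    ```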

    Read the article

  • Is VisualAssistX's autorenaming reliable?

    - by Stefan Monov
    I'm using the VS add-on called VisualAssistX, for C++ only. In particular I'm using the feature "rename this entity project-wide", mainly for class names and function names. My question is: does this use fuzzy heuristics, or does it actually implement C++ semantics reliably, so there are no false negatives or false positives? Has it ever renamed something incorrectly for you?

    Read the article

  • Getting N random numbers whose sum is M

    - by marionmaiden
    Hello, I want to get N random numbers whose sum is a given value M. For example, suppose I want 5 random numbers whose sum is 1. Then a valid possibility is: 0.2 0.2 0.2 0.2 0.2. Another possibility is: 0.8 0.1 0.03 0.03 0.04. And so on. I need this for creating the membership matrix of Fuzzy C-means.
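
    A minimal sketch of one common approach (Python, illustrative values): draw N positive random numbers and rescale them so they sum to M. With M = 1 each row is a valid Fuzzy C-means membership vector.

    ```python
    import random

    def random_numbers_with_sum(n, m):
        """Return n non-negative random numbers that sum to m (rescaling approach)."""
        draws = [random.random() for _ in range(n)]
        total = sum(draws)
        return [m * x / total for x in draws]

    row = random_numbers_with_sum(5, 1.0)
    print(row, sum(row))   # five values summing to 1.0 (up to float rounding)
    ```

    Note that rescaling uniform draws does not sample uniformly from the simplex; if that matters, exponential draws (or numpy.random.dirichlet) give uniform simplex samples instead.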

    Read the article

  • MVVM Tutorial/Example Code with internet connectivity

    - by SpikeX
    I understand the View and ViewModel portions of MVVM, but what I'm still really fuzzy on is how you connect your application to data sources on the Internet (say you're grabbing some XML or JSON from the web), and specifically, where that code goes in your application. Can someone provide or link to some example code or a tutorial that walks you through setting up a simple WPF (or Silverlight) application that fetches data from the Web?
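
    A minimal sketch of where such code typically lives (the class name and URL are placeholders): the ViewModel owns the fetch and exposes the result as a bindable property, so the View never touches the network.

    ```csharp
    using System;
    using System.ComponentModel;
    using System.Net;

    // Hypothetical ViewModel: the View binds to Payload; the network call stays here.
    public class FeedViewModel : INotifyPropertyChanged
    {
        private string _payload;

        public string Payload
        {
            get { return _payload; }
            private set { _payload = value; OnPropertyChanged("Payload"); }
        }

        public void Load()
        {
            var client = new WebClient();
            client.DownloadStringCompleted += (s, e) =>
            {
                if (e.Error == null)
                    Payload = e.Result;   // parse the JSON/XML here before assigning
            };
            client.DownloadStringAsync(new Uri("http://example.com/data.json"));
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }
    ```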

    Read the article

  • Lucene MultiFieldQueryParser which column of the three generated the hit

    - by user549432
    I am using the Lucene MultiFieldQueryParser, and the implementation is as shown below:
        QueryParser parser = new MultiFieldQueryParser(Version.LUCENE_30, new String[] {"First Name", "Middle Name", "Last Name"}, standardAnalyzer);
        Query query = parser.parse(queryString);
    I use it to find a match for the input string in my DB columns First Name, Middle Name and Last Name. I am able to get the hits with normal search and fuzzy search. The only problem I am facing is finding which column of the three generated the hit. Can you please help me here? Thanks
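
    One possible sketch (not the only way): re-run the query per field and ask explain() whether that field alone matches. It assumes the searcher, standardAnalyzer and queryString from the snippet above, plus docId for one of the hits already returned.

    ```java
    // Hypothetical follow-up to the code in the question.
    String[] fields = { "First Name", "Middle Name", "Last Name" };
    for (String field : fields) {
        QueryParser singleFieldParser =
                new QueryParser(Version.LUCENE_30, field, standardAnalyzer);
        Query singleFieldQuery = singleFieldParser.parse(queryString);
        Explanation explanation = searcher.explain(singleFieldQuery, docId);
        if (explanation.isMatch()) {
            System.out.println("Hit came from column: " + field);
        }
    }
    ```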

    Read the article

  • Mipmapping issue with textures rendered on to a flat quad (OpenGL)

    - by Mike2012
    I am having what seems to be a mipmapping problem when rendering textures onto a flat quad. At some camera positions the object looks fine, but at others it gets very fuzzy. Unfortunately I don't really have any good leads on this problem, but I thought that if I posted some pictures, others who have experienced similar issues might be able to give me some insight. (Screenshots: normal, zoomed out, rotated.) Could anyone give me any clues about what could be going on here?
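
    Fuzziness that shows up only at certain distances or viewing angles is often down to the minification filter rather than the texture itself. A hedged C sketch of the texture parameters worth checking; the anisotropic part assumes the EXT_texture_filter_anisotropic extension is available.

    ```c
    #include <GL/gl.h>
    #include <GL/glext.h>   /* anisotropy tokens; needs EXT_texture_filter_anisotropic */

    /* Call inside a valid GL context, with your own texture id. */
    static void setup_texture_filtering(GLuint textureId)
    {
        glBindTexture(GL_TEXTURE_2D, textureId);

        /* Trilinear filtering: blend between mipmap levels instead of snapping. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Anisotropic filtering helps most when the quad is viewed at a shallow angle. */
        GLfloat maxAniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
    }
    ```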

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side by side scenarios there is no way around MSI. You can create Msi files with a Visual Studio Setup project which is severely limited or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author Msi files but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI Apis and create your own packaging solution. If I were you I would use either a commercial visual tool when you do easy deployments or use the free Windows Installer Toolset. Once you know the WIX schema you can create well formed wix xml files easily with any editor. Then you can “compile” from the wxs files your Msi package. Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic but after several month of digging into arcane MSI issues I can safely say that there should exist an easier way to install and update files as today. I am not alone with this statement as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues I present you the advice I (and others) found useful and what will happen if you ignore this advice. What is a MSI file? A MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install and MSI controls the how to get your stuff onto or off your machine. Your “stuff” consists usually of files, registry keys, shortcuts and environment variables. Therefore the most important tables are File, Registry, Environment and Shortcut table which define what will be un/installed. The key to master MSI is that every resource (file, registry key ,…) is associated with a MSI component. The actual payload consists of compressed files in the CAB format which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca a free MSI editor provided by MS. There is also another free editor called Super Orca which does support diffs between MSI and it does not lock the MSI files. But since Orca comes with a shell extension I tend to use only Orca because it is so easy to right click on a MSI file and open it with this tool. How Do I Install It? Double click it. This does work for fresh installations as well as major upgrades. Updates need to be installed via the command line via msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus   This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters do force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things did go really wrong and you want to overwrite everything unconditionally use REINSTALLMODE=vamus. How To Enable MSI Logs? You can download a MSI from Microsoft which installs some registry keys to enable full MSI logging. 
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName> Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x which is documented in the msiexec help but I still find this behavior unintuitive. What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every msi table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component table of an example MSI. The Feature table defines basically the feature hierarchy.  To find out what belongs to a feature you need to look at the FeatureComponents table where for each feature the components are listed which will be installed when a feature is installed. The MSI components are defined in the  Component table. This table has as first column the component name and as second column the component id which is a GUID. All resources you want to install belong to a MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. a the File table you see that every file belongs to a component which is true for all other tables which install resources. The component table is the glue between all other tables which contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you did violate a MSI component rule in one or the other way. When you install a feature the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: Then all associated resources will be deleted. That looks like a reasonable thing and it is. What it makes complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer … Rule 16: Follow Component Rules Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system. 
The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories. Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component. Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut. … And these rules do not even talk about component ids, update packages and upgrades which you need to understand as well. Lets suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both do install the same file. What will happen when you uninstall MSI2?   Hm the file should stay there. But the component names are different. Yes and yes. But MSI uses not use the component name as key for the refcount. Instead the ComponentId column of the Component table which contains a GUID is used as identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the Component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one which drops to zero just as expected when we uninstall the last msi. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI does manage components not the resources you did install. The resources associated with a component are then and only then deleted when the refcount of the component reaches zero.   The dependencies between features, components and resources can be described as relations. m,k are numbers >= 1, n can be 0. Inside a MSI the following relations are valid Feature    1  –> n Components Component    1 –> m Features Component      1  –>  k Resources These relations express that one feature can install several components and features can share components between them. Every (meaningful) component will install at least one resource which means that its name (primary key to stay in database speak) does occur in some other table in the Component column as value which installs some resource. Lets make it clear with an example. We want to install with the feature MainFeature some files a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.   Feature Component Registry File Shortcuts MainFeature Comp1 RegistryKey1     MainFeature Comp2   File.txt   MainFeature Comp3   File2.txt Shortcut to File2.txt   It is illegal that the same resource is part of more than one component since this would break the refcount mechanism. Lets illustrate this:            Feature ComponentId Resource Reference Count Feature1 {1000-…} File1.txt 1 Feature2 {2000-….} File1.txt 1 The installation part works well but what happens when you uninstall Feature2? Component {20000…} gets a refcount of zero where MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {10000…} with a refcount of one which means that the file was deleted too early. You just have ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might has noticed that there is more in the Component table. 
Beside its name and GUID it has also an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed MSI checks if the registry key or file referenced by the KeyPath property does exist. When it does not exist it assumes that it was either already uninstalled (can lead to problems during uninstall) or that it is already installed and all is fine. Why is this detail so important? Lets put all files into one component. The KeyPath should be then one of the files of your component to check if it was installed or not. When your installation becomes corrupt because a file was deleted you cannot repair it with the Repair button under Add/Remove Programs because MSI checks the component integrity via the Resource referenced by its KeyPath. As long as you did not delete the KeyPath file MSI thinks all resources with your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out and therefore best practice is to assign for every resource you want to install an extra component. This ensures painless updatability and repairs and you have much less effort to remove specific files during an upgrade. In effect you get this best practice relation Feature 1  –> n Components Component   1  –>  1 Resources MSI Component Rules Rule 1 – One component per resource Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component which does never change between versions as long as the install location is the same. Penalty If you add more than one resources to a component you will break the repair capability of MSI because the KeyPath is used to check if the component needs repair. MSI ComponentId Files MSI 1.0 {1000} File1-5 MSI 2.0 {2000} File2-5 You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. Ok that does work if your upgrade does uninstall the old MSI first. This will cause the refcount of all previously installed components to reach zero which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade by first installing your new MSI and then remove the old one.  If you choose this upgrade path then you will loose File1-5 after your upgrade and not only File1 as intended by your new component design.   Rule 2 – Only add, never remove resources from a component If you did follow rule 1 you will not need Rule 2. You can add in a patch more resources to one component. That is ok. But you can never remove anything from it. There are tricky ways around that but I do not want to encourage bad component design. Penalty Lets assume you have 2 MSI files which install under the same component one file   MSI1 MSI2 {1000} - ComponentId {1000} – ComponentId File1.txt File2.txt   When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 will be left. Why? 
It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows will simply query the MSI that is currently uninstalled for all resources belonging to this component. Since it will find only one file and not two it will only uninstall one file. That is the main reason why you never can remove resources from a component!   Rule 3 Never Remove A Component From an Update MSI. This is the same as if you change the GUID of a component by accident for your new update package. The resulting update package will not contain all components from the previously installed package. Penalty When you remove a component from a feature MSI will set the feature state during update to Advertised and log a warning message into its log file when you did enable MSI logging. SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported! MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported Advertised means that MSI treats all components of this feature as not installed. As a consequence during uninstall nothing will be removed since it is not installed! This is not only bad because uninstall does no longer work but this feature will also not get the required patches. All other features which have followed component versioning rules for update packages will be updated but the one faulty feature will not. This results in very hard to find bugs why an update was only partially successful. Things got better with Windows Installer 4.5 but you cannot rely on that nobody will use an older installer. It is a good idea to add to your update msiexec call MSIENFORCEUPGRADECOMPONENTRULES=1 which will abort the installation if you did violate this rule.
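
    A minimal WiX sketch (not from the article) of the closing best-practice relation, one resource per component with that resource as the KeyPath, reusing the article's MainFeature/File.txt/File2.txt example names; the GUID placeholders and directory reference are illustrative only.

    ```xml
    <DirectoryRef Id="INSTALLDIR">
      <!-- One resource per component; each file is its own KeyPath. -->
      <Component Id="Comp2" Guid="PUT-GUID-HERE-1">
        <File Id="File_txt" Source="File.txt" KeyPath="yes" />
      </Component>
      <Component Id="Comp3" Guid="PUT-GUID-HERE-2">
        <File Id="File2_txt" Source="File2.txt" KeyPath="yes" />
        <!-- In a strict one-resource layout the shortcut would get its own component. -->
      </Component>
    </DirectoryRef>

    <Feature Id="MainFeature" Level="1" Title="Main feature">
      <ComponentRef Id="Comp2" />
      <ComponentRef Id="Comp3" />
    </Feature>
    ```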

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspxThe easier something to measure the more likely it will be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for. It´s no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation if fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is build. Such fundamental importance is in stark contrast with its immeasurability. To compensate this, Evolvability must be put at the very center of software development. It must become the hub around everything else revolves. Since we cannot measure Evolvability, though, we cannot start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for long. That´s why everybody in a restaurant kitchen is constantly seeing after cleanliness. Hygiene is important as is to have clean tools at standardized locations. Only then the health of the patrons can be guaranteed and production efficiency is constantly high. Still a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but too embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum´s sprints of 4, 2 even 1 week are too long. Kanban´s flow of user stories across is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several resasons for such a fixed and short cycle time for each increment: Clear expectations Absolute estimates (“This will take X days to complete.”) are near impossible in software development as explained previously. Too much unplanned research and engineering work lurk in every feature. And then pervasive interruptions of work by peers and management. However, the smaller the scope the better our absolute estimates become. That´s because we understand better what really are the requirements and what the solution should look like. 
But maybe more importantly the shorter the timespan the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That´s why I believe we can give rough absolute estimates on 3 levels: Noon Tonight Tomorrow Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you, how long it will take you to implement a user story or bug fix, you can say, “It´ll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You´ll get it by tomorrow night at latest.” Yes, I believe all else would be naive. If you´re not confident to get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don´t do deep estimates. Don´t estimate dozens of issues; don´t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it´s C as a Noon, finally D is a Tonight - that´s what I´ll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability Our trade is lacking trust. Customers don´t trust software companies/departments much. Managers don´t trust developers much. I find that perfectly understandable in the light of what we´re trying to accomplish: delivering software in the face of uncertainty by means of material good production. Customers as well as managers still expect software development to be close to production of houses or cars. But that´s a fundamental misunderstanding. Software development ist development. It´s basically research. As software developers we´re constantly executing experiments to find out what really provides value to users. We don´t know what they need, we just have mediated hypothesises. That´s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - up to 32 hours at most can be satisfied. I´d say: reliability over scope. It´s more important to reliably deliver what was promised then to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don´t wait for some Kanban board to show you, that flow can be improved by scheduling smaller stories. You don´t need to learn that the hard way. Just start with small batch sizes of three different sizes. Fast feedback What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? 
Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Resoning becomes hard. But more importantly youo probably are not in the mood anymore to go back to something you deemed done a long time ago. It´s boring, it´s frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It´s even more important than planning new issues. Because as long as work started is not released (accepted) it´s potential waste. So before starting new work better make sure work already done has value. By putting the emphasis on acceptance rather than planning true pull is established. As long as planning and starting work is more important, it´s a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility As if reliability/trust and fast feedback for less waste weren´t enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it´s neither D nor E, but G. And after G it´s D. With Spinning every 32 hours at latest priorities can be changed. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very benefical. Why commit a team to some workload for several weeks? It´s unnecessary at beast, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want -, we can at least provide them with unpredecented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you´re able to change your course at any time. Premature completion Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That´s understandable. But it does not match with the nature of software development. We should know that by now. Maybe there´s somewhere in the world some team who can consistently deliver on scope, quality, and time, and budget. Great! Congratulations! I, however, haven´t seen such a team yet. Which does not mean it´s impossible, but I think it´s nothing I can recommend to strive for. Rather I´d say: Don´t try this at home. It might hurt you one way or the other. However, what we can do, is allow customers/management stop work on features at any moment. 
With spinning every 32 hours a feature can be declared as finished - even though it might not be completed according to initial definition. I think, progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn´t it more important to constantly move forward? Step by step. We´re not running sprints, we´re not running marathons, not even ultra-marathons. We´re in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a features mostly are uncertain. So how does a Product Owner know in the first place, how much is needed. Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experient (or invite users to an experiment) if a particular trait of the software system is already good enough. And if so, she can switch the attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It´s good enough - for now. 33% money saved. Wouldn´t that be splendid? Isn´t that a stunning argument for any budget-sensitive customer? You can save money and still get what you need? Pull on practices So far, in addition to more trust, more flexibility, less money spent, Spinning led to “doing less” which also means less code which of course means higher Evolvability per se. Last but not least, though, I think Spinning´s short acceptance cycles have one more effect. They excert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadverted damage to a code base, why isn´t 90% of the developer community practicing automated tests consistently? I think, the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become as very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that´s different. If you´re practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus is pulling on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner then management will notice an increasing difficulty to deliver value within 32 hours. 
There, finally there emerges a way to measure Evolvability: The more frequent developers tell the Product Owner there is no way to deliver anything worth of feedback until tomorrow night, the poorer Evolvability is. Don´t count the “WTF!”, count the “No way!” utterances. In closing For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It´s sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather sales should look at a stream of accepted increments (or incremental releases) and scoup from that whatever they find valuable. Sales and marketing need to realize they should work on what´s there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that´s the challenge we need to accept if we´re serious increasing Evolvability. Fortunately higher Evolvability is not the only outcome of Spinning. Customer/management will like the increased flexibility and “getting more bang for the buck”.

    Read the article

  • Log in/out of Gmail chat programmatically, clicking Gmail's span "links"

    - by endolith
    At work, I use Gmail's chat, since it's encrypted and logs chats without installing or saving anything to the hard drive. At home, I use Pidgin. When I log into GMail at home, I have to log out of chat, or messages will end up in the wrong place. When I log into GMail at work, I have to log back in to chat. In other words, when I start Firefox at home, I want Gmail's chat disabled automatically. When I start Firefox at work, I want Gmail's chat enabled automatically. Is there a way to use a Greasemonkey script or similar to force logging in and logging out on specific machines? It would seem simple enough; just follow a URL or simulate clicking a link. Unfortunately, Gmail doesn't use actual links. While logged out: <span tabindex="0" role="link" action="si" class="az9OKd">Sign into chat</span> While logged in, in drop-down menu: <div tabindex="-1" id=":1mj" role="menuitem" class="oA" value="si"><div class="uQ c6"/>Sign into chat</div> <div tabindex="-1" id=":8f" role="menuitem" class="oA" value="sia"><div class="uQ c5"/>Sign into AIM®</div> <div tabindex="-1" id=":8e" role="menuitem" class="oA" value="so"><div class="uQ df"/>Sign out of chat</div> At bottom of page: <span id=":im" class="l8 ou" tabindex="0" role="link">turn off chat</span> <span id=":im" class="l8 ou" tabindex="0" role="link">turn on chat</span> Anyone know how to "click" these non-links with JavaScript or access their functions? I would imagine that "so" means "sign out", "si" means "sign in", and "sia" means "sign in AIM". Can I somehow call these actions directly? Is there some other alternative for disabling chat?
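
    A Greasemonkey-style sketch of one way to "click" those spans. Heavily hedged: Gmail's generated class names and ids change often, so the function searches by role and visible text, and the exact event sequence Gmail expects is an assumption.

    ```javascript
    // Dispatch a synthetic mouse-click sequence on the first element whose
    // role looks link-like and whose text contains labelText.
    function clickFakeLink(labelText) {
      var candidates = document.querySelectorAll('span[role="link"], div[role="menuitem"]');
      for (var i = 0; i < candidates.length; i++) {
        if (candidates[i].textContent.indexOf(labelText) !== -1) {
          var types = ['mousedown', 'mouseup', 'click'];
          for (var j = 0; j < types.length; j++) {
            var evt = document.createEvent('MouseEvents');
            evt.initMouseEvent(types[j], true, true, window, 0, 0, 0, 0, 0,
                               false, false, false, false, 0, null);
            candidates[i].dispatchEvent(evt);
          }
          return true;
        }
      }
      return false;
    }

    // e.g. clickFakeLink('turn off chat');  // at home
    //      clickFakeLink('turn on chat');   // at work
    // Combine with a check of the machine (for example a hard-coded hostname list)
    // to decide which call the script makes on page load.
    ```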

    Read the article

  • ELF: linking: Why do I get undefined references in .so files

    - by ki.lya.online.fr
    Hi, I'm trying to build a program against wxWidgets, and I get a linker error. I'd like to really understand what it means. The error is:
        /usr/lib/libwx_baseu-2.8.so: undefined reference to `std::ctype<char>::_M_widen_init() const@GLIBCXX_3.4.11'
    What I don't understand is why the error points at libwx_baseu-2.8.so. I thought that .so files had all their symbols resolved, contrary to .o files that still need linking. When I ldd the .so, it can resolve all its linked libraries, so there is no problem there:
        $ ldd /usr/lib/libwx_baseu-2.8.so
        linux-gate.so.1 => (0x00476000)
        libz.so.1 => /lib/libz.so.1 (0x00d9c000)
        libdl.so.2 => /lib/libdl.so.2 (0x002a8000)
        libm.so.6 => /lib/libm.so.6 (0x00759000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x002ad000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x0068d000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x006f0000)
        libc.so.6 => /lib/libc.so.6 (0x00477000)
        /lib/ld-linux.so.2 (0x007f6000)
    Does it mean that the .so file was not compiled correctly (in that case, it's a bug in my distribution package), or does it mean that there are missing libraries on the linker command line for my particular program? Additionally, do you know how I can get a list of undefined symbols in an ELF file? I tried readelf -s but I can't find the missing symbol. Thank you. Mildred
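
    A short sketch of how to list the undefined symbols, using the library path from the question. Shared objects are allowed to carry undefined symbols that the dynamic linker resolves at load time, which is why ldd can look clean while the final link of your program still fails.

    ```sh
    # Undefined dynamic symbols in the shared library (demangled):
    nm --dynamic --undefined-only --demangle /usr/lib/libwx_baseu-2.8.so

    # readelf works too; undefined entries show an UND section index:
    readelf --dyn-syms /usr/lib/libwx_baseu-2.8.so | grep ' UND '
    ```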

    Read the article

  • branch prediction

    - by Alexander
    Consider the following sequence of actual outcomes for a single static branch. T means the branch is taken. N means the branch is not taken. For this question, assume that this is the only branch in the program. T T T N T N T T T N T N T T T N T N Assume a two-level branch predictor that uses one bit of branch history—i.e., a one-bit BHR. Since there is only one branch in the program, it does not matter how the BHR is concatenated with the branch PC to index the BHT. Assume that the BHT uses one-bit counters and that, again, all entries are initialized to N. Which of the branches in this sequence would be mis-predicted? (The referenced table isn't reproduced here.) Now, I am not asking for answers to this question, but rather for guidance and pointers. What does a two-level branch predictor mean, and how does it work? What do BHR and BHT stand for?
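
    A minimal Python sketch of the predictor described above, as one way to build intuition: a single 1-bit branch history register (BHR) indexes a 2-entry branch history table (BHT) of 1-bit counters, with all state starting at N as the question specifies.

    ```python
    outcomes = "TTTNTNTTTNTNTTTNTN"

    bhr = 0                 # 1-bit history: 0 = last branch N, 1 = last branch T
    bht = [0, 0]            # one 1-bit counter per history value, initialised to N

    mispredictions = []
    for i, actual in enumerate(1 if c == "T" else 0 for c in outcomes):
        prediction = bht[bhr]
        if prediction != actual:
            mispredictions.append(i + 1)   # 1-based position in the sequence
        bht[bhr] = actual    # 1-bit counter: remember the latest outcome for this history
        bhr = actual         # the history register shifts in the latest outcome

    print("mispredicted branches:", mispredictions)
    ```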

    Read the article

  • Spell checker software

    - by Naren
    Hello guys, I have been assigned a task to find a decent spell checker (UK English), preferably a free one, for a project that we are doing. I have looked at the Google AJAX API for this. The project contains data about young people (kids less than 18 years old) which shouldn't be exposed or stored outside the application boundaries. Google logs the data for research purposes, which means Google owns whatever data we send over the wire through the Google API. Is this right? I fired off an email to Google regarding the privacy of data and storage, but they haven't come back to me. If you have some knowledge regarding this, please share it with me. At this point our servers might not have access to external entities, which means we might not be able to use a web API for this over the wire. But that may change in the future. That means I have to find some spell checker alternatives that can sit in our environment and do the job, or an external API. Would you mind sharing your findings and knowledge in this regard? I would prefer free services, but if you know of some cracking spell checker for a few quid then I don't mind recommending it to the project board. Technology: ASP.NET 3.5/4.0, MVC, jQuery, SQL Server 2008, etc. Cheers, Naren

    Read the article

  • How to write this typical MySQL query (how to use a subquery column in the main query)

    - by I Like PHP
    I have two tables, shown below.

    table_joining
    id  join_id(PK)  transfer_id(FK)  unit_id  transfer_date  joining_date
    1   j_1          t_1              u_1      2010-06-05     2010-03-05
    2   j_2          t_2              u_3      2010-05-10     2010-03-10
    3   j_3          t_3              u_6      2010-04-10     2010-01-01
    4   j_5          NULL             u_3      NULL           2010-06-05
    5   j_6          NULL             u_4      NULL           2010-05-05

    table_transfer
    id  transfer_id(PK)  pastUnitId  futureUnitId  effective_transfer_date
    1   t_1              u_3         u_1           2010-06-05
    2   t_2              u_6         u_1           2010-05-10
    3   t_3              u_5         u_3           2010-04-10

    Now I want the details of all employees (via join_id) who are currently working in unit u_3. That means I want only:
    j_1 (has transferred, but the effective_transfer_date is a future date, so right now it is in u_3)
    j_2 (transferred, and right now in u_3 because the effective_transfer_date has passed)
    j_6 (right now in u_3 and never transferred)
    What I need to take care of, as far as I know:
    <1> first check in table_joining whether transfer_id is NULL or not
    <2> if transfer_id is NULL, then look for unit_id='u_3' where joining_date <= CURDATE() (meaning that person has already joined u_3)
    <3> if transfer_id is NOT NULL, then go to table_transfer using transfer_id (the foreign key reference)
    <4> then check whether effective_transfer_date <= CURDATE() for that transfer_id
    <5> if the transfer date has passed (meaning the transfer has been done), return futureUnitId, otherwise return pastUnitId
    I used two separate queries but don't know how to join them.
    For steps <1> and <2>:
    SELECT unit_id FROM table_joining WHERE joining_date<=CURDATE() AND transfer_id IS NULL AND unit_id='u_3'
    For step <5>:
    SELECT IF(effective_transfer_date <= CURDATE(),futureUnitId,pastUnitId) AS currentUnitID FROM table_transfer
    -- here, how do we select only those rows which have currentUnitID='u_3'?
    Please guide me through the process. I'm just confused by JOINs. I think using a LEFT JOIN can return the data I need, but I'm not getting how to implement it. Please help me. Thanks for helping me, always.
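
    A sketch of one way to combine the two queries with a LEFT JOIN, following the five steps above. It is untested against the real schema, so treat it as a starting point rather than a definitive answer.

    ```sql
    SELECT j.join_id
    FROM table_joining AS j
    LEFT JOIN table_transfer AS t
           ON t.transfer_id = j.transfer_id
    WHERE
          -- steps <1> and <2>: never transferred, already joined u_3
          (j.transfer_id IS NULL
           AND j.joining_date <= CURDATE()
           AND j.unit_id = 'u_3')
       OR
          -- steps <3>, <4> and <5>: transferred, so derive the current unit
          (j.transfer_id IS NOT NULL
           AND 'u_3' = IF(t.effective_transfer_date <= CURDATE(),
                          t.futureUnitId,    -- transfer already effective
                          t.pastUnitId));    -- transfer not yet effective
    ```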

    Read the article

  • Google App Engine ClassNotPersistenceCapableException

    - by Frank
    I have the following class : import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.PrimaryKey; import com.google.appengine.api.datastore.*; @PersistenceCapable(identityType=IdentityType.APPLICATION) public class PayPal_Message { @PrimaryKey @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY) private Long id; @Persistent private Text content; @Persistent private String time; public PayPal_Message(Text content,String time) { this.content=content; this.time=time; } public Long getId() { return id; } public Text getContent() { return content; } public String getTime() { return time; } public void setContent(Text content) { this.content=content; } public void setTime(String time) { this.time=time; } } It used to be in a package, and works fine, now I put all classes in the default package, which caused me this error : org.datanucleus.jdo.exceptions.ClassNotPersistenceCapableException: The class "The class "PayPal_Message" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found." is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data for the class is not found. NestedThrowables: org.datanucleus.exceptions.ClassNotPersistableException: The class "PayPal_Message" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found. What should I do to fix it ?

    Read the article

  • Method return type

    - by sarah xia
    Hi, in my company a system is designed to have 3 layers. Layer 1 is responsible for business logic handling. Layer 3 calls back-end systems. Layer 2 sits between the two layers so that layer 1 doesn't need to know about the back-end systems. To relay information from layer 3, layer 2 needs to define an interface to layer 1. For example, layer 1 wants to check whether a PIN from the user is correct. It calls layer 2's checkPin() method, and then layer 2 calls the relevant back-end system. The checkPin() result can be: correctPin, inCorrectPin or internalError. At the moment we have defined the return type as int. So if layer 2 returns 0, it means correctPin; if 1 is returned, it means inCorrectPin; if 9 is returned, it means internalError. It works. However, I feel a bit uneasy about this approach. Are there better ways to do it? For example, define an enum CheckPinResult{CORRECT_PIN,INCORRECT_PIN,INTERNAL_ERROR} and return the CheckPinResult type? Thanks, Sarah
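
    A small sketch of the enum approach the question suggests; the type and method names mirror the question's own wording and are otherwise illustrative.

    ```java
    // The three outcomes become named constants instead of magic integers 0, 1 and 9.
    enum CheckPinResult {
        CORRECT_PIN,
        INCORRECT_PIN,
        INTERNAL_ERROR
    }

    // Hypothetical layer-2 interface exposed to layer 1.
    interface PinCheckingLayer2 {
        CheckPinResult checkPin(String pin);
    }

    class Layer1 {
        private final PinCheckingLayer2 layer2;

        Layer1(PinCheckingLayer2 layer2) {
            this.layer2 = layer2;
        }

        boolean isPinAccepted(String pin) {
            // The compiler now guarantees only the three named outcomes exist.
            return layer2.checkPin(pin) == CheckPinResult.CORRECT_PIN;
        }
    }
    ```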

    Read the article

  • Is there a good way to QuickCheck Happstack.State methods?

    - by Paul Kuliniewicz
    I have a set of Happstack.State MACID methods that I want to test using QuickCheck, but I'm having trouble figuring out the most elegant way to accomplish that. The problems I'm running into are: The only way to evaluate an Ev monad computation is in the IO monad via query or update. There's no way to create a purely in-memory MACID store; this is by design. Therefore, running things in the IO monad means there are temporary files to clean up after each test. There's no way to initialize a new MACID store except with the initialValue for the state; it can't be generated via Arbitrary unless I expose an access method that replaces the state wholesale. Working around all of the above means writing methods that only use features of MonadReader or MonadState (and running the test inside Reader or State instead of Ev. This means forgoing the use of getRandom or getEventClockTime and the like inside the method definitions. The only options I can see are: Run the methods in a throw-away on-disk MACID store, cleaning up after each test and settling for starting from initialValue each time. Write the methods to have most of the code run in a MonadReader or MonadState (which is more easily testable), and rely on a small amount of non-QuickCheck-able glue around it that calls getRandom or getEventClockTime as necessary. Is there a better solution that I'm overlooking?

    Read the article

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and here at StackOverflow, looked through the related questions and searched, but I'm not finding what I'm looking for. I've also searched documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository) etc from people who are experienced in using DotNetNuke with SVN. We use SVN for all our source control, and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means when we do web sites, in Visual Studio, we do file based web sites, rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN, it appears that even if you get the source code, it is expecting to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives, and onto a shared drive on a server. This is to enable backups in addition to our normal source control. (This was a management decision). So that means that we need to change the virtual web app when we make the move. Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious? Edit - added I'm willing to accept answers like "We tried it and never got it to work", and "It can't be done" as answers. I'm always open to hearing "It can't be done the way you want. You need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.

    Read the article

  • Treating differential operator as algebraic entity

    - by chappar
    I know that this question is off-topic and doesn't belong here, but I didn't know where else to ask. So here is the question. I was reading e: the Story of a Number by Eli Maor, where he treats the differential operator just like any algebraic entity. For example, if we have a differential equation like y'' + 5y' - 6y = 0, this can be treated as (D^2 + 5D - 6)y = 0. So either y = 0 (the trivial solution) or (D^2 + 5D - 6) = 0. Factoring the above equation we get (D-1)(D+6) = 0, with solutions D = 1 and D = -6. Since D does not have any meaning on its own, multiplying by y on both sides we get Dy = y and Dy = -6y, for which the solutions are Ae^x and Be^-6x. Combining these 2 solutions we get Ae^x + Be^-6x. Now my doubt is that this approach breaks when we have an equation like D^2y = 0, which means y = 0 (again trivial) or D^2 = 0, which means D = 0. Now Dy = y*0 = 0. That means y = C (a constant). The actual answer should be Cx. I know that it is silly to treat D^2 = 0 as D = 0, but it led me to doubt the entire process of treating a differential equation as an algebraic equation. Can someone throw light on this? Or point me to any other site where I might get an answer?
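
    A short worked note (not from the book) on why the repeated-root case needs care:

    ```latex
    % Treating D^2 y = 0 as "D = 0, hence y = C" silently drops one solution.
    % Integrating twice instead shows the solution space is two-dimensional:
    D^2 y = 0
      \;\Longleftrightarrow\; D(Dy) = 0
      \;\Longrightarrow\; Dy = C_2
      \;\Longrightarrow\; y = C_1 + C_2 x .
    % The same pattern holds for any repeated root r of the operator:
    % (D - r)^2 y = 0 has the general solution y = (C_1 + C_2 x)\, e^{r x}.
    ```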

    Read the article

  • Are .NET's regular expressions Turing complete?

    - by Robert
    Regular expressions are often pointed to as the classic example of a language that is not Turing complete. For example, "regular expressions" is given as the answer to this SO question looking for languages that are not Turing complete. In my, perhaps somewhat basic, understanding of the notion of Turing completeness, this means that regular expressions cannot be used to check for patterns that are "balanced". Balanced meaning they have an equal number of opening characters and closing characters. This is because doing so would require you to have some kind of state, to allow you to match the opening and closing characters. However, the .NET implementation of regular expressions introduces the notion of a balancing group. This construct is designed to let you backtrack and see if a previous group was matched. This means that the .NET regular expression: ^(?<p>a)*(?<-p>b)*(?(p)(?!))$ can match patterns such as: ab aabb aaabbb aaaabbbb ... and so on. Does this mean .NET's regular expressions are Turing complete? Or are there other things that are missing that would be required for the language to be Turing complete?
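
    A small C# illustration of the balancing-group pattern quoted above (only the test inputs are added here): it accepts exactly the a^n b^n strings and rejects unbalanced ones.

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    class BalancedGroupsDemo
    {
        static void Main()
        {
            // The pattern from the question: each 'a' pushes onto group p,
            // each 'b' pops, and (?(p)(?!)) fails if anything is left unpopped.
            var balanced = new Regex(@"^(?<p>a)*(?<-p>b)*(?(p)(?!))$");

            foreach (var input in new[] { "ab", "aabb", "aaabbb", "aab", "abb" })
            {
                // Prints True for the first three inputs, False for the unbalanced ones.
                Console.WriteLine("{0}: {1}", input, balanced.IsMatch(input));
            }
        }
    }
    ```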

    Read the article

  • A typical MySQL query (how to use a subquery column in the main query)

    - by I Like PHP
    I HAVE TWO TABLES shown below table_joining id join_id(PK) transfer_id(FK) unit_id transfer_date joining_date 1 j_1 t_1 u_1 2010-06-05 2010-03-05 2 j_2 t_2 u_3 2010-05-10 2010-03-10 3 j_3 t_3 u_6 2010-04-10 2010-01-01 4 j_5 NULL u_3 NULL 2010-06-05 5 j_6 NULL u_4 NULL 2010-05-05 table_transfer id transfer_id(PK) pastUnitId futureUnitId effective_transfer_date 1 t_1 u_3 u_1 2010-06-05 2 t_2 u_6 u_1 2010-05-10 3 t_3 u_5 u_3 2010-04-10 now i want to know total employee detalis( using join_id) which are currently working on unit u_3 . means i want only join_id j_1 (has transfered but effective_transfer_date is future date, right now in u_3) j_2 ( tansfered and right now in `u_3` bcoz effective_transfer_date has been passed) j_6 ( right now in `u_3` and never transfered) what i need to take care of below steps( as far as i know ) <1> first need to check from table_joining whether transfer_id is NULL or not <2> if transfer_id= is NULL then see unit_id=u_3 where joining_date <=CURDATE() ( means that person already joined u_3) <3> if transfer_id is NOT NULL then go to table_transfer using transfer_id (foreign key reference) <4> now see the effective_transfer_date regrading that transfer_id whether effective_transfer_date<=CURDATE() <5> if transfer date has been passed(means transfer has been done) then return futureUnitID otherwise return pastUnitID i used two separate query but don't know how to join those query?? for step <1 ans <2 SELECT unit_id FROM table_joining WHERE joining_date<=CURDATE() AND transfer_id IS NULL AND unit_id='u_3' for step<5 SELECT IF(effective_transfer_date <= CURDATE(),futureUnitId,pastUnitId) AS currentUnitID FROM table_transfer // here how do we select only those rows which have currentUnitID='u_3' ?? please guide me the process?? i m just confused with JOINS. i think using LEFT JOIN can return the data i need, or if we use subquery value to main query? but i m not getting how to implement ...please help me. Thanks for helping me alwayz

    Read the article

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using the Amazon Web Services, I'm not really able to grasp how to use them correctly. Sorry about the long question: I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the db on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow? My questions are: 1. Is my understanding of AWS correct? 2. Is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. Would love to have the scalability of AWS though...

    Read the article

  • #include - brackets vs quotes in XCode?

    - by Chris Becke
    In MSVC++, #include files are searched for differently depending on whether the file name is enclosed in "" or <>. The quoted form searches first in the local folder, then in /I specified locations; the angle bracket form skips the local folder. This means that in MSVC++ it's possible to have header files with the same name as runtime and SDK headers. So, for example, I need to wrap up the Windows SDK windows.h file to undefine some macros that cause trouble. With MSVS I can just add an (optional) windows.h file to my project, as long as I include it using the quoted form:
        // some .cpp file
        #include "windows.h"    // will include my local windows.h file
    And in my windows.h, I can pull in the real one using the angle bracket form:
        // my windows.h
        #include <windows.h>    // will load the real one
        #undef ConflictingSymbol
    Trying this trick with GCC in XCode didn't work: angle bracket #includes in system header files are in fact finding my header files with similar names in my local folder structure. The MSVC system means it's quite safe to have a "String.h" header file in my own folder structure. On XCode this seems to be a major no-no. Is there some way to control this search path behaviour in XCode to be more like MSVC's? Or do I just have to avoid naming any of my headers anything that might possibly conflict with a system header? Writing cross-platform code and using lots of frameworks means the possibility of incidental conflicts seems large.
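
    A sketch of the GCC-side equivalent of MSVC's behaviour, with placeholder paths: directories passed with -iquote are searched only for #include "..." forms, never for #include <...>, so a wrapper windows.h kept there cannot shadow the system <windows.h>.

    ```sh
    # Only #include "windows.h" will find ./compat_headers/windows.h;
    # #include <windows.h> still resolves to the SDK copy.
    gcc -c main.cpp -iquote ./compat_headers -I /path/to/other/includes
    ```

    In Xcode this roughly corresponds to putting the wrapper directory in the user header search paths rather than the general header search paths (and leaving "Always Search User Paths" off), though the exact setting names vary by Xcode version.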

    Read the article

  • pushd - handling multiple drives from cmd

    - by user673600
    I'm trying to figure out how to install some programs whose components reside on two different drives on a networked path. However, whenever I use pushd \\xyz\c$ I get a mapped drive, which means I cannot rely on the real paths, for example c:\install and e:\mycomponents.dll. Is there any way I can do this once I have used the pushd command? How can I ensure that I keep the drive letters the same, for example? I'm in the process of installing services, so it seems that when I install the service I need to keep the path the same as the actual location of the .exe, which means I'm running into issues. Is there a way to simply use pushd but at the same time not actually map drives? When installing services using net use, I've found there is an issue with installing on mapped drives: while the service can be installed, it doesn't find the actual .exe when it comes to starting up. So to expand on this, is there a way to solve this using net use or pushd, or a combination, that lets me install a service as such: c:\windows\..\installutil e:\mynode? So to clarify, I need to somehow be able to see both drives on the remote machine by their real drive letters, i.e. E:\ and C:\; if I use a mapped drive letter then installing the service is a pain because I cannot use the path.

    Read the article

  • Create a model that switches between two different states using Temporal Logic?

    - by NLed
    I'm trying to design a model that can manage different requests for different water sources. Platform: Mac OS X, using the latest Python with the TuLip module installed. Definitions: two water sources, w1 and w2; three different requests, r1, r2, and r3. Specifications: Water 1 (w1) is preferred, but w2 will be used if w1 is unavailable. Water 2 is only used if w1 is depleted. r1 has the maximum priority. If all entities request simultaneously, r1's supply must not fall below 50%. The water sources are not discrete but continuous, which will increase the difficulty of creating the model. I can do a crude discretization of the water levels, but I would prefer to find a model for the continuous state first. So how do I start doing that? Some of my thoughts: create a matrix W where w1, w2 ∈ W; create a matrix R where r1, r2, r3 ∈ R; or leave all variables singular without putting them in a matrix. I'm not an expert in coding, so that's why I need help. I'm not sure what the best way to start tackling this problem is. I am only interested in the model, or a code sample of how this can be put together. Edit: Now imagine I do a crude discretization of the water sources to have w1=[0...4] and w2=[0...4] for 0, 25, 50, 75, 100 percent respectively. == means implies. Usage of water sources: if w1[0]==w2[4] (meaning if water source 1 has 0%, then use 100% of water source 2), and likewise if w1[1]==w2[3], if w1[2]==w2[2], if w1[3]==w2[1], if w1[4]==w2[0]. r1=r2=r3=[0,1], where 0 means the request is OFF and 1 means the request is ON. Now what model can be designed that will give each request 100% water depending on the values of w1 and w2? (The w1 and w2 values are uncontrollable, so I cannot define specific values, but 0...4 is used for simplicity.)

    Read the article
