Search Results

Search found 6058 results on 243 pages for 'short film'.


  • Linker error: wants C++ virtual base class destructor

    - by jdmuys
    Hi, I have a link error where the linker complains that my concrete class's destructor is calling its abstract superclass destructor, the code of which is missing. This is using GCC 4.2 on Mac OS X from XCode. I saw http://stackoverflow.com/questions/307352/g-undefined-reference-to-typeinfo but it's not quite the same thing. Here is the linker error message: Undefined symbols: "ConnectionPool::~ConnectionPool()", referenced from: AlwaysConnectedConnectionZPool::~AlwaysConnectedConnectionZPool()in RKConnector.o ld: symbol(s) not found collect2: ld returned 1 exit status Here is the abstract base class declaration: class ConnectionPool { public: static ConnectionPool* newPool(std::string h, short p, std::string u, std::string pw, std::string b); virtual ~ConnectionPool() =0; virtual int keepAlive() =0; virtual int disconnect() =0; virtual sql::Connection * getConnection(char *compression_scheme = NULL) =0; virtual void releaseConnection(sql::Connection * theConnection) =0; }; Here is the concrete class declaration: class AlwaysConnectedConnectionZPool: public ConnectionPool { protected: <snip data members> public: AlwaysConnectedConnectionZPool(std::string h, short p, std::string u, std::string pw, std::string b); virtual ~AlwaysConnectedConnectionZPool(); virtual int keepAlive(); // will make sure the connection doesn't time out. Call regularly virtual int disconnect(); // disconnects/destroys all connections. virtual sql::Connection * getConnection(char *compression_scheme = NULL); virtual void releaseConnection(sql::Connection * theConnection); }; Needless to say, all those members are implemented. Here is the destructor: AlwaysConnectedConnectionZPool::~AlwaysConnectedConnectionZPool() { printf("AlwaysConnectedConnectionZPool destructor call"); // nothing to destruct in fact } and also maybe the factory routine: ConnectionPool* ConnectionPool::newPool(std::string h, short p, std::string u, std::string pw, std::string b) { return new AlwaysConnectedConnectionZPool(h, p, u, pw, b); } I can fix this by artificially making my abstract base class concrete. But I'd rather do something better. Any idea? Thanks
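
    The linker message itself points at the usual cause: a destructor declared pure virtual still needs a definition, because every derived destructor calls it directly. A minimal sketch of the conventional fix, offered editorially rather than taken from the thread, which keeps ConnectionPool abstract:

        // ConnectionPool.h - the destructor stays pure virtual, so the class stays abstract
        class ConnectionPool {
        public:
            virtual ~ConnectionPool() = 0;   // pure, but still needs a body somewhere
            // ... remaining pure virtual members as declared in the question ...
        };

        // ConnectionPool.cpp - the definition the linker is asking for
        ConnectionPool::~ConnectionPool() {}

    AlwaysConnectedConnectionZPool's destructor implicitly invokes ConnectionPool::~ConnectionPool(), which is why the symbol must exist even though the class remains abstract.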

    Read the article

  • Recursive powerof-function, see if you can solve it

    - by Jonas B
    First of all, this is not schoolwork - just my curiosity, as I for some reason can't get my head around it and solve it. I come up with these stupid things all the time and it annoys the hell out of me when I can't solve them. The code example is in C# but the solution doesn't have to be in any particular programming language. long powerofnum(short num, long powerof) { return powerofnum2(num, powerof, powerof); } long powerofnum2(short num, long powerof, long holder) { if (num == 1) return powerof; else { return powerof = powerofnum2(num - 1, holder * powerof, holder); } } As you can see I have two methods. I call powerofnum(value, powerofvalue), which then calls the next method with the powerof value also in a third parameter as a placeholder, so it remembers the original powerof value through the recursion. What I want to accomplish is to do this with only one method. I know I could just declare a variable in the first method with the powerof value to remember it and then iterate from 0 to the value of num. But as this is a theoretical question I want it done recursively. I could also in the first method just take a third parameter, called whatever, to store the value just like I do in the second method that is called by the first, but that looks really stupid. Why should you have to write what seems like the same parameter twice? Rules explained in short: no iteration, scope-specific variables only, only one method. Anyhow, I'd appreciate a clean solution. Good luck :)
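
    For reference, a minimal sketch of the single-method shape the question asks for (names are mine, and it assumes the goal is baseValue raised to a non-negative integer exponent): the recursion bottoms out at exponent 0, so no third placeholder parameter is needed.

        static long Power(long baseValue, int exponent)
        {
            // Base case: anything raised to the 0th power is 1.
            if (exponent == 0)
                return 1;

            // Recursive case: b^n = b * b^(n-1).
            return baseValue * Power(baseValue, exponent - 1);
        }

        // Usage: Power(2, 10) == 1024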

    Read the article

  • Access to map data

    - by herzl shemuelian
    I have a complex map defined as: typedef short short1; typedef short short2; typedef map<short1,short2> data_list; typedef map<string,data_list> table_list; I have a class that fills table_list: class GroupingClass { table_list m_table_list; string Buildkey(OD e1){ string ostring; ostring+=string(e1.m_Date,sizeof(Date)); ostring+=string(e1.m_CT,sizeof(CT)); ostring+=string(e1.m_PT,sizeof(PT)); return ostring; } void operator() (const map<short1,short2>::value_type& myPair) { OptionsDefine e1=myPair.second; string key=Buildkey(e1); m_table_list[key][e1.m_short2]=e1.m_short2; } operator table_list() { return m_table_list; } }; and I use it like this: table_list TL2; GroupingClass gc; TL2=for_each(mapOD.begin(), mapOD.end(), gc); But when I try to access the internal map I have problems. For example: data_list tmp; tmp=TL2["AAAA"]; short i=tmp[1]; // i is never updated. But if I loop over the map with an iterator it works properly. Why doesn't the first way work? Thanks, herzl
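
    A likely explanation, offered editorially: std::map::operator[] default-constructs and inserts an entry whenever the key is missing, so a lookup with a key that does not exactly match what Buildkey produced silently yields 0 instead of failing. A small sketch using find(), which makes a miss visible (typedefs as in the question; the literal key "AAAA" is only illustrative):

        table_list::iterator it = TL2.find("AAAA");      // does this key actually exist?
        if (it != TL2.end()) {
            data_list& tmp = it->second;
            data_list::iterator dit = tmp.find(1);       // and the inner key?
            if (dit != tmp.end()) {
                short i = dit->second;                   // read only when both lookups hit
            }
        }

    Iterating works because it only visits entries that are really present, which matches the behaviour described.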

    Read the article

  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, We've got a process currently which causes ASP.NET websites to be redeployed. The code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. This has in the past been intermittent to the point it didn't matter but on some of our higher traffic sites it happens the majority of the time now. I'm wondering if anyone has a workaround or alternative approach to this that I haven't thought of. Currently my ideas are: Simply retry each file until it works. That's going to cause errors for a short time though which isn't really that good. Deploy to a new folder and update IIS's webroot to the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy. Does anyone know what the best way to do this is, or if it's possible to do #2 without running the publishing application as a user who has admin access (Willing to grant it special privileges, but I'd prefer to stop short of administrator)?
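
    One workaround that does not need administrator rights, offered as an editorial sketch rather than part of the question: drop an App_Offline.htm file into the web root before copying. ASP.NET unloads the application domain while that file exists, releasing the locks the worker process holds, and deleting the file brings the site back. Folder names below are assumptions:

        using System.IO;

        static void Redeploy(string stagingFolder, string webRoot)
        {
            string offline = Path.Combine(webRoot, "App_Offline.htm");
            File.WriteAllText(offline, "<html><body>Updating, back in a moment.</body></html>");
            try
            {
                // With the app domain unloaded, the copy no longer races the worker process.
                foreach (string source in Directory.GetFiles(stagingFolder, "*", SearchOption.AllDirectories))
                {
                    string relative = source.Substring(stagingFolder.Length + 1);
                    string target = Path.Combine(webRoot, relative);
                    Directory.CreateDirectory(Path.GetDirectoryName(target));
                    File.Copy(source, target, true);
                }
            }
            finally
            {
                File.Delete(offline);   // bring the site back online
            }
        }

    This needs only the write access the existing copy loop already has; a short retry around File.Copy is still worth keeping for anything the trick does not cover.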

    Read the article

  • Generic that takes only numeric types (int, double, etc.)?

    - by brandon
    In a program I'm working on, I need to write a function to take any numeric type (int, short, long etc) and shove it into a byte array at a specific offset. There exists a BitConverter.GetBytes() method that takes the numeric type and returns it as a byte array, and this method only takes numeric types. So far I have: private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct { Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd)); } So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I would want a number that is small enough to be a short to really take up 4 bytes. On the flip side, there are times when I want an int to be crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads. If the where clause of a generic supported something like "where T : int || long || etc" I would be ok. (And no need to explain why they don't support that, the reason is fairly obvious) Any help would be greatly appreciated! Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)
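
    One hedged alternative to a pile of overloads (names and the little-endian assumption are mine, not from the question): widen the value to long once, then copy only as many low-order bytes as the caller asks for. That also covers the "int crunched down to a single byte" case directly.

        using System;

        static void AddToByteArray(byte[] destination, int offset, long value, int byteCount)
        {
            byte[] bytes = BitConverter.GetBytes(value);   // always 8 bytes
            if (!BitConverter.IsLittleEndian)
                Array.Reverse(bytes);                      // put the low-order bytes first
            Buffer.BlockCopy(bytes, 0, destination, offset, byteCount);
        }

        // Usage, matching the question's example: write 10 as two bytes at index 3.
        // AddToByteArray(array, 3, 10, 2);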

    Read the article

  • AudioRecord - empty buffer

    - by Arxas
    I'm trying to record some audio using the AudioRecord class. Here is my code: int audioSource = AudioSource.MIC; int sampleRateInHz = 44100; int channelConfig = AudioFormat.CHANNEL_IN_MONO; int audioFormat = AudioFormat.ENCODING_PCM_16BIT; int bufferSizeInShorts = 44100; int bufferSizeInBytes = 2*bufferSizeInShorts; short Data[] = new short[bufferSizeInShorts]; Thread recordingThread; AudioRecord audioRecorder = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes); @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.activity_main, menu); return true; } public void startRecording(View arg0) { audioRecorder.startRecording(); recordingThread = new Thread(new Runnable() { public void run() { while (Data[bufferSizeInShorts-1] == 0) audioRecorder.read(Data, 0, bufferSizeInShorts); } }); audioRecorder.stop(); } Unfortunately my short array is empty after the recording is over. May I kindly ask you to help me figure out what's wrong?
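
    Two things stand out, as an editorial observation rather than an answer from the thread: the recording thread is created but never started, and audioRecorder.stop() runs immediately afterwards, so read() never gets a chance to fill the buffer. A sketch of the method reworked under those assumptions (field names are the question's own):

        public void startRecording(View arg0) {
            audioRecorder.startRecording();
            recordingThread = new Thread(new Runnable() {
                public void run() {
                    int offset = 0;
                    // read() returns the number of shorts delivered, or a negative error code.
                    while (offset < bufferSizeInShorts) {
                        int read = audioRecorder.read(Data, offset, bufferSizeInShorts - offset);
                        if (read <= 0) break;
                        offset += read;
                    }
                    audioRecorder.stop();   // stop only after reading has finished
                }
            });
            recordingThread.start();        // this call was missing in the original
        }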

    Read the article

  • Adding Variables to JavaScript in Joomla

    - by Vikram
    Hello friends! This is the script I am using to display an accordion in my Joomla site: <?php defined('JPATH_BASE') or die(); gantry_import('core.gantryfeature'); class GantryFeatureAccordion extends GantryFeature { var $_feature_name = 'accordion'; function init() { global $gantry; if ($this->get('enabled')) { $gantry->addScript('accordion.js'); $gantry->addInlineScript($this->_accordion()); } } function render($position="") { ob_start(); ?> <div id="accordion"> <dl> <?php foreach (glob("templates/rt_gantry_j15/features/accordion/*.php") as $filename) {include($filename);} ?> </dl> </div> <?php return ob_get_clean(); } function _accordion() { global $gantry; $js = " jQuery.noConflict(); (function($){ $(document).ready(function () { $('#accordion').easyAccordion({ slideNum: true, autoStart: true, slideInterval: 4000 }); }); })(jQuery); "; return $js; } } I want to make these three values user input, read from the templateDetails.xml file: slideNum: true, autoStart: true, slideInterval: 4000 Like this in the templateDetails.xml file: <param name="accordion" type="chain" label="ACCORDION" description="ACCORDION_DESC"> <param name="slideNum" type="text" default="true" label="Offset Y" class="text-short" /> <param name="autoStart" type="text" default="true" label="Offset Y" class="text-short" /> <param name="autoStart" type="text" default="4000" label="Offset Y" class="text-short" /> </param> How can I do so? What would be the exact syntax for this? I am very new to programming and especially to JavaScript. Kindly help.
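
    Two editorial notes before a sketch: the third <param> in the XML snippet repeats name="autoStart" where it presumably should be slideInterval, and the labels still read "Offset Y". Assuming Gantry exposes chained feature parameters through the same get() accessor the feature already uses for 'enabled' (worth verifying against your Gantry version), _accordion() could pick the values up like this:

        function _accordion() {
            global $gantry;

            // Assumption: chain params are readable via $this->get('<param name>').
            $slideNum      = $this->get('slideNum');
            $autoStart     = $this->get('autoStart');
            $slideInterval = (int) $this->get('slideInterval');

            $js = "
            jQuery.noConflict();
            (function($){
                $(document).ready(function () {
                    $('#accordion').easyAccordion({
                        slideNum: {$slideNum},
                        autoStart: {$autoStart},
                        slideInterval: {$slideInterval}
                    });
                });
            })(jQuery);
            ";
            return $js;
        }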

    Read the article

  • IE textarea wrap bug?

    - by user2227033
    It seems that IE, starting from IE7 up to IE10, wraps text in the textarea control incorrectly when using \n (or \r\n - doesn't matter - results are the same). Is this a bug in IE, or do they treat the HTML standard differently than other browsers - who is right? I have defined: <textarea id="TextArea1" runat="server" style="width: 190px; height: 390px; white-space: normal; word-wrap: normal; overflow: scroll" ></textarea> When I try to add a long string like "VeryLongStringEndingWithNewLine\n" using JavaScript code (obj.value += text;), the text is shown on one line with a scrollbar (this is ok) but with an additional empty line (\r\n) added - why? When I try to add a short string like "Short\n" multiple times, again via JavaScript code, the text stays on the same line (it should be on separate lines because normal wrapping should be applied). Moreover, when I do a postback, all the \r\n's are replaced with spaces (why?) and then the text is parsed correctly (assuming that if I had used spaces instead of CRLF, normal wrapping with spaces only wraps when the text does not fit in the area). When using FF or Chrome the same control behaves correctly - long lines are shown without an additional empty next line, short lines are on different lines, and there is no replacement with spaces when doing a postback. I know I could probably use other options or whitespace characters, but I feel that the above is not correct in IE. Any comments? Mindaugas

    Read the article

  • Ruby & ActiveRecord: referring to integer fields by (uniquely mapped) strings

    - by JP
    While it's not my application, a simple way to explain my problem is to assume I'm running a URL shortener. Rather than attempt to figure out what the next string I should use as the unique section of the URL is, I just index all my URLs by integer and map the numbers to strings behind the scenes, essentially just changing the base of the number to, let's say, 62: a-z + A-Z + 0-9. In ActiveRecord I can easily alter the reader for the url_id field so that it returns my base 62 string instead of the number being stored in the database: class Short < ActiveRecord::Base def url_id i = read_attribute(:convo) return '0' if i == 0 s = '' while i > 0 s << CHARS[i.modulo(62)] i /= 62 end s end end but is there a way to tell ActiveRecord to accept Short.find(:first,:conditions=>{:url_id=>'Ab7'}), i.e. putting the 'decoding' logic into my Short ActiveRecord class? I guess I could define my own def self.find_by_unique_string(string), but that feels like cheating somehow! Thanks!
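
    Despite feeling like cheating, a small class-level finder plus a decoder keeps the logic inside the model without overriding ActiveRecord's own find. A sketch (the decoder mirrors the reader above, which emits the least significant digit first; CHARS is assumed to be the a-z + A-Z + 0-9 array described in the prose):

        class Short < ActiveRecord::Base
          CHARS = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a  # may already exist

          # Inverse of url_id: position 0 in the string carries 62**0.
          def self.decode_url_id(string)
            total = 0
            string.split('').each_with_index do |char, power|
              total += CHARS.index(char) * (62 ** power)
            end
            total
          end

          def self.find_by_unique_string(string)
            find(:first, :conditions => { :url_id => decode_url_id(string) })
          end
        end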

    Read the article

  • Custom stream wrappers, what could they be useful for in web applications?

    - by michael
    I suppose the concept is language agnostic, but I don't know what it's called in other languages. In PHP they're Stream Wrappers. In short, a wrapper class that allows manipulation of a streamable resource (a resource that can be read from/written to/seeked into, such as a file, a DB, a URL). For example, in a template engine (a view), upon including a template file such as: include "view.wrapper://path/to/my/template/file.phtml"; my custom wrapper, declared elsewhere and associated with "view.wrapper", would first intercept the file to replace such things as short tags (<?=) with a more verbose counterpart (<?php echo). This allows developers to use short tags in views, even if the server isn't set to allow it. It can also be applied to the preprocessing of view pseudo-syntax such as {@myVar} (e.g. replacing it with $this->myVar). This is only one application of custom stream wrappers, but the feature seems powerful enough to make me think that there are others that could make life a lot simpler for developers. What have you built, or thought about building, custom stream wrappers for? Where have you seen some interesting implementations? I'm particularly interested in their applications in web development.
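
    For readers who have not used them, a minimal sketch of registering such a wrapper (the scheme name and the short-tag rewrite are illustrative): once registered, any include or fopen on that scheme is routed through the class's stream_* methods.

        class ViewWrapper {
            private $data;
            private $pos = 0;

            public function stream_open($path, $mode, $options, &$opened_path) {
                // Strip the scheme, load the real template, rewrite short tags.
                $file = substr($path, strlen('view.wrapper://'));
                $contents = file_get_contents($file);
                if ($contents === false) {
                    return false;
                }
                $this->data = str_replace('<?=', '<?php echo ', $contents);
                return true;
            }

            public function stream_read($count) {
                $chunk = substr($this->data, $this->pos, $count);
                $this->pos += strlen($chunk);
                return $chunk;
            }

            public function stream_eof()  { return $this->pos >= strlen($this->data); }
            public function stream_stat() { return array(); }
        }

        stream_wrapper_register('view.wrapper', 'ViewWrapper');
        // include "view.wrapper://path/to/my/template/file.phtml";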

    Read the article

  • 3.5 mm component video jack -> iPod female connection?

    - by Jigs
    At my gym the treadmills all have male iPod cables hanging out of them so that you can plug in a video iPod and play a video directly to the screen on the treadmill. I own a non-Apple MP4 player. Is there an adapter that will go from a 3.5 mm component video jack to a female iPod connector that will allow me to watch a film on the screen?

    Read the article

  • Pause cues in Windows Media Video?

    - by thomas
    Is there a way to make pause cues in Windows Media Video, like sprites in QuickTime? I want to be able to run a WMV file in Windows Media Player that stops automatically on a text, then I click and the film starts again and goes on to the next text and stops, and so on.

    Read the article

  • Can you disable the light up buttons on the HP HDX series laptops?

    - by Connor W
    I'm interested in buying a laptop from the HP HDX series, but I have one concern. As you can see below, they have touch-sensitive buttons above the keyboard which are lit up. I can't help but think how distracting they would be if you were watching a film on it. So does anyone know if it's possible to turn these lights off? And to any owners of this laptop, do you find it distracting? Thanks

    Read the article

  • Windows 7 internet speed significantly slower than Ubuntu

    - by Infestor
    I have Windows 7 x64 and Ubuntu 10.04 installed on my machine. While I download at ~15 MiB/sec in Ubuntu (reached almost instantly), it takes almost a minute or two for Win7 to reach that speed while downloading, say, a film. Also, the connection becomes unresponsive for some periods in Win7. I have experienced this in several Win7 installations (always x64 Professional, same version).

    Read the article

  • Codec Pack that can easily be deployed via group policy

    - by testguy
    We have a teacher who has a project for doing some basic film editing with Windows Movie Maker. We loaded the AVI file onto the computer and Windows tries to install a codec but can't. I assume I need to install some type of codec pack. I'm looking for suggestions on a codec pack that I can easily deploy through a Win2003 server to WinXP clients. Ideally, this codec pack shouldn't break anything else and should be easily removed if need be.

    Read the article

  • Custom video icon for a single video file in Windows 7 file explorer

    - by MrBrody
    Recently I found a video on the net (an .mp4 file), and when I had it on my computer with Windows 7, I noticed its thumbnail was not the usual Windows 7 video thumbnail (which looks like a piece of film with a random picture from the movie), but a custom thumbnail! Looking in the file properties did not help me find the correct button to change the thumbnail... so I just wonder how it was done! Here is a picture: left: the custom thumbnail, right: the usual thumbnail...

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it's lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for long. That's why everybody in a restaurant kitchen is constantly seeing after cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency be constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment: Clear expectations: Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like.
    But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, “It'll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You'll get it by tomorrow night at the latest.” Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability: Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by means of material good production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need, we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches of three different sizes. Fast feedback: What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end?
    Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues. Because as long as work started is not released (accepted) it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility: As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, every 32 hours at the latest priorities can be changed. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time. Premature completion: Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, and time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend to strive for. Rather I'd say: Don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
    With Spinning, every 32 hours a feature can be declared as finished - even though it might not be completed according to the initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see if a particular trait of the software system is already good enough. And if so, she can switch the attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need? Pull on practices: So far, in addition to more trust, more flexibility, less money spent, Spinning led to “doing less”, which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner and then management will notice an increasing difficulty to deliver value within 32 hours.
    There, finally, there emerges a way to measure Evolvability: The more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the “WTF!”, count the “No way!” utterances. In closing: For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and “getting more bang for the buck”.

    Read the article

  • Configure SpamAssassin to delete all spam above a score domain-wide and override individual settings

    - by Marlon
    Okay, so this is my scenario and what I want to try to do. I maintain a Red Hat email server running qmail and SpamAssassin. I have a domain that has well over 100 email accounts, each with individual settings for spam scores and whether or not to delete incoming mail deemed spam. What I want to accomplish is to change all those email accounts to, say, a more stringent spam score value, AND to enable the deletion of email immediately as it is flagged as such, for EACH AND EVERY mailbox. In short, I want to be able to override a user's individual spam settings with my own. Short of tediously going into each and every mailbox one by one, is there a way to do this all in one fell swoop? Any advice would be greatly appreciated! :-)
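
    A hedged sketch of the override half (file locations vary by build, and the delete-on-flag step depends on how qmail hands mail to SpamAssassin on this box, so it is not shown): site-wide settings live in local.cf, and spamd can be told to ignore per-user preferences entirely.

        # /etc/mail/spamassassin/local.cf - applies to every account on the box
        required_score 4.0        # the stricter site-wide threshold

        # Start spamd without per-user configuration, so ~/.spamassassin/user_prefs
        # can no longer override the site-wide values:
        spamd -d --nouser-config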

    Read the article

  • How to use a custom .bashrc file on SSH login

    - by gsingh2011
    I've found that with the new company I'm working with I often have to access Linux servers with relatively short lifetimes. On each of these servers I have an account, but whenever a new one is created, I have to go through the hassle of transferring over my .bashrc. It's possible however that in about a month's time that server won't be around anymore. I also have to access many other servers for short periods of time (minutes) where it's just not worth it to transfer over my .bashrc, but since I'm working on a lot of servers, this adds up to a lot of wasted time. I don't want to change anything on the servers, but I was wondering if there was a way to have a "per-connection" .bashrc, so whenever I would SSH to a server my settings would be used for that session. If this is possible, it would be nice if I could do the same thing with other configuration files, like gitconfig files.
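
    One low-footprint pattern, sketched with file names of my own choosing: keep the personal rc file locally, copy it to a throwaway path on the target, and tell bash to read it for that session only, so nothing on the server changes permanently.

        # Copy the personal rc file somewhere disposable and start an interactive
        # shell that reads it instead of the account's own ~/.bashrc:
        scp ~/.bashrc.portable user@server:/tmp/my.bashrc
        ssh -t user@server "bash --rcfile /tmp/my.bashrc -i"

    The same idea extends to other tools that accept an alternate config or per-invocation settings, e.g. git -c key=value.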

    Read the article

  • AT91SAM7X512's SPI peripheral gets disabled on write to SPI_TDR

    - by Dor
    My AT91SAM7X512's SPI peripheral gets disabled on the Xth time (X varies) that I write to SPI_TDR. As a result, the processor hangs on the while loop that checks the TDRE flag in SPI_SR. This while loop is located in the function SPI_Write(), which belongs to the software package/library provided by ATMEL. The problem occurs arbitrarily - sometimes everything works OK and sometimes it fails on repeated attempts (attempt = downloading the same binary to the MCU and running the program). Configurations are (defined in the order of writing): SPI_MR: MSTR = 1 PS = 0 PCSDEC = 0 PCS = 0111 DLYBCS = 0 SPI_CSR[3]: CPOL = 0 NCPHA = 1 CSAAT = 0 BITS = 0000 SCBR = 20 DLYBS = 0 DLYBCT = 0 SPI_CR: SPIEN = 1 After setting the configurations, the code verifies that the SPI is enabled by checking the SPIENS flag. I perform a transmission of bytes as follows: const short int dataSize = 5; // Filling array with random data unsigned char data[dataSize] = {0xA5, 0x34, 0x12, 0x00, 0xFF}; short int i = 0; volatile unsigned short dummyRead; SetCS3(); // NPCS3 == PIOA15 while(i-- < dataSize) { mySPI_Write(data[i]); while((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0); dummyRead = SPI_Read(); // SPI_Read() from Atmel's library } ClearCS3(); /**********************************/ void mySPI_Write(unsigned char data) { while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0); AT91C_BASE_SPI0->SPI_TDR = data; while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TDRE) == 0); // <-- This is where // the processor hangs, because the SPI peripheral is disabled // (SPIENS equals 0), which makes TDRE equal to 0 forever. } Questions: What's causing the SPI peripheral to become disabled on the write to SPI_TDR? Should I un-comment the line in SPI_Write() that reads the SPI_RDR register? That is, the 4th line in the following code (the 4th line is originally marked as a comment): void SPI_Write(AT91S_SPI *spi, unsigned int npcs, unsigned short data) { // Discard contents of RDR register //volatile unsigned int discard = spi->SPI_RDR; /* Send data */ while ((spi->SPI_SR & AT91C_SPI_TXEMPTY) == 0); spi->SPI_TDR = data | SPI_PCS(npcs); while ((spi->SPI_SR & AT91C_SPI_TDRE) == 0); } Is there something wrong with the code above that transmits 5 bytes of data? Please note: The NPCS line num. 3 is a GPIO line (that is, in PIO mode) and is not controlled by the SPI controller. I'm controlling this line myself in the code, by de/asserting the ChipSelect#3 (NPCS3) pin when needed. The reason I'm doing so is that problems occurred while trying to let the SPI controller control this pin. I didn't reset the SPI peripheral twice, because the errata says to reset it twice only if I perform a software reset - which I don't do. Quoting the errata: If a software reset (SWRST in the SPI Control Register) is performed, the SPI may not work properly (the clock is enabled before the chip select.) Problem Fix/Workaround The SPI Control Register field, SWRST (Software Reset) needs to be written twice to be correctly set. I noticed that sometimes, if I put a delay before the write to the SPI_TDR register (in SPI_Write()), then the code works perfectly and the communication succeeds. Useful links: AT91SAM7X Series Preliminary.pdf ATMEL software package/library spi.c from Atmel's library spi.h from Atmel's library An example of initializing the SPI and performing a transfer of 5 bytes is highly appreciated and helpful.
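
    One thing worth ruling out, offered editorially rather than from the thread: the SPI_MR settings listed above leave mode fault detection enabled (MODFDIS = 0). On the AT91SAM7 parts, a low level seen on NPCS0/NSS while in master mode then raises a mode fault and the controller disables itself, which would clear SPIENS exactly as described, especially with the chip selects handled as PIO. A minimal sketch, using the bit names from Atmel's AT91SAM7X headers (verify against your library version):

        /* Hedged sketch: disable mode fault detection so a low NPCS0/NSS line
         * cannot silently shut the SPI down. AT91C_SPI_MODFDIS is the MODFDIS
         * bit of SPI_MR in Atmel's headers. */
        AT91C_BASE_SPI0->SPI_MR |= AT91C_SPI_MODFDIS;               /* add to the existing SPI_MR setup */
        AT91C_BASE_SPI0->SPI_CR  = AT91C_SPI_SPIEN;                 /* (re)enable the peripheral        */
        while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_SPIENS) == 0);  /* wait until SPIENS reads 1        */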

    Read the article
