Search Results

Search found 32728 results on 1310 pages for 'custom software'.


  • GNU General Public License (v2): can a company use the licensed software for free?

    - by EOL
    When a library is released under the GPL v2, can a company use it internally for free? If they develop software based on it, do they have to release it under the GPL, even if they don't distribute it? Can they make money by using (not distributing) internally developed software that links to the GPL'ed library, without any compensation for the author? I am looking for a software license that only allows non-commercial uses (copy, modify, link to); the resulting derived programs must also be free for non-commercial uses. Is there any software license that does this for non-commercial uses and prevents any commercial use (including using the software in order to make money)? It looks like the Creative Commons licenses are flexible enough to do something close to that, but I've read advice against using them for software. What do you think?

    Read the article

  • The theory of evolution applied to software

    - by Michel Grootjans
    I recently realized the many parallels you can draw between the theory of evolution and evolving software. Evolution is not the proverbial million monkeys typing on a million typewriters, where one of them comes up with the complete works of Shakespeare. We would have noticed by now, since the proverbial monkeys are now blogging on the Internet ;-)

    One of the main ideas of the theory of evolution is the balance between random mutations and natural selection. Random mutations happen all the time: millions of mutations over millions of years. Most of them are totally useless. Some of them are beneficial to the evolved species. Natural selection favors the beneficially mutated species. Less beneficial mutations die off. The mutated rabbit doesn't have to be faster than the fox. It just has to be faster than the other rabbits.

    Theory of evolution: Random mutations happen all the time. Most of these mutations are so bad that the new species dies off or cannot reproduce.
    Evolving software: Developers write new code all the time. New ideas come up during the act of writing software. The really bad ones don't get past the stage of an idea. The bad ones don't get committed to source control.

    Theory of evolution: Natural selection favors the beneficially mutated species.
    Evolving software: Good ideas and new code get discussed in the group during informal peer review. Less-than-good code gets refactored. Enhanced code is more readable, maintainable...

    Theory of evolution: A good set of traits makes the species superior to others. It becomes widespread.
    Evolving software: A good design tends to make it easier to add new features, easier to understand the current implementation, easier to optimize for performance... thus superior. The best designs get carried over from project to project. They appear in blogs, articles and books about principles, patterns and practices.

    Of course the act of writing software is deliberate. This can hardly be called random mutation. Though it sometimes might seem that code evolves through a will of its own ;-)

    Does this mean that evolving software (evolution) is better than a big design up front (creationism)? Not necessarily. It's a false idea to think that a project starts from scratch and everything evolves from there. Everyone carries his experience of what works and what doesn't. Up-front design is necessary, but is best kept simple and minimal, just enough to get you started. Let the good experiences and ideas help to drive the process, whether they come from you or from others, from past experience or from the most junior developer on your team. Once again, balance is the keyword. Balance design up front with evolution on a daily basis. How do you know what balance is right? Through your own experience of what worked and what didn't (here's evolution again).

    Notes: The evolution of software can quickly degenerate without discipline. TDD is a discipline that leaves little to chance on that part. Write your test to describe the new behavior. Write just enough code to make it behave as specified. Refactor to evolve the code to a higher standard. The responsibility of good design rests continuously on each developer's shoulders. Promiscuous pair programming helps quickly spread the design to the whole team.
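
    A minimal sketch of that test-first rhythm in Python (illustrative only, not from the original post; the slugify function and its tests are invented for the example): the test describes the new behavior, just enough code makes it pass, and refactoring follows with the test as a safety net.

        import unittest

        # Step 1: describe the new behavior in a test (it fails first - "red").
        class TestSlugify(unittest.TestCase):
            def test_replaces_spaces_with_dashes(self):
                self.assertEqual(slugify("hello brave world"), "hello-brave-world")

            def test_lowercases_the_result(self):
                self.assertEqual(slugify("Hello World"), "hello-world")

        # Step 2: write just enough code to make the tests pass ("green").
        # Step 3: refactor freely; the tests guard against regressions.
        def slugify(text: str) -> str:
            return "-".join(text.lower().split())

        if __name__ == "__main__":
            unittest.main()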

    Read the article

  • Mac OS X Server 10.6 - Apple's software mirrored RAID worth it?

    - by Arko
    Hi, I am installing an Intel Xserve (quad-core Xeon) with Snow Leopard Server (10.6) on two 80 GB 7200 rpm SATA HDs. I created a mirrored RAID set using Disk Utility with those two drives, and all went fine. I was then asking myself if this is really a good idea. I know that a hardware RAID system would be better, but what about this software RAID? Do you have any feedback on this? Will it work fine if one HD breaks down? Does this affect performance? [UPDATE] In short: hardware RAID is better than software RAID, which is better than none. Thank you all for the answers, they were very helpful, especially Gordon's script to monitor failures, since Apple's software RAID is pretty silent about a drive failure.
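
    A minimal sketch of such a failure monitor (an assumption for illustration, not Gordon's actual script): it parses the output of diskutil appleRAID list, which reports the members and status of Apple software RAID sets, and exits non-zero when a member looks unhealthy, so cron or launchd can mail you the output.

        import subprocess
        import sys

        def raid_status() -> str:
            # 'diskutil appleRAID list' prints the members and status of Apple software RAID sets.
            return subprocess.run(
                ["diskutil", "appleRAID", "list"],
                capture_output=True, text=True, check=True
            ).stdout

        def main() -> int:
            output = raid_status()
            # Crude check: flag members that are not reported as healthy/online.
            bad = [line for line in output.splitlines()
                   if "Failed" in line or "Missing" in line or "Degraded" in line]
            if bad:
                print("RAID problem detected:")
                print("\n".join(bad))
                return 1  # non-zero exit lets cron/launchd mail you the output
            return 0

        if __name__ == "__main__":
            sys.exit(main())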

    Read the article

  • Any screen capture software that captures webcam and microphone inputs too?

    - by mohanr
    I am going to conduct a user study. Apart from capturing the screen while the user is interacting with the system, I also want to capture video/audio of the user. Is there any software that, in addition to capturing the screen, also overlays it with the webcam/microphone inputs? The goal is to capture the complete experience of the user: key/mouse interactions with the system along with their facial/vocal responses. I know that I could run screen-capture software and separate software for capturing webcam audio/video alongside it, then try to sync/overlay both streams with timestamps. But I am going to be dealing with probably several hundred hours of data, so I am looking for a tool that can streamline the process for me as much as possible and help me keep my sanity at the end of it. Thanks,

    Read the article

  • Linux Software RAID: How to fsck on hard drive?

    - by Rick-Rainer Ludwig
    We have a Linux server running with software RAID1. We see some issues in /var/log/messages like "unreadable sector". I want to perform a complete fsck on the drive to get more information, but fsck /dev/md0 comes back clean because of the software RAID layer in between. How can I check the real hard drive? Do I need to disassemble the whole RAID? How do I deal with the inconsistency in the partition due to the additional software RAID header? Does anyone have a good idea for this?
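
    A hedged sketch of the usual approach (illustrative assumptions, not a verified answer from the thread): fsck on /dev/md0 only sees the filesystem, so the physical members are checked below the md layer, e.g. with SMART data per disk and md's own consistency check. The device names are examples; this needs to run as root and smartmontools/mdadm must be installed.

        import subprocess

        MEMBERS = ["/dev/sda1", "/dev/sdb1"]   # example member partitions of /dev/md0

        # 1. SMART health and error log of each underlying disk.
        for dev in MEMBERS:
            disk = dev.rstrip("0123456789")    # /dev/sda1 -> /dev/sda
            subprocess.run(["smartctl", "-H", "-l", "error", disk], check=False)

        # 2. Show the RAID's own view of its members.
        subprocess.run(["mdadm", "--detail", "/dev/md0"], check=False)

        # 3. Ask the md layer to verify the mirror without unmounting anything.
        with open("/sys/block/md0/md/sync_action", "w") as f:
            f.write("check\n")                 # progress appears in /proc/mdstat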

    Read the article

  • Is there any automatic Windows software to check the status of a website?

    - by user59280
    Is there any automatic Windows software application that will check the status of a website and alert me through mail or a message, or trigger an alarm? Example: say I am waiting to buy tickets for a new movie online, and the start of online booking has not been announced properly (it opens at a random time). In this situation I would be forced to sit at my PC constantly to get the tickets. To avoid such a situation, can you suggest software that will alert me when online booking is open? Can anyone please help me?
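
    Not a ready-made Windows product, but a minimal Python sketch of the idea (the URL, e-mail addresses and SMTP server are placeholders): poll the booking page every few minutes and e-mail yourself as soon as it responds successfully.

        import smtplib
        import time
        import urllib.request
        from email.message import EmailMessage

        URL = "http://example.com/booking"     # placeholder booking page
        CHECK_EVERY = 300                      # seconds between polls

        def page_is_up(url: str) -> bool:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status == 200
            except Exception:
                return False

        def send_alert() -> None:
            msg = EmailMessage()
            msg["Subject"] = "Booking page is open"
            msg["From"] = "alerts@example.com"           # placeholder addresses
            msg["To"] = "me@example.com"
            msg.set_content(f"{URL} is now responding.")
            with smtplib.SMTP("smtp.example.com") as smtp:   # placeholder SMTP server
                smtp.send_message(msg)

        while True:
            if page_is_up(URL):
                send_alert()
                break
            time.sleep(CHECK_EVERY)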

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx

    The easier something is to measure, the more likely it is to be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no wonder our software looks the way it does. It has all the traits whose conformance with requirements can easily be measured. And it's lacking the traits which cannot easily be measured.

    Evolvability (or Changeability) is such a trait. Whether an operation is correct, or whether it is fast enough, can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity, or test coverage. But there is no threshold value signalling “evolvability too low”; also, Evolvability is hardly tangible for the customer.

    Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more: because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built.

    Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more. Instead we need to establish practices to keep it high (enough) at all times.

    Chefs have known this for a long time. That's why everybody in a restaurant kitchen constantly sees to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess.

    What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance-checked by the customer (or his/her representative, e.g. a Product Owner).

    For me there are several reasons for such a fixed and short cycle time for each increment:

    Clear expectations

    Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are the pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like.
    But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer. But if push comes to shove, I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow.

    Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take to implement a user story or bug fix, you can say “It'll be fixed by noon”, or “I can manage to implement it by tonight before I leave”, or “You'll get it by tomorrow night at the latest”. Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34h from now), you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue.

    So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories.

    If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues.

    To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

    Trust through reliability

    Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by the means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need; we just have mediated hypotheses.

    That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt, promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always.

    Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches of three different sizes.

    Fast feedback

    What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end?
    Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again.

    Learning is harder the longer it takes from event to feedback. Effort can be wasted between the event (finishing an issue) and the feedback, because other work might go in the wrong direction based on false premises.

    Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues. Because as long as work started is not released (accepted), it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process.

    Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue.

    Flexibility

    As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D.

    With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty.

    I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.

    Premature completion

    Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team that can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend to strive for. Rather I'd say: don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
    With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases).

    Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C could then be finished at just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% of the money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need.

    Pull on practices

    So far, in addition to more trust, more flexibility, and less money spent, Spinning led to “doing less”, which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability.

    If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good-enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger.

    With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices, because you very quickly realize you cannot deliver reliably even on your 32-hour promises without them. Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner and then management will notice an increasing difficulty to deliver value within 32 hours.
    There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback by tomorrow night, the poorer Evolvability is. Don't count the “WTF!”s, count the “No way!” utterances.

    In closing

    For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development, but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment; it's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales.

    Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not on what might be possible in the future. But I digress…

    In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and “getting more bang for the buck”.

    Read the article

  • Custom Checkbox style for WPF

    - by lorddarq
    I would like to skin a default WPF checkbox into something custom. Since it does not really make sense to start an entirely new control, I'd like to override the Windows Chrome template binding for the BulletChrome sub-component of the checkbox. However, I cannot do this the way I can with checkboxes, for example. I tried using something like this to override the default style, but it does not compile like this:

        <Style x:Key="cb_BULLETSTYLE" TargetType="{x:Type BulletDecorator}">
            <Setter Property="OverridesDefaultStyle" Value="true"/>
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="{x:Type BulletDecorator}">
                        <Border x:Name="Chrome" SnapsToDevicePixels="true" BorderBrush="{x:Null}" BorderThickness="0">
                        </Border>
                        <ControlTemplate.Triggers>
                            <Trigger Property="IsMouseOver" Value="True">
                                <Setter Property="Background" TargetName="Chrome" Value="#FF4C4C4C"/>
                            </Trigger>
                        </ControlTemplate.Triggers>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    Read the article

  • JQuery Ajax error handling, show custom exception messages

    Hey folks, I am wondering if there is some way I can show custom exception messages as an alert in my jQuery ajax error handler. For example, if I throw an exception on the server side via Struts with throw new ApplicationException("User name already exists");, I want to catch this message ("User name already exists") in the jQuery ajax error callback.

        jQuery("#save").click(function(){
            if(jQuery('#form').jVal()){
                jQuery.ajax({
                    type: "POST",
                    url: "saveuser.do",
                    dataType: "html",
                    data: "userId=" + encodeURIComponent(trim(document.forms[0].userId.value)),
                    success: function(response){
                        jQuery("#usergrid").trigger("reloadGrid");
                        clear();
                        alert("Details saved successfully!!!");
                    },
                    error: function(xhr, ajaxOptions, thrownError){
                        alert(xhr.status);
                        alert(thrownError);
                    }
                });
            }
        });

    On the second alert, where I alert thrownError, I get undefined, and the status code is 500. I am not sure where I am going wrong. Please let me know. Thanks, Dukes

    Read the article

  • How to create this MongoMapper custom data type?

    - by Kapslok
    I'm trying to create a custom MongoMapper data type in RoR 2.3.5 called Translatable:

        class Translatable < String
          def initialize(translation, culture="en")
          end

          def languages
          end

          def has_translation?(culture)
          end

          def self.to_mongo(value)
          end

          def self.from_mongo(value)
          end
        end

    I want to be able to use it like this:

        class Page
          include MongoMapper::Document
          key :title, Translatable, :required => true
          key :content, String
        end

    Then implement like this:

        p = Page.new
        p.title = "Hello"
        p.title(:fr) = "Bonjour"
        p.title(:es) = "Hola"
        p.content = "Some content here"
        p.save

        p = Page.first
        p.languages => [:en, :fr, :es]
        p.has_translation?(:fr) => true
        en = p.title => "Hello"
        en = p.title(:en) => "Hello"
        fr = p.title(:fr) => "Bonjour"
        es = p.title(:es) => "Hola"

    In MongoDB I imagine the information would be stored like:

        {
          "_id" : ObjectId("4b98cd7803bca46ca6000002"),
          "title" : { "en" : "Hello", "fr" : "Bonjour", "es" : "Hola" },
          "content" : "Some content here"
        }

    So Page.title is a string that defaults to English (:en) when the culture is not specified. I would really appreciate any help.

    Read the article

  • Custom Error Handling

    - by Michael
    Using GoDaddy to host my site (I know that's my first problem)! :-) I'm trying to set up custom error messages for my site using IIS7. GoDaddy allows you to set up a 404 in their control panel, but I can't override this, or set up any additional error redirects, specifically a 500 server error. Here is my web.config file:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Redirect to WWW" stopProcessing="true">
                  <match url=".*" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^mysite.com$" />
                  </conditions>
                  <action type="Redirect" url="http://www.mysite.com/{R:0}" redirectType="Permanent" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
          <system.web>
            <customErrors mode="On" defaultRedirect="http://www.mysite.com/oops.php">
              <error statusCode="404" redirect="http://www.mysite.com/oops.php?error=404" />
              <error statusCode="500" redirect="http://www.mysite.com/oops.php?error=500" />
            </customErrors>
          </system.web>
        </configuration>

    Read the article

  • How to build a GridView like DataBound Templated Custom ASP.NET Server Control

    - by Deyan
    I am trying to develop a very simple templated custom server control that resembles GridView. Basically, I want the control to be added in the .aspx page like this:

        <cc:SimpleGrid ID="SimpleGrid1" runat="server">
            <TemplatedColumn>ID: <%# Eval("ID") %></TemplatedColumn>
            <TemplatedColumn>Name: <%# Eval("Name") %></TemplatedColumn>
            <TemplatedColumn>Age: <%# Eval("Age") %></TemplatedColumn>
        </cc:SimpleGrid>

    and when providing the following data source:

        DataTable table = new DataTable();
        table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Age", typeof(int));
        table.Rows.Add(1, "Bob", 35);
        table.Rows.Add(2, "Sam", 44);
        table.Rows.Add(3, "Ann", 26);
        SimpleGrid1.DataSource = table;
        SimpleGrid1.DataBind();

    the result should be an HTML table like this one:

        -------------------------------
        | ID: 1 | Name: Bob | Age: 35 |
        -------------------------------
        | ID: 2 | Name: Sam | Age: 44 |
        -------------------------------
        | ID: 3 | Name: Ann | Age: 26 |
        -------------------------------

    The main problem is that I cannot define the TemplatedColumn. When I tried to do it like this ...

        private ITemplate _TemplatedColumn;

        [PersistenceMode(PersistenceMode.InnerProperty)]
        [TemplateContainer(typeof(TemplatedColumnItem))]
        [TemplateInstance(TemplateInstance.Multiple)]
        public ITemplate TemplatedColumn
        {
            get { return _TemplatedColumn; }
            set { _TemplatedColumn = value; }
        }

    ... and subsequently instantiated the template in CreateChildControls, I got the following result, which is not what I want:

        -----------
        | Age: 35 |
        -----------
        | Age: 44 |
        -----------
        | Age: 26 |
        -----------

    I know that what I want to achieve is pointless and that I can use DataGrid to achieve it, but I am giving this very simple example because if I know how to do this I would be able to develop the control that I need. Thank you.

    Read the article

  • Custom title of PreferenceActivity (problem)

    - by Emerald214
    I have the same problem as this question: Custom title bar in PreferenceActivity. After extending PreferenceActivity, I write this code in onCreate(), but it just shows a blank grey title. I think it is a bug (because this solution works well with a plain Activity).

        requestWindowFeature(Window.FEATURE_CUSTOM_TITLE);
        getWindow().setFeatureInt(Window.FEATURE_CUSTOM_TITLE, R.layout.window_title);
        super.onCreate(savedInstanceState);
        addPreferencesFromResource(R.xml.main_pref);

    Edited: window_title.xml

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="horizontal"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:background="@color/titleBar"
            android:gravity="center_vertical"
            android:paddingLeft="5dip"
            android:paddingRight="5dip">
            <ImageView
                android:id="@+id/imageView1"
                android:src="@drawable/megadict_icon"
                android:layout_height="35dip"
                android:layout_width="35dip"
                android:layout_gravity="center_vertical">
            </ImageView>
            <TextView
                android:id="@+id/textView1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:textColor="@color/white"
                android:textSize="16dip"
                android:layout_weight="1"
                android:layout_gravity="center_vertical"
                android:text="@string/appName"
                android:paddingLeft="5dip"
                android:paddingRight="5dip">
            </TextView>
            <ProgressBar
                style="?android:attr/progressBarStyleSmall"
                android:id="@+id/progressBar"
                android:layout_width="28dip"
                android:layout_height="28dip"
                android:layout_gravity="center_vertical"
                android:visibility="invisible">
            </ProgressBar>
        </LinearLayout>

    main_pref.xml

        <?xml version="1.0" encoding="utf-8"?>
        <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"
            android:title="@string/mainPrefTitle">
            <ListPreference
                android:entries="@array/languageStrings"
                android:entryValues="@array/languageValues"
                android:dialogTitle="@string/languagePrefTitle"
                android:title="@string/mainPrefTitle"
                android:key="languagePrefKey">
            </ListPreference>
        </PreferenceScreen>

    Read the article

  • Custom UITableViewCell changing indexPath While Scrolling ?

    - by Chris
    I have a custom UITableViewCell which I created in Interface Builder. I am successfully dequeuing cells, but as I scroll, the cells appear to start using different indexPaths. In this example, I am feeding the current indexPath.section and indexPath.row into customCellLabel. As I scroll the table up and down, some of the cells will change. The numbers can be all over the place, but the cells are not skipping around visually. If I comment out the if (cell == nil) block, the problem goes away. If I use a standard cell, the problem goes away. Ideas why this might be happening?

        // Customize the appearance of table view cells.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = (UITableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"CalendarEventCell"];
            if (cell == nil) {
                NSLog(@"Creating New Cell !!!!!");
                NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"CalendarEventCell" owner:self options:nil];
                cell = (UITableViewCell *)[nib objectAtIndex:0];
                cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
            }

            // Set up the cell...
            [customCellLabel setText:[NSString stringWithFormat:@"%d - %d", indexPath.section, indexPath.row]];

            return cell;
        }

    Read the article

  • Magento- custom attribute causes blank order number.

    - by frank
    Hi - I created a simple custom attribute on the sales/order entity. Now, for new orders, the order number is null. I looked at the sales_order table, and sure enough, increment_id is null. Can anyone help me out? I am stumped. This is my setup.php:

        public function getDefaultEntities()
        {
            return array(
                'order' => array(
                    'entity_model'    => 'sales/order',
                    //'attribute_model' => 'catalog/resource_eav_attribute',
                    'table'           => 'sales/order',
                    'attributes'      => array(
                        'pr_email_sent' => array(
                            'label'   => 'prEmailSent',
                            'type'    => 'varchar',
                            'default' => 'false'
                        ),
                    )
                )
            );
        }

    This is my config.xml:

        <fieldsets>
          <sales_order>
            <pr_email_sent><create>1</create><update>1</update></pr_email_sent>
          </sales_order>
        </fieldsets>

    Thanks.

    Read the article

  • SerializationException with custom GenericIdentiy?

    - by MunkiPhD
    I'm trying to implement my own GenericIdentity implementation but keep receiving the following error when it attempts to load the views (I'm using ASP.NET MVC):

        System.Runtime.Serialization.SerializationException was unhandled by user code
        Message="Type is not resolved for member 'OpenIDExtendedIdentity,Training.Web, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'."
        Source="WebDev.WebHost"

    I've ended up with the following class:

        [Serializable]
        public class OpenIDExtendedIdentity : GenericIdentity
        {
            private string _nickName;
            private int _userId;

            public OpenIDExtendedIdentity(String name, string nickName, int userId)
                : base(name, "OpenID")
            {
                _nickName = nickName;
                _userId = userId;
            }

            public string NickName
            {
                get { return _nickName; }
            }

            public int UserID
            {
                get { return _userId; }
            }
        }

    In my Global.asax I read a cookie's serialized value into a memory stream and then use that to create my OpenIDExtendedIdentity object. I ended up with this attempt at a solution after countless tries of various sorts. It works correctly up until the point where it attempts to render the views. What I'm essentially attempting to achieve is the ability to do the following (while using the default role manager from ASP.NET):

        User.Identity.UserID
        User.Identity.NickName
        ... etc.

    I've listed some of the sources I've read in my attempt to get this resolved. Some people have reported a Cassini error, but it seems like others have had success implementing this type of custom functionality - thus a boggling of my mind.

        http://forums.asp.net/p/32497/161775.aspx
        http://ondotnet.com/pub/a/dotnet/2004/02/02/effectiveformsauth.html
        http://social.msdn.microsoft.com/Forums/en-US/netfxremoting/thread/e6767ae2-dfbf-445b-9139-93735f1a0f72

    Read the article

  • Custom DateTime model binder in Asp.net MVC

    - by Robert Koritnik
    I would like to write my own model binder for the DateTime type. First of all, I'd like to write a new attribute that I can attach to my model property like:

        [DateTimeFormat("d.M.yyyy")]
        public DateTime Birth { get; set; }

    This is the easy part. But the binder part is a bit more difficult. I would like to add a new model binder for the type DateTime. I can either:

    - implement the IModelBinder interface and write my own BindModel() method, or
    - inherit from DefaultModelBinder and override the BindModel() method.

    My model has a property as seen above (Birth). So when the model binder tries to bind request data to this property, my model binder's BindModel(controllerContext, bindingContext) gets invoked. Everything is OK, but: how do I get the property's attributes from the controller/bindingContext, to parse my date correctly? How can I get to the PropertyDescriptor of the property Birth?

    Edit: Because of separation of concerns, my model class is defined in an assembly that doesn't (and shouldn't) reference the System.Web.Mvc assembly. Setting custom binding attributes (similar to Scott Hanselman's example) is a no-go here.

    Read the article

  • Custom basic authentication fails in IIS7

    - by manu08
    I have an ASP.NET MVC application, with some RESTful services that I'm trying to secure using custom basic authentication (they are authenticated against my own database). I have implemented this by writing an HttpModule. I have one method attached to the HttpApplication.AuthenticateRequest event, which calls this method in the case of an authentication failure:

        private static void RejectWith401(HttpApplication app)
        {
            app.Response.StatusCode = 401;
            app.Response.StatusDescription = "Access Denied";
            app.CompleteRequest();
        }

    This method is attached to the HttpApplication.EndRequest event:

        public void OnEndRequest(object source, EventArgs eventArgs)
        {
            var app = (HttpApplication) source;
            if (app.Response.StatusCode == 401)
            {
                string val = String.Format("Basic Realm=\"{0}\"", "MyCustomBasicAuthentication");
                app.Response.AppendHeader("WWW-Authenticate", val);
            }
        }

    This code adds the "WWW-Authenticate" header, which tells the browser to throw up the login dialog. This works perfectly when I debug locally using Visual Studio's web server. But it fails when I run it in IIS7. For IIS7 I have the built-in authentication modules all turned off, except anonymous. It still returns an HTTP 401 response, but it appears to be removing the WWW-Authenticate header. Any ideas?

    Read the article

  • How can I programmatically create this custom binding?

    - by user277040
    We've got to access a web service that uses SOAP 1.1... no problem, I'll just set the binding as:

        BasicHttpBinding wsBinding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);

    Nope. No dice. So I asked the host of the service why we're having authentication issues and he said that our config needed to have this custom binding:

        <bindings>
          <customBinding>
            <binding name="lbinding">
              <security authenticationMode="UserNameOverTransport"
                        messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11"
                        securityHeaderLayout="Strict"
                        includeTimestamp="false"
                        requireDerivedKeys="true"
                        keyEntropyMode="ServerEntropy">
              </security>
              <textMessageEncoding messageVersion="Soap11" />
              <httpsTransport authenticationScheme="Negotiate" requireClientCertificate="false" realm="" />
            </binding>
          </customBinding>
        </bindings>

    The only problem is we're creating our binding programmatically, not via the config. So if someone could point me in the right direction in regards to changing my BasicHttpBinding into a CustomBinding that conforms to the .config values provided, I'll give them a big shiny gold star for the day.

    Read the article

  • Render multiple control collections in ASP.NET custom control

    - by Monty
    I've built a custom WebControl, which has the following structure:

        <gws:ModalBox ID="ModalBox1" HeaderText="Title" runat="server">
            <Contents>(controls...)</Contents>
            <Footer>(controls...)</Footer>
        </gws:ModalBox>

    The control contains two ControlCollection properties, 'Contents' and 'Footer'. I've never tried to build a control with multiple control collections, but solved it like this (simplified):

        [PersistChildren(false), ParseChildren(true)]
        public class ModalBox : WebControl
        {
            private ControlCollection _contents;
            private ControlCollection _footer;

            public ModalBox() : base()
            {
                this._contents = base.CreateControlCollection();
                this._footer = base.CreateControlCollection();
            }

            [PersistenceMode(PersistenceMode.InnerProperty)]
            public ControlCollection Contents
            {
                get { return this._contents; }
            }

            [PersistenceMode(PersistenceMode.InnerProperty)]
            public ControlCollection Footer
            {
                get { return this._footer; }
            }

            protected override void RenderContents(HtmlTextWriter output)
            {
                // Render content controls.
                foreach (Control control in this.Contents)
                {
                    control.RenderControl(output);
                }

                // Render footer controls.
                foreach (Control control in this.Footer)
                {
                    control.RenderControl(output);
                }
            }
        }

    Although it seems to render properly, it doesn't work anymore if I add some ASP.NET labels and input controls inside the property. I get the HttpException: Unable to find control with id 'KeywordTextBox' that is associated with the Label 'KeywordLabel'. Somewhat understandable, because the label appears before the textbox in the control collection. However, with default ASP.NET controls it does work, so why doesn't this work? What am I doing wrong? Is it even possible to have two control collections in one control? Should I render it differently? Thanks for replies.

    Read the article

  • Display: Custom Control

    - by pipelinecache
    Hi folks, I've made a simple custom control:

        base.Style[HtmlTextWriterStyle.Position] = "absolute";

        if (this.Expanded) // If the Calendar is expanded.
            this.Style[HtmlTextWriterStyle.Display] = "block";
        else
            this.Style[HtmlTextWriterStyle.Display] = "none";

        // Create the containing table.
        Panel pnl = new Panel();
        pnl.ID = "pnl_" + this.ClientID;
        pnl.Enabled = true;

        SherlockIIITextBox txtCalendar = new SherlockIIITextBox();
        txtCalendar.ID = "Cal_" + this.ClientID;
        pnl.Controls.Add(txtCalendar);

        ImageButton imgButton = new ImageButton();
        imgButton.ImageUrl = "";
        imgButton.ID = "img_" + this.ClientID;
        imgButton.BorderStyle = BorderStyle.NotSet;
        imgButton.OnClientClick = "o = document.getElementById('" + this.ClientID + "');if (o.style.display=='none'){o.style.display='block';}else{o.style.display='none';}";
        pnl.Controls.Add(imgButton);

        pnl.RenderControl(writer);

    When I click the ImageButton it should render the calendar. But when I click the ImageButton nothing happens. Anyone got an idea?

    Read the article

  • Subscribe to the Button's events into a custom control

    - by ThitoO
    Do you know how I can subscribe to an event of the base class of my custom control? I've got a custom control with some dependency properties:

        public class MyCustomControl : Button
        {
            static MyCustomControl()
            {
                DefaultStyleKeyProperty.OverrideMetadata(
                    typeof(MyCustomControl),
                    new FrameworkPropertyMetadata(typeof(MyCustomControl)));
            }

            public ICommand KeyDownCommand
            {
                get { return (ICommand)GetValue(KeyDownCommandProperty); }
                set { SetValue(KeyDownCommandProperty, value); }
            }

            public static readonly DependencyProperty KeyDownCommandProperty =
                DependencyProperty.Register("KeyDownCommand", typeof(ICommand), typeof(MyCustomControl));

            public ICommand KeyUpCommand
            {
                get { return (ICommand)GetValue(KeyUpCommandProperty); }
                set { SetValue(KeyUpCommandProperty, value); }
            }

            public static readonly DependencyProperty KeyUpCommandProperty =
                DependencyProperty.Register("KeyUpCommand", typeof(ICommand), typeof(MyCustomControl));

            public ICommand KeyPressedCommand
            {
                get { return (ICommand)GetValue(KeyPressedCommandProperty); }
                set { SetValue(KeyPressedCommandProperty, value); }
            }

            public static readonly DependencyProperty KeyPressedCommandProperty =
                DependencyProperty.Register("KeyPressedCommand", typeof(ICommand), typeof(MyCustomControl));
        }

    And I want to subscribe to the Button's events (like MouseLeftButtonDown) to run some code in my custom control. Do you know how I can do something like this in the constructor?

        static MyCustomControl()
        {
            DefaultStyleKeyProperty.OverrideMetadata(
                typeof(MyCustomControl),
                new FrameworkPropertyMetadata(typeof(MyCustomControl)));

            MouseLeftButtonDownEvent += (object sender, MouseButtonEventArgs e) => "something";
        }

    Thanks for your help.

    Read the article

  • Trouble with custom WPF Panel-derived class

    - by chaiguy
    I'm trying to write a custom Panel class for WPF by overriding MeasureOverride and ArrangeOverride but, while it's mostly working, I'm experiencing one strange problem I can't explain. In particular, after I call Arrange on my child items in ArrangeOverride after figuring out what their sizes should be, they aren't sizing to the size I give them, and appear to be sizing to the size passed to their Measure method inside MeasureOverride. Am I missing something in how this system is supposed to work? My understanding is that calling Measure simply causes the child to evaluate its DesiredSize based on the supplied availableSize, and shouldn't affect its actual final size. Here is my full code (the Panel, by the way, is intended to arrange children in the most space-efficient manner, giving less space to rows that don't need it and splitting the remaining space up evenly among the rest - it currently only supports vertical orientation, but I plan on adding horizontal once I get it working properly):

        protected override Size MeasureOverride( Size availableSize )
        {
            foreach ( UIElement child in Children )
                child.Measure( availableSize );
            return availableSize;
        }

        protected override System.Windows.Size ArrangeOverride( System.Windows.Size finalSize )
        {
            double extraSpace = 0.0;

            var sortedChildren = Children.Cast<UIElement>().OrderBy<UIElement, double>(
                new Func<UIElement, double>( delegate( UIElement child ) {
                    return child.DesiredSize.Height;
                } ) );

            double remainingSpace = finalSize.Height;
            double normalSpace = 0.0;
            int remainingChildren = Children.Count;

            foreach ( UIElement child in sortedChildren )
            {
                normalSpace = remainingSpace / remainingChildren;
                if ( child.DesiredSize.Height < normalSpace ) // if == there would be no point continuing as there would be no remaining space
                    remainingSpace -= child.DesiredSize.Height;
                else
                {
                    remainingSpace = 0;
                    break;
                }
                remainingChildren--;
            }

            extraSpace = remainingSpace / Children.Count;

            double offset = 0.0;
            foreach ( UIElement child in Children )
            {
                //child.Measure( new Size( finalSize.Width, normalSpace ) );
                double value = Math.Min( child.DesiredSize.Height, normalSpace ) + extraSpace;
                child.Arrange( new Rect( 0, offset, finalSize.Width, value ) );
                offset += value;
            }

            return finalSize;
        }

    Read the article

  • Flex custom list selection not highlighting

    - by aip.cd.aish
    I want to create a custom list in Flex for an interface prototype. The list is supposed to have an image and 3 text fields. This is what I have done so far; the control displayed is what I want. But when I click on one of the items, the item does not appear (visually) to be selected. I am not sure how I would implement this. Here is my code so far:

        <s:List width="400" height="220" dataProvider="{arrColl}" alternatingItemColors="[#EEEEEE, white]">
            <s:itemRenderer>
                <fx:Component>
                    <mx:Canvas height="100">
                        <mx:Image height="90" width="120" source="{data.imageSource}"></mx:Image>
                        <mx:Label left="125" y="10" text="{data.title}" />
                        <mx:Label left="125" y="30" text="{data.type}" />
                        <mx:Label left="125" y="50" text="{data.description}" />
                    </mx:Canvas>
                </fx:Component>
            </s:itemRenderer>
        </s:List>

    Read the article

  • Scrollbars in ScrollView not showing after custom child view changes size

    - by Ted Hopp
    I have a ScrollView containing a client that is a vertical LinearLayout. The client, in turn, contains custom views that change height dynamically as a background thread does some work. The problem I'm having is that the ScrollView's vertical scroll bar is not updating correctly in Android 1.5. The most common problem occurs when the initial total height of the client is less than the viewport and then grows. Initially there is no scroll bar, and it does not show up when the client grows until I actually scroll the window with a touch gesture or other UI action. After that, the scroll bar shows up. However, the thumb does not update as the height continues to change from the background thread. If I scroll the view again through the UI, the thumb immediately corrects itself. My code for updating the height from the background thread is:

        post(mLayoutRequestor);
        postInvalidate();

    (mLayoutRequestor is a Runnable that just calls requestLayout().) This is done after I've recorded the new height in a variable mHeight. My onMeasure() method calls setMeasuredDimension with a height computed like this:

        private int measureHeight(int measureSpec) {
            int result;
            int specMode = MeasureSpec.getMode(measureSpec);
            int specSize = MeasureSpec.getSize(measureSpec);
            if (specMode == MeasureSpec.EXACTLY) {
                result = specSize;
            } else {
                result = mHeight;
                if (specMode == MeasureSpec.AT_MOST && result > specSize)
                    result = specSize;
            }
            return result;
        }

    Am I supposed to be calling something else besides requestLayout() to get the scroll bar to update correctly?

    Read the article
