Search Results

Search found 33468 results on 1339 pages for 'behaviour change'.

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential and had a long way to grow. Vendor architectures were proprietary, and limited in function at first, growing in capability and complexity over time. There were few if any "standards", let alone "open standards" and the concepts of "open systems", and "open architectures" were far in the future. Companies employed intelligent, talented and creative people to implement the best possible solutions for their company. At first, those solutions were "unique" to each company. As time progressed, standards emerged, companies shared knowledge, business capability supplied by technology grew, and companies continued to expand their use of technology. Taking advantage of change required companies to struggle through periodic "revolutionary" change cycles, struggling through costly changes that were fraught with risk, resulted in solutions with an increasingly shorter half-life, and frequently required altering existing business processes and retraining employees and partner businesses. The pace of technological invention and implementation grew at an ever increasing rate, making the "revolutionary" approach based upon "proprietary" or "closed" architectures or technologies no longer viable. Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast paced, highly charged, and competitive global economy that we have today, where the most successful companies are companies that are good at implementing, leveraging and exploiting change. Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where "evolutionary" change is crucial. Companies can no longer afford to build "unique IT", nor can they afford regular intervals of "revolutionary" change, with the associated costs and risks. Human ingenuity was once again up to the task, turning technology into a platform supporting business through evolutionary change, by employing "open": open standards; open systems; open architectures; and open solutions. Employing "open", enables companies to implement systems based upon technology, capability and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions, and processes down into the technology, rather than exposing technology to the business, allowing companies to focus on providing "unique business value" rather than "unique IT". The big question! Does moving from "older" technology that no longer meets the needs of today's business, to new "open" technology require yet another "revolutionary change"? A "revolutionary" change with a short half-life, camouflaging reality with great marketing? The answer is "perhaps". With the endless options available to choose from, it is entirely possible to implement a solution that may work well today, but in 5 years time will become yet another albatross for the company to bear. Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution. There is no such thing as a "1 size fits all" IT solution for business. 
    If all companies implemented business solutions based upon technology that required, or forced, the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through "unique business value". It would be equally difficult to "evolve" to meet or exceed business needs and keep up with today's rapid pace of change. How does one ensure that they do not jump from one trap directly into another? Or, to put it positively, there are solutions available today that can address these challenges and issues. How does one ensure that the buying decision of today will serve the business well for years into the future?

    Intelligent & informed decisions - "buying right". In a previous blog entry, we discussed the value of linking tactical to strategic. The key is driving the focus to what is best for your business, handling today's tactical issues while also aligning with a roadmap/strategy that is tightly aligned with your strategic business objectives. When considering the plethora of possible options that provide various approaches to solving today's complex business problems, it is extremely important to ensure that vendors supplying those options focus on what is best for your business, supply sufficient information, provide adequate answers to questions, address challenges, issues, concerns and objections honestly and openly, and focus on supplying solutions that are tailored for, and deliver, the most business value possible for your business.

    Here are a few questions to consider relative to the proposed options that should help ensure that today's solution doesn't become tomorrow's problem. Do the proposed solutions:
    - Solve the problem(s) you are trying to address?
    - Provide a solid foundation upon which to grow/enhance your business?
    - Provide tactical gains that align with and enable your strategic business goals/objectives?
    - Provide an infrastructure that can be leveraged with subsequent projects?
    - Solve problems for the business overall and the lines of business, or just IT?
    - Simplify your current environment?
    - Provide the basis for business efficiency, agility, and clarity (governance, risk, compliance; real-time business visibility and trend analysis)?
    - Does your IT staff have the knowledge/experience to successfully manage the proposed systems once they are deployed in production?

    Done well, you will be presented with options tailored to your business that enable you to drive the "unique business value" necessary to help your business stand out from others, creating a distinct competitive advantage, delivering what your customers need, when they need it, so you can attract new customers, new business, and grow top line revenue, all at a cost that provides a strong Return on Investment/Return on Assets. The net result is growth with managed cost, providing significantly improved profit margin and shareholder value.

    Read the article

  • Is there a viable alternative to the agile development methodology? [closed]

    - by Eric Wilson
    The two predominant software-development methodologies are waterfall and agile. When discussing these two, there is often much focus on the particular practices that distinguish them (pair programming, TDD, etc. vs. functional spec, big up-front design, etc.) But the real differences are far deeper, in that these practices come from a philosophy. Waterfall says: Change is costly, so it should be minimized. Agile says: Change is inevitable, so make change cheap. My question is, regardless of what you think of TDD or functional specs, is the waterfall development methodology really viable? Does anyone really think that minimizing change in software is a viable option for those that desire to deliver valuable software? Or is the question really about what sort of practices work best in our situations to manage the inevitable change?

    Read the article

  • Source Control and SQL Development &ndash; Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and Choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production, and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button (). Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, then making changes, such as altering a stored procedure becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.  
And when you have 800+ procedures like we do, that can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, do the change, save it, and the go back to source control to check in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that the SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it) and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier, or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script was only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.
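
    As a minimal sketch of the rerunnable script layout described in the footnote above (the procedure, table, and role names are hypothetical placeholders; your own scripts will use your shop's schemas, grants, and conventions):

        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.GetCustomer') AND type = N'P')
            DROP PROCEDURE dbo.GetCustomer;
        GO

        CREATE PROCEDURE dbo.GetCustomer
            @CustomerID INT
        AS
        BEGIN
            SELECT CustomerID, CustomerName
            FROM dbo.Customer
            WHERE CustomerID = @CustomerID;
        END
        GO

        -- Re-assign permissions, since the DROP above removes any previous grants
        GRANT EXECUTE ON dbo.GetCustomer TO AppUserRole;
        GO

    Because the DROP is guarded by the IF EXISTS check, the same file runs cleanly whether or not the procedure already exists, which is what lets one unmodified script travel from DEV through TEST to PROD.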

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter  needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know. Overall Impact Generally speaking, Twitter API v1.1 is semantically very much the same as it’s predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all.  The following sections describe what did  change. Authentication In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter.  You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results.  Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change. Different Rate Limits The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean.  A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for.  The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit… which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names.  I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code. 
Parameters Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status.  If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not  matter to you  depending on the requirements of your application, but you should be aware of it. Everything Else There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of.  Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future.  Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary The big changes to Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries.  Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect  most others to be small or affect a smaller percentage of developers.  You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo
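
    As a rough sketch of what a post-v1.1 Search query can look like, reading results from the Statuses list described above (the authorizer setup is omitted, and exact type and property names may vary slightly between LINQ to Twitter versions):

        // auth = a previously configured LINQ to Twitter authorizer (OAuth is now required for all queries)
        var twitterCtx = new TwitterContext(auth);

        var searchResponse =
            (from search in twitterCtx.Search
             where search.Type == SearchType.Search &&
                   search.Query == "LINQ to Twitter"
             select search)
            .SingleOrDefault();

        if (searchResponse != null && searchResponse.Statuses != null)
        {
            foreach (var status in searchResponse.Statuses)
            {
                Console.WriteLine(status.Text);
            }
        }

    Note that the query shape is the same as before the API change; only where the tweets live in the result (Statuses) is different.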

    Read the article

  • Uniquely identify a mobile device

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information. Sometimes you need to identify every device your app is installed on uniquely. This is for instance important where you have per-device licensing restrictions. For Win8 store apps, you can use ASHWID (Application Specific Hardware Identifier). ASHWID differs from app to app and from device to device. Any hardware change to the device will cause the unique ID to change. You can also distinguish a minor change from a major change to build a custom level of tolerance for what is considered a change. For instance, ejecting a USB stick is a minor change. The code snippet below shows you how to get the unique device ID. Read full article ....
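
    The author's own snippet is in the full article; as a rough sketch of the WinRT call involved (the helper name and the hex formatting here are my own choices, not from the article):

        using System;
        using Windows.Storage.Streams;
        using Windows.System.Profile;

        public static class DeviceId
        {
            public static string GetAshwidString()
            {
                // Passing null means no extra nonce is mixed into the token.
                HardwareToken packageToken = HardwareIdentification.GetPackageSpecificToken(null);
                IBuffer hardwareId = packageToken.Id;

                // Copy the opaque id bytes out of the buffer so they can be stored or compared later.
                DataReader reader = DataReader.FromBuffer(hardwareId);
                byte[] bytes = new byte[hardwareId.Length];
                reader.ReadBytes(bytes);
                return BitConverter.ToString(bytes);
            }
        }

    How you interpret or tolerate partial changes in the returned token (the minor vs. major change idea above) is covered in the full article.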

    Read the article

  • BeansBinding Across Modules in a NetBeans Platform Application

    - by Geertjan
    Here's two TopComponents, each in a different NetBeans module. Let's use BeansBinding to synchronize the JTextField in TC2TopComponent with the data published by TC1TopComponent and received in TC2TopComponent by listening to the Lookup. The key to getting to the solution is to have the following in TC2TopComponent, which implements LookupListener: private BindingGroup bindingGroup = null; private AutoBinding binding = null; @Override public void resultChanged(LookupEvent le) { if (bindingGroup != null && binding != null) { bindingGroup.getBinding("customerNameBinding").unbind(); } if (!result.allInstances().isEmpty()){ Customer c = result.allInstances().iterator().next(); // put the customer into the lookup of this topcomponent, // so that it will remain in the lookup when focus changes // to this topcomponent: ic.set(Collections.singleton(c), null); bindingGroup = new BindingGroup(); binding = Bindings.createAutoBinding( // a two-way binding, i.e., a change in // one will cause a change in the other: AutoBinding.UpdateStrategy.READ_WRITE, // source: c, BeanProperty.create("name"), // target: jTextField1, BeanProperty.create("text"), // binding name: "customerNameBinding"); bindingGroup.addBinding(binding); bindingGroup.bind(); } } I must say that this solution is preferable over what I've been doing prior to getting to this solution: I would get the customer from the resultChanged, set a class-level field to that customer, add a document listener (or action listener, which is invoked when Enter is pressed) on the text field and, when a change is detected, set the new value on the customer. All that is not needed with the above bit of code. Then, in the node, make sure to use canRename, setName, and getDisplayName, so that when the user presses F2 on a node, the display name can be changed. In other words, when the user types something different in the node display name after pressing F2, the underlying customer name is changed, which happens, in the first place, because the customer name is bound to the text field's value, so that the text field's value will also change once enter is pressed on the changed node display name. Also set a PropertyChangeListener on the node (which implies you need to add property change support to the customer object), so that when the customer object changes (which happens, in the second place, via a change in the value of the text field, as defined in the binding defined above), the node display name is updated. In other words, there's still a bit of plumbing you need to include. But less than before and the nasty class-level field for storing the customer in the TC2TopComponent is no longer needed. And a listener on the text field, with a property change listener implented on the TC2TopComponent, isn't needed either. On the other hand, it's more code than I was using before and I've had to include the BeansBinding JAR, which adds a bit of overhead to my application, without much additional functionality over what I was doing originally. I'd lean towards not doing things this way. Seems quite expensive for essentially replacing a listener on a text field and a property change listener implemented on the TC2TopComponent for being notified of changes to the customer so that the text field can be updated. On the other other hand, it's kind of nice that all this listening-related code is centralized in one place now. So, here's a nice improvement over the above. 
Instead of listening for a customer, listen for a node, from which the customer can be obtained. Then, bind the node display name to the text field's value, so that when the user types in the text field, the node display name is updated. That saves you from having to listen in the node for changes to the customer's name. In addition to that binding, keep the previous binding, because the previous binding connects the customer name to the text field, so that when the customer display name is changed via F2 on the node, the text field will be updated. private BindingGroup bindingGroup = null; private AutoBinding nodeUpdateBinding; private AutoBinding textFieldUpdateBinding; @Override public void resultChanged(LookupEvent le) { if (bindingGroup != null && textFieldUpdateBinding != null) { bindingGroup.getBinding("textFieldUpdateBinding").unbind(); } if (bindingGroup != null && nodeUpdateBinding != null) { bindingGroup.getBinding("nodeUpdateBinding").unbind(); } if (!result.allInstances().isEmpty()) { Node n = result.allInstances().iterator().next(); Customer c = n.getLookup().lookup(Customer.class); ic.set(Collections.singleton(n), null); bindingGroup = new BindingGroup(); nodeUpdateBinding = Bindings.createAutoBinding( AutoBinding.UpdateStrategy.READ_WRITE, n, BeanProperty.create("name"), jTextField1, BeanProperty.create("text"), "nodeUpdateBinding"); bindingGroup.addBinding(nodeUpdateBinding); textFieldUpdateBinding = Bindings.createAutoBinding( AutoBinding.UpdateStrategy.READ_WRITE, c, BeanProperty.create("name"), jTextField1, BeanProperty.create("text"), "textFieldUpdateBinding"); bindingGroup.addBinding(textFieldUpdateBinding); bindingGroup.bind(); } } Now my node has no property change listener, while the customer has no property change support. As in the first bit of code, the text field doesn't have a listener either. All that listening is taken care of by the BeansBinding code.  Thanks to Toni for help with this, though he can't be blamed for anything that is wrong with it, only thanked for anything that is right with it. 

    Read the article

  • TF203015 The Item $/path/file has an incompatible pending change. While trying to unshelve.

    - by drachenstern
    I'm using Visual Studio 2010 Pro against Team Server 2010 and I had my project opened (apparently) as a solution from the repo, but I should've opened it as "web site". I found this out during compile, so I went to shelve my new changes and deleted the project from my local disk, then opened the project again from source (this time as web site) and now I can't unshelve my files. Is there any way to work around this? Did I blow something up? Do I need to do maintenance at the server? I found this question on SO #2332685 but I don't know what cache files he's talking about (I'm on XP :\ ) EDIT: Found this link after posting the question, sorry for the delay in researching, still didn't fix my problem Of course I can't find an error code for TF203015 anywhere, so no resolution either (hence my inclusion of the number in the title, yeah?) EDIT: I should probably mention that these files were never checked in in the first place. Does that matter? Can you shelve an unchecked item? Is that what I did wrong? EDIT: WHAP - FOUND IT!!! Use "Undo" on the items that don't exist because they show up in pending changes as checkins.

    Read the article

  • DTD is prohibited in this XML document -- how to change permissions?

    - by frankadelic
    I am using a 3rd-party .NET component which requires an XML configuration file. I'm using this in an ASP.NET application. I get an error when I configure the XML with the following DTD:

        <!DOCTYPE prod-config SYSTEM "prod-config.dtd">

    The error is as follows: "For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings to false and pass the settings into the XmlReader.Create method." prod-config.dtd is sitting in the same directory as the XML config file. I don't have access to the component code to modify XmlReaderSettings, ProhibitDtd, etc. Is there another way I can modify or tag the XML file to permit the DTD to be accessed? (FYI, the component is the Oracle Coherence .NET client.)
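
    For reference, this is what the setting named in the error message looks like when you control the XmlReader creation yourself; it is only a sketch of the framework API, not something that can be applied inside the closed component:

        using System.Xml;

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ProhibitDtd = false;                       // .NET 2.0/3.5 property named in the error
        // settings.DtdProcessing = DtdProcessing.Parse;    // .NET 4.0+ replacement for ProhibitDtd

        using (XmlReader reader = XmlReader.Create("prod-config.xml", settings))
        {
            while (reader.Read())
            {
                // process the configuration document
            }
        }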

    Read the article

  • How to change the data in Telerik's RadGrid based on Calendar's selected dates?

    - by Jronny
    I was creating another user control with Telerik's RadGrid and Calendar.

        <%@ Register Assembly="Telerik.Web.UI" Namespace="Telerik.Web.UI" TagPrefix="telerik" %>
        <table class="style1">
            <tr>
                <td>From</td>
                <td>To</td>
            </tr>
            <tr>
                <td><asp:Calendar ID="Calendar1" runat="server" SelectionMode="Day"></asp:Calendar></td>
                <td><asp:Calendar ID="Calendar2" runat="server" SelectionMode="Day"></asp:Calendar></td>
            </tr>
            <tr>
                <td><asp:Button ID="btnSubmit" runat="server" Text="Submit" OnClick="btnSubmit_Click" /></td>
                <td><asp:Button ID="btnClear" runat="server" Text="Clear" OnClick="btnClear_Click" /></td>
            </tr>
        </table>
        <telerik:RadGrid ID="RadGrid1" runat="server">
            <MasterTableView CommandItemDisplay="Top"></MasterTableView>
        </telerik:RadGrid>

    and I am using LINQ in the code-behind:

        Entities1 entities = new Entities1();
        public static object DataSource = null;

        protected void Page_Load(object sender, EventArgs e)
        {
            if (DataSource == null)
            {
                DataSource = (from entity in entities.nsc_moneytransaction
                              select new { date = entity.transaction_date.Value, username = entity.username, cashbalance = entity.cash_balance })
                             .OrderByDescending(a => a.date);
            }
            BindData();
        }

        public void BindData()
        {
            RadGrid1.DataSource = DataSource;
        }

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            DateTime startdate = new DateTime();
            DateTime enddatedate = new DateTime();
            if (Calendar1.SelectedDate != null && Calendar2.SelectedDate != null)
            {
                startdate = Calendar1.SelectedDate;
                enddatedate = Calendar2.SelectedDate;
                var queryDateRange = from entity in entities.nsc_moneytransaction
                                     where DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) >= DateTime.Parse(startdate.ToShortDateString())
                                        && DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) <= DateTime.Parse(enddatedate.ToShortDateString())
                                     select new { date = entity.transaction_date.Value, username = entity.username, cashbalance = entity.cash_balance };
                DataSource = queryDateRange.OrderByDescending(a => a.date);
            }
            else if (Calendar1.SelectedDate != null)
            {
                startdate = Calendar1.SelectedDate;
                var querySetDate = from entity in entities.nsc_moneytransaction
                                   where entity.transaction_date.Value == startdate
                                   select new { date = entity.transaction_date.Value, username = entity.username, cashbalance = entity.cash_balance };
                DataSource = querySetDate.OrderByDescending(a => a.date);
            }
            BindData();
        }

        protected void btnClear_Click(object sender, EventArgs e)
        {
            Calendar1.SelectedDates.Clear();
            Calendar2.SelectedDates.Clear();
        }

    The problems are: (1) when I click the submit button, the data in the RadGrid is not changed; (2) how can we check if there is nothing selected in the Calendar controls, because there is a date (01/01/0001) set even if we do not select anything from that calendar, thus Calendar1.SelectedDate != null is not enough. =( Thanks.

    Read the article

  • haml with rails3 (git master) and devise: form_for syntax change breaks haml -- suggestions?

    - by z3cko
    I am trying to get Haml working with a Rails 3 project; since I am quite far into the modeling, I wanted to move on to the Haml views now. It seems that the current Haml (git master) does not work together with the current Rails 3 git master because of some syntax changes in the Rails 3 form_for. Does anyone have more information on the syntax changes? Is there a temporary workaround to use Haml with Rails 3? (I am on a deadline) :( See also: http://j.mp/9EYraQ Thanks!

    Read the article

  • jQuery onkeyup event in textarea that does not fire when nothing changes.

    - by Kucebe
    I was thinking of a function that checks the key value pressed and accepts only the most common characters (Delete, Enter, ...), because I need it only for the basic ASCII character set (not ñ, ç, Chinese chars, ...). Or a function that, after every keyup, checks if the value has changed. But does jQuery really not have an event handler for this situation? Oh, it should be cross-browser.

    Read the article

  • How do I change the URL for the WordPress author archive page?

    - by Ben Burleson
    Instead of www.example.com/author/xyz, I want to use www.example.com/artist/xyz. I was hoping it was as easy as copying author.php to artist.php in my theme directory, but no such luck. Where does WordPress handle the special processing for the author archive pages? .htaccess rewriting is another option, but I wasn't able to get anything to work with the existing WordPress rewrite rules. Thanks.

    Read the article

  • How to change stack size for a .NET program?

    - by carter-boater
    I have a program that makes recursive calls about 2 billion times and the stack overflows. I made changes, and it still needs 40K recursive calls, so I probably need several MB of stack memory. I heard the stack size defaults to 1MB. I tried searching online; someone said to go to Properties > Linker ... in Visual Studio, but I cannot find it. Does anybody know how to increase it? Also, I am wondering if I can set it somewhere in my C# program? P.S. I am using 32-bit WinXP and 64-bit Win7.
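
    One approach commonly used from C# itself, rather than linker settings, is to run the deep recursion on a thread created with an explicit maximum stack size. This is only a sketch, with the 16 MB figure and the depth limit picked arbitrarily to match the question:

        using System;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                // Second argument is the maximum stack size in bytes (here 16 MB instead of the ~1 MB default).
                var worker = new Thread(() => Recurse(0), 16 * 1024 * 1024);
                worker.Start();
                worker.Join();
            }

            static void Recurse(int depth)
            {
                if (depth >= 40000) return;   // roughly the recursion depth mentioned above
                Recurse(depth + 1);
            }
        }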

    Read the article

  • How to change the colors of a legend item in flex legend?

    - by AngelHeart
    In my Flex chart I changed the fill of the PieSeries to use custom colors (colors that I had prepared to be used according to values in the data provider of the pie chart)... The problem is that the legend that is linked to my PieChart still shows the Flex default colors and not the new colors from the PieChart series! Any idea how I can render the marker fill color of the Flex legend items to match the colors in the pie chart?

    Read the article

  • How do I change the frame position for a custom MKAnnotationView?

    - by andrei
    I am trying to make a custom annotation view by subclassing MKAnnotationView and overriding the drawRect method. I want the view to be drawn offset from the annotation's position, somewhat like MKPinAnnotationView does it, so that the point of the pin is at the specified coordinates, rather than the middle of the pin. So I set the frame position and size as shown below. However, it doesn't look like I can affect the position of the frame at all, only the size. The image ends up being drawn centered over the annotation position. Any tips on how to achieve what I want?

    MyAnnotationView.h:

        @interface MyAnnotationView : MKAnnotationView {
        }

    MyAnnotationView.m:

        - (id)initWithAnnotation:(id <MKAnnotation>)annotation reuseIdentifier:(NSString *)reuseIdentifier
        {
            if (self = [super initWithAnnotation:annotation reuseIdentifier:reuseIdentifier]) {
                self.canShowCallout = YES;
                self.backgroundColor = [UIColor clearColor];
                // Position the frame so that the bottom of it touches the annotation position
                self.frame = CGRectMake(0, -16, 32, 32);
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect
        {
            [[UIImage imageNamed:@"blue-dot.png"] drawInRect:rect];
        }

    Read the article

  • Change the Default Application Pool in IIS7 using .net?

    - by EdenMachine
    I'm using the following function to create an IIS7 Application and/or Virtual Directory. How would I also set the Application to use a different Application Pool?

        Private Sub CreateVirtualDir(ByVal WebSite As String, ByVal AppName As String, ByVal Path As String, Optional ByVal IsApplication As Boolean = True, Optional ByVal RunScripts As Boolean = True, Optional ByVal IsWrite As Boolean = True)
            Dim IISSchema As New System.DirectoryServices.DirectoryEntry("IIS://" & WebSite & "/Schema/AppIsolated")
            Dim CanCreate As Boolean = Not IISSchema.Properties("Syntax").Value.ToString.ToUpper() = "BOOLEAN"
            IISSchema.Dispose()
            If CanCreate Then
                Dim PathCreated As Boolean
                Try
                    Dim IISAdmin As New System.DirectoryServices.DirectoryEntry("IIS://" & WebSite & "/W3SVC/1/Root")
                    'make sure folder exists
                    If Not System.IO.Directory.Exists(Path) Then
                        System.IO.Directory.CreateDirectory(Path)
                        PathCreated = True
                    End If
                    'If the virtual directory already exists then delete it
                    For Each VD As System.DirectoryServices.DirectoryEntry In IISAdmin.Children
                        If VD.Name = AppName Then
                            IISAdmin.Invoke("Delete", New String() {VD.SchemaClassName, AppName})
                            IISAdmin.CommitChanges()
                            Exit For
                        End If
                    Next VD
                    'Create and setup new virtual directory
                    Dim VDir As System.DirectoryServices.DirectoryEntry = IISAdmin.Children.Add(AppName, "IIsWebVirtualDir")
                    VDir.Properties("Path").Item(0) = Path
                    If IsApplication Then
                        VDir.Properties("AppFriendlyName").Item(0) = AppName
                    End If
                    VDir.Properties("EnableDirBrowsing").Item(0) = False
                    VDir.Properties("AccessRead").Item(0) = True
                    VDir.Properties("AccessExecute").Item(0) = False
                    VDir.Properties("AccessWrite").Item(0) = IsWrite
                    VDir.Properties("AccessScript").Item(0) = RunScripts
                    VDir.Properties("AuthNTLM").Item(0) = True
                    VDir.Properties("EnableDefaultDoc").Item(0) = True
                    VDir.Properties("DefaultDoc").Item(0) = "default.htm,default.aspx,default.asp"
                    VDir.Properties("AspEnableParentPaths").Item(0) = True
                    'VDir.Properties("AppCreate").Item(0) = False
                    VDir.CommitChanges()
                    'the following are acceptable params
                    'INPROC = 0
                    'OUTPROC = 1
                    'POOLED = 2
                    If IsApplication Then
                        VDir.Invoke("AppCreate", 1)
                    Else
                        VDir.Invoke("AppCreate", False)
                    End If
                Catch Ex As Exception
                    If PathCreated Then
                        System.IO.Directory.Delete(Path)
                    End If
                    'MsgBox(Ex.Message)
                End Try
            End If
        End Sub

    Read the article

  • How to change stack size for a C# program?

    - by carter-boater
    Dear friends, I have a program that makes recursive calls about 2 billion times and the stack overflows. I made changes, and it still needs 40K recursive calls, so I probably need several MB of stack memory. I heard the stack size defaults to 1MB. I tried searching online; someone said to go to Properties > Linker ... in Visual Studio, but I cannot find it. Does anybody know how to increase it? Also, I am wondering if I can set it somewhere in my C# program? P.S. I am using 32-bit WinXP and 64-bit Win7. Thanks a lot.

    Read the article

  • How do I modify the XSL to change the XML format?

    - by user323719
    In the below XSL:

        <xsl:param name="insert-file" as="document-node()" />

        <xsl:template match="*">
            <xsl:variable name="input">My text</xsl:variable>
            <xsl:variable name="Myxml" as="element()*">
                <xsl:call-template name="populateTag">
                    <xsl:with-param name="nodeValue" select="$input"/>
                </xsl:call-template>
            </xsl:variable>
            <xsl:copy-of select="$Myxml"></xsl:copy-of>
        </xsl:template>

        <xsl:template name="populateTag">
            <xsl:param name="nodeValue"/>
            <xsl:for-each select="$insert-file/insert-data/data">
                <xsl:choose>
                    <xsl:when test="@index = 1">
                        <a><xsl:value-of select="$nodeValue"></xsl:value-of></a>
                    </xsl:when>
                </xsl:choose>
            </xsl:for-each>
        </xsl:template>

    I am getting the output as:

        <?xml version="1.0" encoding="UTF-8"?>
        <a>My text</a>
        <a>My text</a>
        <a>My text</a>
        <a>My text</a>

    I want the template "populateTag" to return me the XML in the below format. How do I modify the template "populateTag" to achieve the same? Expected output from template "populateTag":

        <?xml version="1.0" encoding="UTF-8"?>
        <a><a><a><a>My text</a></a></a></a>

    Please give your ideas.

    Read the article

  • ASPNETDB and ASPSTATE database. How to change the connectionstrings?

    - by George
    I have two ASP-specific SQL Server databases:

    1) ASPState - to store session state
    2) ASPNETDB - to store security/role stuff

    In my web.config, I am specifying the connection string used to identify the location of the ASPState database:

        <sessionState mode="SQLServer" sqlConnectionString="server=(local)\sql2008b;uid=sa;pwd=iainttelling;" timeout="120"/>

    Where is the connection string specified for the ASPNETDB database? I am trying to point it to a db on a remote server. I have a feeling it is somewhere in IIS or the machine.config. I'd like to add it to my web.config. Could someone help me to do this?
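
    For reference, the built-in membership and role providers locate ASPNETDB through the connection string named LocalSqlServer, which is defined in machine.config and referenced by the providers' connectionStringName attribute. One common approach is to override that name in web.config; in this sketch the server name and credentials are placeholders:

        <connectionStrings>
            <remove name="LocalSqlServer" />
            <add name="LocalSqlServer"
                 connectionString="Data Source=RemoteServer\sql2008b;Initial Catalog=aspnetdb;User ID=appUser;Password=secret;"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    Alternatively, you can declare a differently named connection string and point the individual membership/roleManager provider entries at it via connectionStringName.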

    Read the article

  • How to change the number of tabs in a tab bar controller application?

    - by hib
    Hi, I am developing an iPhone tab bar application with 5 tabs. I want to show only two tabs at launch time, one of which is "locate me". When the user taps on the locate me tab, another 3 tabs will be shown and can use the current location. I want to do something like "Urban Spoon". I am using Interface Builder for all the stuff. If anyone has any ideas, suggestions, or links, please provide them. Thanks.

    Read the article

  • How can I make a custom layout / change header background color … with TeX, LaTeX, ConTeXt?

    - by harobed
    Hi, currently I produce this document http://download.stephane-klein.info/exemple_document.png dynamically with Python ReportLab… to produce PDF documents. Now I would like to try to produce this document with TeX / LaTeX / ConTeXt… I have some questions:
    - How can I make the layout?
    - How can I set the header background color?
    - How can I define my custom title (with the blue box)?
    - What is the better choice for my project: LaTeX or ConTeXt?
    - What packages do I need to use? geometry? fancyhdr?
    - Do you have some examples? Some resources?
    Yesterday I read a lot of documentation… and I didn't find a solution / example for my questions. Thanks for your help, Stephane

    Read the article
