Search Results

Search found 2036 results on 82 pages for 'jim smith'.

Page 15/82 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Weekly Cloud Roundup 2012-15

    - by Alan Smith
    Filtering the informative, insightful and quirky from the fire hose of cloud-based hype. Irving Wladawsky-Berger provides some great insight into The Complex Transition to the Cloud, sharing his views on the slow adoption of cloud computing in organizations: “…a prediction by the research firm Gartner that while cloud computing will continue to grow at almost 20 percent a year, it will account for less than 5 percent of total IT spending in 2015.” With a more positive mindset, Balaji Viswanathan highlights 7 Salient Trends and Directions in Cloud Computing that could be shaping the industry over the next few years. Cloud computing also looks to save energy: “A small business with 100 users that moved the Microsoft applications to the cloud could cut energy use and carbon emissions by 90%. Large organizations with 10,000 users saw a 30% reduction.” More on that story here. The expansion of Windows Azure has been in the news with the announcement of the “East US” and “West US” datacenters; this was covered by Visual Studio Magazine and Mary-Jo, and according to thenextweb.com Microsoft is also building a $112 million data center in Wyoming. The cloud price war is still in full swing, with Joe Panettieri discussing the pricing of Windows Azure and Office 365 and asking How Low Can It Go?

    Read the article

  • What happened to this type of naming convention?

    - by Smith
    I have read many docs about naming conventions, most recommending both Pascal and camel case. Well, I agree with this, it's OK. This might not be pleasing to some, but I am just trying to get your opinion on why you name your objects and classes in a certain way. What happened to this type of naming convention, or why is it bad? When I want to name a struct, I prefix it with "struct". My reason: in IntelliSense I see all the structs in one place, and anywhere I see the struct prefix, I know it's a struct: structPerson, structPosition. Another example is the enum, although I may not prefix it with "enum", but maybe with "enm": enmFruits, enmSex. Again, my reason is so that in IntelliSense I see all my enums in one place. Because .NET has so many built-in data structures, I think this helps me do less searching. I used .NET in this example, but I welcome language-agnostic answers.
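    For illustration, here is a small C# sketch of the prefixing convention described above, next to the usual PascalCase style; the type names are invented for the example:

        // Prefix convention from the question: all structs and enums group together in IntelliSense.
        public struct structPerson { public string Name; public int Age; }
        public struct structPosition { public double X; public double Y; }
        public enum enmFruits { Apple, Orange, Banana }
        public enum enmSex { Male, Female }

        // The conventional .NET style for comparison: plain PascalCase, no type prefix.
        public struct Person { public string Name; public int Age; }
        public enum Fruit { Apple, Orange, Banana }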

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which would stay the same when content was copied to another WCC instance. To support migration from older versions of WCC, they left the default storage rule unchanged, created a new storage rule called DispByContentId, and made that the default storage rule for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough it was no longer default, but was now DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed when I was creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

    Read the article

  • Writing Selenium tests, should I just get it done or get it right?

    - by Peter Smith
    I'm attempting to drive my user interface (heavy on JavaScript) through Selenium. I've already tested the rest of my Ajax interaction with Selenium successfully. However, this one particular method seems to be eluding me because I can't seem to fake the correct click event. I could solve this problem by simply waiting in the test for the user to click a point and then continuing with the test, but this seems like a cop-out. On the other hand, I'm really running out of time on my deadline to have this done and working. Should I just get this done and move on, or should I spend the extra (unknown) amount of time to fix this problem and be able to have my Selenium tests 100% automated?
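    As a rough sketch of the kind of automation being weighed here (using the Selenium WebDriver C# bindings; the URL, element id and offsets are made up for illustration), a synthetic click can be driven from the test itself rather than waiting for a human:

        // Hypothetical example: click a point inside a JavaScript-heavy element during the test run.
        using OpenQA.Selenium;
        using OpenQA.Selenium.Chrome;
        using OpenQA.Selenium.Interactions;

        class ClickSketch
        {
            static void Main()
            {
                IWebDriver driver = new ChromeDriver();
                driver.Navigate().GoToUrl("http://localhost/myapp");        // assumed test URL

                IWebElement canvas = driver.FindElement(By.Id("mapCanvas")); // assumed element id

                // Move to an offset within the element and click there,
                // which raises the same mouse events a real user's click would.
                new Actions(driver)
                    .MoveToElement(canvas, 150, 75)
                    .Click()
                    .Perform();

                driver.Quit();
            }
        }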

    Read the article

  • What is wrong with Unix/Linux?

    - by John Smith
    This is a genuine question motivated by Ideal Operating System. When I moved from DOS to Linux in the late 90s it was an eye-opener for me. Long file names, arbitrarily many extensions, etc. Now I look at Linux and Unix and see all sorts of issues. Here are things I see which could be fixed. Too much depends on root, and rootly powers cannot be voluntarily delegated over several users. (I would love to give up my power to manage printers and delegate the job to another account.) File permissions are very limited, and there is not much metadata to go with files. The "everything is a file" metaphor is not true; Plan 9 gets it right(er).

    Read the article

  • What are some good interview questions for a position that consists of reviewing code for security vulnerabilities?

    - by John Smith
    The position is an entry-level position that consists of reading C++ code and identifying lines of code that are vulnerable to buffer overflows, out-of-bounds reads, uncontrolled format strings, and a bunch of other CWEs. We don't expect the average candidate to be knowledgeable in the area of software security, nor do we expect him or her to be an expert computer programmer; we just expect them to be able to read the code and correctly identify vulnerabilities. I guess I could ask them the typical interview questions: reverse a string, print a list of prime numbers, etc., but I'm not sure that their ability to write code under pressure (or lack thereof) tells me anything about their ability to read code. Should I instead focus on testing their knowledge of C++? Ask them if they understand what a pointer is and how bitwise operators work? My only concern about asking that kind of question is that I might unfairly weed out people who don't happen to have the knowledge but have the ability to acquire it. After all, it's not like they will be writing a single line of code, and it's not like we are looking only for people who already know C++, since we are willing to train the right candidate. (It is true that I could ask those questions only to those candidates who claim to know C++, but I'd like to give the same "test" to everyone.) Should I just focus on trying to get an idea of their level of intelligence? In other words, should I get them to talk and pay attention to the way they articulate their thoughts, and so on?

    Read the article

  • Gems In The Visual Studio 2010 Training Kit – Introduction to MEF: Learning Labs

    - by Jim Duffy
    No, this post doesn’t have anything to do with cooking up illegal drugs in some rundown shack outside of town. That, my friends, would be a meth lab and fortunately that is waaaaay outside my area of expertise. Now I can talk Kentucky bourbon, or as Homer Simpson would say “mmmmmmmmmmm bourbon”, with you but please refrain from asking me meth questions. :-) Anyway, what I’m talking about are the MEF (Managed Extensibility Framework) Learning Labs contained in the Visual Studio 2010 and .NET Framework 4 Training Kit. Not sure what MEF is and need an overview? Then start here or here. Ok, so you’ve read a bit about MEF or heard about MEF and you’re thinking it might be something you and your development team might want to take a hands-on look at. I have good news then because contained in the Visual Studio 2010 and .NET Framework 4 Training Kit is a series of hands-on learning labs for MEF. I’ve added working my way through them to my “things I want to take a closer look at” list. Have a day. :-|

    Read the article

  • Question about an offer letter from a tech company [migrated]

    - by paul smith
    I just received an offer letter from a tech company and I am curious whether it is normal practice to state this in the offer letter: "Your salary will be reviewed on a regular cycle as dictated by company policy"? Is this normal? To me it sounds a little shady, but I might just be thinking too much, which is why I'd like to hear from others who've seen/received offer letters from tech companies before.

    Read the article

  • Cannot Boot Win XP or Ubuntu from hard drive - get Input Not Supported

    - by Jim Hudspeth
    1) Downloaded the 11.10 ISO file to a Dell XP workstation
    2) Made a bootable USB using Pendrivelinux
    3) Installed to the hard drive using option 1 (Install alongside Windows)
    4) Rebooted when instructed
    5) Booted into Ubuntu just fine (first time)
    6) Attempted a restart - got the first splash screen followed by "input not supported" - tapped ESC and eventually got into Ubuntu
    7) Later attempts failed - got "input not supported"; no eventual boot
    8) Many retries holding / tapping various keys - same result
    9) Booted from USB - all files appear to be in place - can access GRUB on the hard drive
    Suggestions appreciated - must be able to boot XP. Thanks in advance

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call.  I would say that the majority of the pages I get are false alarms.  The alerts that require action often require me to open an SSIS package, see where the trouble is and try to identify the offending data.  That can be very time-consuming and can take quite a chunk out of my beauty sleep.  For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier.  Let me first say, this is a high level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along. In the Data Flow above you can see that there is a caution triangle.  For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected.  The first thing we need to do is to redirect the error output.  Double-clicking on the Query shape presents us with the properties window for the input.  Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply. Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation.  Therefore, to override this process for a column or condition, simply do not redirect that column or condition. The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table.  REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.    I added 3 columns to my error record; Severity, Package Name and Step Name.  Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc.  Package Name and Step Name are system variables. In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output.  Some records will pass without truncation, others will be sent to the error output.  However, the package will not fail. We can see that of the 14 input rows, 8 were redirected to the error table. This information can be used by another step or another scheduled process or triggered to determine whether an error should be sent.  It can also be used as a historical record of the errors that are encountered over time.  There are other system variables that might make more sense in your infrastructure, so try different things.  Date and time seem like something you would want in your output for example.  In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table.  The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent.  This will eliminate false alarms, and give you a head start when a genuine error occurs.
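    As a hedged sketch of the downstream check described above (not part of the original package), a small C# job could read the error table and decide whether an alert is really warranted; the connection string, table and column names here (dbo.LoadErrors, Severity, PackageName) are assumptions for illustration only:

        using System;
        using System.Data.SqlClient;

        class ErrorTriage
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=ETL;Integrated Security=true;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.LoadErrors " +
                    "WHERE Severity = 'FATAL' AND PackageName = @pkg", conn))
                {
                    cmd.Parameters.AddWithValue("@pkg", "LoadCustomerFeed"); // assumed package name
                    conn.Open();

                    int fatalRows = (int)cmd.ExecuteScalar();

                    // Only a genuine failure wakes anyone up; soft errors wait for morning.
                    Console.WriteLine(fatalRows > 0
                        ? "Send the 3AM page."
                        : "Log it and go back to sleep.");
                }
            }
        }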

    Read the article

  • Blogger Template: How is an inline style tag getting attached to img? [migrated]

    - by john Smith
    Examining a Blogger template's img tag (data:post.thumbnailUrl) I've run into a mystery. An inline style attribute controlling the width, margin and height properties is getting added to my img element. It auto-adjusts the image's ratio to fit a smaller size, but I can't figure out where this style tag script lives or how it's happening in my template. My template has no special javascript or jquery scripts. The full-size images on the single posts page don't have this style tag. Is this a CSS or XML feature? element.style { margin-top: 0px; width: 301.0033444816054px; height: 200px; margin-left: -0.5016722408026908px; }

    Read the article

  • How to handle this unfortunately non-hypothetical situation with end-users?

    - by User Smith
    I work in a medium-sized company but with a very small IT force. Last year (2011), I wrote an application that is very popular with a large group of end-users. We hit a deadline at the end of last year and some functionality (I will call it funcA from now on) that was wanted at the very end was not added into the application. So, this application has been running in live/production since the end of 2011, I might add without issue. Yesterday, a whole group of end-users started complaining that funcA, which was never in the application, is no longer working. Our priority at this company is that if an application is broken it must be fixed first, prior to prioritized projects. I have compared code and queries and there is no difference since 2011, which is proofA. I then was able to get one of the end-users to admit that it never worked (proofB), but since then that end-user has gone back and said that it was working previously......I believe the horde of end-users has assimilated her. I have also reviewed my notes for this project, which contain requirements and daily updates regarding the project and specifically state, "funcA not achieved due to time constraints", proofC. I have spoken with many of them and I can see where they could be confused as they are very far from a programming background, but I also know they are intelligent enough to act as a group in order to bypass project prioritization orders to get functionality that they want to make their job easier. The worst part is that now groupthink is setting in and my boss and the head of IT are actually starting to believe them, even though there are no code or query changes. As far as reviewing the state of the logic, it is very cut and dried, to the point of: if 1 = 1, funcA will not work. So, this is the end of the description of my scenario, but I am trying not to get severely dinged on my performance metrics due to this, which would essentially have me moved to fixing a production problem that doesn't exist and that will probably take over 1 month. I am looking for direct answers to this question. This question is not for rants, polling, or discussions as this is not the format for StackExchange. Please don't downvote me too terribly; it is pretty common on this specific site of Stack. I am looking for honest answers to this situation and I couldn't find a more appropriate forum.

    Read the article

  • Ubuntu One Preferences does not show usage, name, e-mail, current plan

    - by Jim
    Ubuntu One is correctly synchronizing selected files between two computers running Ubuntu 10.10. When I open Ubuntu One Preferences, Account tab, on one computer it does not display the usage, name, e-mail or current plan. On the other computer all information is shown correctly. On the Devices tab the 2 computers are not shown. They do show correctly on the other computer. Any ideas on how to fix this problem? I have reinstalled Ubuntu One per this link.

    Read the article

  • Windows 8 Developer Camp - Raleigh September 25th

    - by Jim Duffy
    Time is ticking away and the time to act is now! How's that for some motivation? :-)  Microsoft Developer Evangelist Brian Hitney and I want to help you get your app in the store in time for the October 26 Windows 8 launch. Come join us on Tuesday, September 25, at 9:00 AM in the Microsoft RTP offices to learn how simple it can be to construct a world-class Windows 8 application. Don't think you can be ready to join the Windows 8 launch? Come anyway. You might be surprised. Don't have any idea what kind of app to build? Come anyway. There are plenty of places to look for inspiration. Either way you can learn what it takes to create or tune an app for Windows 8 and publish it in the Windows App Store. Learn more about this free event on the registration page. Registration is now open and space is limited. Have a day.

    Read the article

  • Alternative to GoDaddy's ConsoliDate feature (change domain expiration date)

    - by Jim
    I've been using GoDaddy to manage about 50 domain names for a few years, but recently decided to move (probably to Namecheap) because of the elephant-killing incident. One GoDaddy feature I like a lot is ConsoliDate, which allows you to change the expiration date of domain names for a small fee. I've searched for a while but didn't find any other registrar that provides this feature. Does anyone know of a registrar that allows you to change the expiration date of domains? Thanks!

    Read the article

  • How to mass-import accounts into iKode Newsletter Server?

    - by Brownsithily Smith
    I have sent out emails to my 6.2k subscribers through iKode Newsletter Server, and only about 50 were flagged as spam. That is less than 1%. It is amazing! The web-based email marketing software of iKode also provides a double opt-in subscription form, which is effective for targeting a specific audience. However, if I want to import a mailing list into this software, I need to add the addresses one by one, which is a waste of time. Does iKode provide a mass account import ability? Or do I just need to upload a mailing list file?

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white, yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We're probably all familiar with the concept of "Done Done", which represents what it *actually* means for a feature to be done. Not just when a developer says he's done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of "Done Done", they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique is typically applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoD's for each of these concepts (Stories/Sprints/Releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoD's. Here are some examples: code reviews, StyleCop, FxCop, user manuals updated, architecture docs updated, tested by QA, tested by UAT, installers created, support knowledge base updated, deployment instructions (for Ops) written, automated unit tests run, automated integration tests run. Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down to User Story immediately, but as a team we figure out what we think we're capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoD's. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, and doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.
    Sure, there are other aspects to maturity outside of this, but it's a great technique, that's easy to visualize, to drive agility into the team. Just keep moving those activities (aka "gates") down the board from Release->Sprint->Story. I'll try to give an example of what a recent client of mine had for their DoD's (this is from memory, so probably not 100% accurate):
    Release: create/update deployment instructions for Ops; instructional videos updated; run manual regression test suite; UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn't break anything outside our system).
    Sprint: deploy to UAT environment (but not necessarily actually request that UAT testing occur); user guides updated; sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the Sprint Demo).
    User Story: manual test scripts developed and run; tested by BA; deployed in shared QA environment using the automated deployment process; peer code review.
    Code Check-In: compiled (warning-free); passes StyleCop; passes FxCop; create installer packages; run automated tests; run automated integration tests.
    PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a Sprint meaningless – it's done on the end date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the Sprint is done regardless when the end date rolls around – if the DoD activities haven't been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word but you get the idea). In the Retrospective, that will become an agenda item to discuss, to understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Best Option for Creating A Small Church Website

    - by Jim
    I've been asked to create a website for a small church. Their prior site was hosted on GeoCities, which is no longer around. They are not looking for anything robust, just an informational site with a calendar and maybe a contact form. The church would also like to be able to administer the site with little technical know-how. Cost is also an issue. Given these requirements, something like sites.google.com seems like a good option. However, my main concern is that Sites will suffer the same fate as GeoCities. It is definitely not a flagship Google product. Are there other good alternatives that fit the requirements?

    Read the article

  • ASP.NET MVC for the Rest of Us Videos now available

    - by Jim Duffy
    Microsoft Senior Program Manager, Joe Stagner, has released his first 3 ASP.NET MVC for the Rest of Us Videos. I like the way he helps you learn ASP.NET MVC by building bridges between ASP.NET MVC concepts & ideas and ASP.NET WebForms concepts & ideas which you may already be comfortable working with. Good job Joe. Have a day. :-|

    Read the article

  • You wouldn't drink 9-year-old milk, would you?

    - by Jim Duffy
    This is an absolutely brilliant campaign to urge users that it's time to move on from IE 6. I like how it puts it in terms that everyone can understand and has probably experienced at one time or another. How many times have you opened the milk, taken a sniff, and experienced that visceral reaction that accompanies catching a whiff of milk that has turned to the dark side of the force? I call it Darth Vader milk. :-) Of course I'm assuming that you haven't used IE 6 for a long time now. It is our responsibility as information technology workers to communicate to our friends and family how lame using IE 6 is. Shame them into upgrading if necessary. I don't care how you get through to them but get through. Tell them that only losers use IE 6. Tell them you'll cut them out of your will. Tell them they're banned from your annual BBQ blowout. Tell them that [insert their favorite celebrity's name here] thinks people using IE6 are losers.  :-) Seriously, IE6 sucks and blows at the same time and has got to go for a number of reasons, including the security holes that come with using it. Confidentially, I urge them to upgrade for purely selfish reasons. Because I am the first level of computer support for waaaaaay too many of my family members, I always advocate they use a current browser (IE 8 or Firefox) and anti-virus software (AVG). Call me selfish but I'd rather not waste my time dealing with a virus or malware that could potentially slip through with IE6. Yes, I'm selfish with my time that way. :-) Have a day. :-|

    Read the article

  • The answer to the unfathomable question: what is the meaning of error value 2147943645?

    - by Jim Lahman
    I scheduled a task to perform a Windows backup of a single disk on my server.  When I tested it, the task ran successfully – no problems, no errors; just as I expected.    However, when the task ran as scheduled, it failed with error value 2147943645.  I wondered: was this the answer to life, the universe and everything in it?  No.  That is 42.    After doing some research and reviewing the task configuration, I realized that the task was set to run only when the user is logged on.   So, this was the answer!!  I have to configure the task to run whether the user is logged on or not.  Otherwise, I'll get that nasty error value.
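    For the curious, here is a small C# sketch (my addition, not from the original post) that turns a scheduled-task result code like this into something searchable; these codes are typically HRESULTs that wrap a Win32 error code in the low 16 bits:

        using System;
        using System.ComponentModel;

        class DecodeTaskError
        {
            static void Main()
            {
                uint result = 2147943645;                    // the mysterious value from Task Scheduler
                int win32Code = (int)(result & 0xFFFF);      // low 16 bits hold the Win32 error code

                // Prints: HRESULT 0x800704DD -> Win32 error 1245
                Console.WriteLine("HRESULT 0x{0:X8} -> Win32 error {1}", result, win32Code);

                // Win32Exception looks up the system message for the code; for 1245 it is
                // along the lines of "the user has not logged on to the network", which
                // matches the "run only when user is logged on" setting mentioned above.
                Console.WriteLine(new Win32Exception(win32Code).Message);
            }
        }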

    Read the article

  • Linux, Apache, NetBeans, PHP == Windows, IIS/Cassini, Visual Studio, ASP.Net

    - by Neil Smith
    I've worked out how to get my Linux-based NetBeans PHP development machine to behave much like what happens when you create a new ASP.Net project in Visual Studio. Firstly, create multiple PHP projects in NetBeans, say for example mysite1 and mysite2. Next, edit the apache2/sites-enabled/000-default file and add two VirtualHost sections as below:
        <VirtualHost 127.0.1.1>
            ServerName mysite1.localhost
            DocumentRoot /var/www/mysite1/
        </VirtualHost>
        <VirtualHost 127.0.2.1>
            ServerName mysite2.localhost
            DocumentRoot /var/www/mysite2/
        </VirtualHost>
    For each site you add, pick a different IP address similar to the above, where I use the third octet to increment. Next, edit the /etc/hosts file and add the following two lines:
        127.0.1.1 mysite1.localhost
        127.0.2.1 mysite2.localhost
    Then in NetBeans, go to File->Project Properties, click on 'Run Configuration' and set 'Project Url' to http://mysite1.localhost for the first project and http://mysite2.localhost for the second project. That will give you a PHP development box which develops multiple PHP projects similar to how a Visual Studio Windows-based box handles multiple ASP.Net sites. Hope this helps someone :)

    Read the article

  • Checking for DBNull

    - by Jim Lahman
    Using a table adapter against a SQL Server database table that returns a NULL record, we determine the fields are NULL by comparing against System.DBNull. Looking at the NULL records in SQL Management Studio, then using a table adapter to retrieve a record:
        try
        {
            this.vTrackingTableAdapter.FillByTrkZone(this.dsL1Write.vTracking, iTrkZone);
        }
        catch (Exception ex)
        {
            sLogMessage = String.Format("Error getting coil number from tracking table at {0} - {1}", sTrkName, ex.Message);
            throw new CannotReadTrackingTableException(sLogMessage);
        }
    Looking at the record as it is returned from the table adapter, the ItemArray contains these columns: [0] ChargeCoilNumber, [1] HeadWeldZone, [2] TailWeldZone, [3] ZoneLen, [4] ZoneCoilLen, [5] Confirmed, [6] Validated, [7] EntryWidth, [8] EntryThickness. Since each item in the ItemArray is an object, we can test for null:
        if (dsL1Write.vTracking.Rows[0].ItemArray[0] == System.DBNull.Value)
        {
            throw new NoCoilAtPORException("NULL coil found at tracking zone " + sTrkName);
        }
    If no records were returned by the table adapter:
        if (dsL1Write.vTracking.Rows.Count == 0)
        {
            throw new NoCoilAtPORException("No coils found at tracking zone " + sTrkName);
        }
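    A side note (my addition, not from the original post): when dsL1Write is a strongly typed DataSet, the generated row class also exposes Is<Column>Null() helpers, which can read a little more clearly than comparing ItemArray entries against System.DBNull.Value. A sketch, assuming the typed row and the column names shown above:
        // Hypothetical equivalent using the typed accessors generated for the DataSet.
        var row = dsL1Write.vTracking[0];          // typed row instead of Rows[0].ItemArray
        if (row.IsChargeCoilNumberNull())          // generated helper for the ChargeCoilNumber column
        {
            throw new NoCoilAtPORException("NULL coil found at tracking zone " + sTrkName);
        }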

    Read the article

  • Framework Folders and Duplicate File Names

    - by Kevin Smith
    I have been working with Framework folders a little bit in the past few days and found one unexpected behavior that is different from Contribution Folders (Folders_g). If you try and check a file into a Framework Folder that already exists in the folder it will allow it and rename the file for you. In Folders_g this would have generated an error and prevented you from checking in the file. A quick check of the Framework Folder configuration settings in the Application Administrator’s Guide for Content Server does not show a configuration parameter to control this. I'm still thinking about this and not sure if I like this new behavior or not. I guess from a user perspective this more closely aligns Framework Folders to how Windows handle duplicate file names, but if you are migrating from Folders_g and expect a duplicate file name to be rejected, this might cause you some problems.

    Read the article

  • In the Aggregate: How Will We Maintain Legacy Systems? [closed]

    - by Jim G.
    NEW YORK - With a blast that made skyscrapers tremble, an 83-year-old steam pipe sent a powerful message that the miles of tubes, wires and iron beneath New York and other U.S. cities are getting older and could become dangerously unstable. July 2007 Story About a Burst Steam Pipe in Manhattan We've heard about software rot and technical debt. And we've heard from the likes of: "Uncle Bob" Martin - Who warned us about "the consequences of making a mess". Michael C. Feathers - Who gave us guidance for 'Working Effectively With Legacy Code'. So certainly the software engineering community is aware of these issues. But I feel like our aggregate society does not appreciate how these issues can plague working systems and applications. As Steve McConnell notes: ...Unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it. If this is true, and I believe that it is, then I fear that governments and businesses may defer regular maintenance and fortification against hackers until it is too late. [Much like NYC and the steam pipes.] My Question: Is there a way that we can avoid the software equivalent of NYC and the steam pipes?

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >