Search Results

Search found 28747 results on 1150 pages for 'switch case'.


  • Oracle @ AIIM Conference

    - by [email protected]
    Oracle will be at the AIIM Conference and Exposition next week in Philadelphia. On the opening morning, Tuesday April 20th at 8:45 AM, Robert Shimp, Group Vice President, Global Technology Business Unit at Oracle Corporation, will moderate an executive keynote panel, leading four Oracle customer executives through a lively discussion of how innovative organizations are driving the integration of content management with their core business processes. Our panelists are:

    - CINDY BIXLER, CIO, Embry Riddle Aeronautical University
    - TOM SHOWALTER, Managing Director, JP Morgan Chase
    - IRFAN MOTIWALA, Vice President, Moody's Investors Service
    - MONICA CROCKER, CRM, PMP, Corporate Records Manager, Land O'Lakes

    For more information on our panelists, click here.

    Oracle will be in booth #2113 at the AIIM Expo. Come by and enter the daily raffle to win a Netbook! Oracle and Oracle partners will demonstrate solutions that increase productivity, reduce costs and ensure compliance for business processes such as accounts payable, human resource onboarding, marketing campaigns, sales management, large-scale diagrams for facilities and manufacturing, case management, and others. Oracle products including Oracle Universal Content Management, Oracle Imaging and Process Management, Oracle Universal Records Management, Oracle WebCenter, Oracle AutoVue, and Oracle Secure Enterprise Search will be demonstrated in the booth. Oracle will host a private event at The Field House Sports Bar - see your Oracle representative for more details. Oracle customers can meet in private meeting rooms with their Oracle representatives.

    Key sessions: Besides the opening morning keynote panel, Oracle will have a number of other sessions at the conference:

    - G08 - A Passage to Improving Healthcare: Enhancing EMR with Electronic Records, Wednesday April 21st, 2:25 PM - 3:10 PM. Kristina Parma of Oracle partner ImageSource will deliver this session, along with Pam Doyle of Fujitsu and Nancy Gladish of Swedish Medical Center. Kristina will also be in the Oracle booth to talk about this solution.
    - Harnessing SharePoint Content for Enterprise Processes in PeopleSoft, Siebel, E-Business Suite and JD Edwards, delivered by Ajay Gandhi of Oracle, Tuesday April 20th at 4:05 PM.
    - Bringing Content Management to Your AP, HR, Sales and Marketing Processes, Tuesday April 20th, 1:15 PM - 1:45 PM, Application Showcase Theater (on the AIIM Expo floor, booth 1549).
    - Embed and Edit Content Anywhere, Wednesday April 21st, 12:30 PM - 1:00 PM, Application Showcase Theater (on the AIIM Expo floor, booth 1549).

    For more information, see the AIIM Expo page on the Oracle website.


  • htaccess/cPanel 301 redirects not working for add-on domain

    - by Clemens
    I've already looked at many samples and tutorials on how to set up these 301 redirects on Apache, and I can't figure out why only some of them work:

        Options +FollowSymlinks
        RewriteEngine on

        #doesn't work:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^page-still-exists.htm$ "http://www.new.com/new-target-page.htm" [R=301,L]

        #works:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^page-does-no-longer-exist.htm$ "http://www.new.com/" [R=301,L]

        #works:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^folder/otherpage.htm$ "http://www.new.com/" [R=301,L]

        #works:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^/?$ "http://www.new.com/" [R=301,L]

        #doesn't work:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^somepage.htm$ "http://www.old.com/some-page.htm" [R=301,L]

    I have no idea why some of these rules work and others don't. The only difference I can see is that in the working cases the old page no longer exists on the old domain. But whenever I want to redirect a still-existing page from the old domain to the new domain, the page on the old domain is still served. Any input is much appreciated, because this is slowly driving me crazy :)

    EDIT: I added the complete htaccess file.

    EDIT 2: So I removed almost all redirects, and currently my htaccess looks like this:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^old\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.old\.com$
        RewriteRule ^(.*)$ "http\:\/\/www\.new\.com\/$1" [R=301,L]

    The only redirect that works is the simple one from old.com to new.com. A redirect like old.com/page.htm to new.com, or even to new.com/page.htm, is not working. And actually I don't know where this redirect is really coming from... Can a 301 really be so complicated?
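
    For reference, a minimal standalone version of the failing rule might look like this (a sketch, assuming mod_rewrite is enabled and the .htaccess file sits in the document root of old.com; the dots in the host pattern are escaped so they match literally):

        Options +FollowSymlinks
        RewriteEngine on

        # Match old.com and www.old.com, then send one specific page to its new home
        RewriteCond %{HTTP_HOST} ^(www\.)?old\.com$ [NC]
        RewriteRule ^page-still-exists\.htm$ http://www.new.com/new-target-page.htm [R=301,L]

    Two things worth keeping in mind with rules like these: a RewriteCond applies only to the single RewriteRule that immediately follows it, and browsers cache 301 responses aggressively, so earlier experiments are worth ruling out by retesting with a fresh client.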


  • Allen for Umbraco - Upload photos from your iPhone, iPad and iPod Touch

    - by Vizioz Limited
    At last year's UK Umbraco Festival we gave a demo of our alpha version of Allen for Umbraco. At that stage the application only worked on an iPhone and was a very quick prototype to see what people thought.

    When we returned to our office the next day, we decided that if we were going to release Allen for Umbraco into the wild we really should start again from scratch, for two main reasons. First, to ensure it was a truly universal application (i.e. it can be installed on an iPhone, iPad or iPod) which looks and behaves differently depending on the device. Second, we really wanted the application to be the foundation of more than just image uploading for Umbraco; for this to be the case we ensured the new version was built following proven design patterns and with lots of unit tests, so that we can easily extend it.

    We have lots of plans for future versions of Allen for Umbraco, including adding iCloud support to keep all your settings in sync across your multiple Apple devices. We are also working on support for Umbraco 5, which should be released soon.

    When you download the app and set up your site, make sure you have a look at the image resizing settings. By default we have set these to resize your images to 512 pixels wide, but you can choose from a variety of different resizing methods (by height, by width, fit within a frame, or the full-size image).

    Also, by default, when you select a photo you will see that the image is named with the date and time stamp of when the photograph was taken (or the current date and time if the original date is not stored in your image). If you click on this name you can edit the name of your photo before it is uploaded.

    Finally, we are really keen to get your feedback, so within the app's help section you will find a way to submit suggestions and, if needed, you can send us support emails from within the app :)

    We hope you enjoy the first version of Allen for Umbraco and we look forward to bringing you lots of exciting additional functionality in the future!


  • The Incremental Architect's Napkin - #3 - Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx

    The easier something is to measure, the more likely it is to be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no wonder our software looks the way it does: it has all the traits whose conformance with requirements can easily be measured, and it lacks the traits which cannot easily be measured.

    Evolvability (or Changeability) is such a trait. Whether an operation is correct, or fast enough, can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity, or test coverage. But there is no threshold value signalling "evolvability too low"; also, Evolvability is hardly tangible for the customer.

    Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more: without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built.

    Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more closely. Instead we need to establish practices that keep it high (enough) at all times.

    Chefs have known this for a long time. That's why everybody in a restaurant kitchen constantly sees to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess.

    What we need to keep Evolvability in focus and high is... to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance-checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment:

    Clear expectations

    Absolute estimates ("This will take X days to complete.") are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature, and then there are pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become, because we understand better what the requirements really are and what the solution should look like. But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer. But if push comes to shove, I can block out all distractions and interruptions for a day or possibly two.

    That's why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take to implement a user story or bug fix, you can say, "It'll be fixed by noon.", or you can say, "I can manage to implement it by tonight before I leave.", or you can say, "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34h from now), you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue.

    So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so that Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

    Trust through reliability

    Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty using the methods of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need; we just have hypotheses. That's why we cannot reliably deliver on preposterous demands, and so trust is out of the window in no time.

    If we switch to delivering in short cycles, though, we can regain trust, because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt, promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batch sizes of three different sizes.

    Fast feedback

    What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again.

    Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises.

    Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues, because as long as work started is not released (accepted), it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process.

    Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance, the developer(s) can start working on the next issue.

    Flexibility

    As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important, and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D.

    With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost, because what got accepted is of value. It provides incremental value to the customer/user, or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.

    Premature completion

    Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible; but I think it's nothing I can recommend to strive for. Rather I'd say: don't try this at home. It might hurt you one way or the other.

    However, what we can do is allow customers/management to stop work on features at any moment. With Spinning, every 32 hours a feature can be declared as finished - even though it might not be completed according to the initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases).

    Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how is a Product Owner to know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4-32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C might be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% of the money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need?

    Pull on practices

    So far, in addition to more trust, more flexibility, and less money spent, Spinning led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect: they exert pull on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then, (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger.

    With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32-hour promises. Spinning thus pulls developers toward adopting principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that, because first the Product Owner and then management will notice an increasing difficulty to deliver value within 32 hours.

    There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worth feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!"s, count the "No way!" utterances.

    In closing

    For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development, but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment; it's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales.

    Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not on what might be possible in the future. But I digress...

    In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".


  • It's an Oracle Linux Wrap: Oracle OpenWorld 2012

    - by Zeynep Koch
    Are you still recovering from an amazing Oracle OpenWorld experience? 50,000 attendees had access to thousands of sessions, demos, hands-on labs, networking opportunities, music concerts, and loads of fun. For the Oracle Linux team, this was a week full of insightful sessions and customer interactions. In case you were unable to attend Oracle OpenWorld or missed some of the content presented, here's a compilation of key session presentations, keynotes, and videos. Go to the Oracle OpenWorld content catalog to access all the session presentations:

    - Oracle OpenWorld Keynote by Edward Screven
    - Oracle's commitment to Open Source by Edward Screven
    - Oracle Linux Interview with Wim Coekaerts
    - Making the most of the mainline kernel by Wim Coekaerts
    - Why DTrace and Ksplice have made Oracle Linux 6 popular by W. Coekaerts
    - How the partnership between Oracle Linux and Oracle Partners benefits sysadmins by Michele Resta
    - Hugepages=Huge Performance on Oracle Linux by Greg Marsden
    - Benefits of Ksplice in your Linux Environment by Tim Hill
    - Oracle Linux, Ksplice and MySQL by Lenz Grimmer

    We also hosted a successful Oracle Linux Pavilion with 11 of our key partners - Beyond Trust, Centrify, Data Intensity, Fujitsu, HP, LSI, Mellanox, Micro Focus, NetApp, QLogic and Teleran - who showcased their solutions for Oracle Linux and Oracle VM. Here are some videos from the Oracle Linux Pavilion:

    - Centrify covers the Oracle Linux solution they offer at the Oracle Linux Pavilion
    - Mellanox talks about their solution at the Oracle Linux Pavilion
    - Eric Pan covers Micro Focus products at the Oracle Linux Pavilion

    There's also a collection of the keynotes and executive sessions posted as on-demand videos here. We hope you find this information useful and look forward to seeing you at Oracle OpenWorld 2013!

    ORACLE LINUX TEAM


  • MySQL, replacing &nbsp; with... nothing (delete those, please!)

    - by javipas
    I'm trying to purge my WordPress content of "false" carriage returns (CR). These were caused by a migration of my content, which now presents, from time to time, a &nbsp; code that makes the web rendering engine paint a CR where I would like there to be nothing. The paragraphs seem to have a double CR because of this, and look too far apart. I'd like to be able to make a MySQL query in order to get rid of those strings, but at the moment I haven't found the key. What I've tried is:

        UPDATE wp_posts set post_content = replace (post_content,'&nbsp;',' ');

    But I get <p> </p> where the &nbsp; strings were before. This doesn't seem to be the answer at all. Could it have to do with the ampersand, and in that case, should I use something like &amp;nbsp; or something similar?
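
    A minimal sketch of the delete-rather-than-replace variant, assuming the literal entity text is what is stored in post_content (back up the table before running anything like this):

        -- Remove the entity outright instead of swapping in a space
        UPDATE wp_posts SET post_content = REPLACE(post_content, '&nbsp;', '');

        -- If the migration double-encoded the ampersand, target that form too
        UPDATE wp_posts SET post_content = REPLACE(post_content, '&amp;nbsp;', '');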


  • Component Development within SOA

    How do the concepts of component development work within SOA? Let's first break this question down by defining what component development is. Component development is the process of implementing specific functionality in the form of small units of compiled code that can be reused across multiple applications or product families. Typically, components are integrated with other components, forming composite components. In general, most interaction between components is done through interfaces to promote loose coupling. Loose coupling refers to interconnecting the components in a system so that their dependencies are based on contracts defined by interfaces.

    A real-life example of this can be experienced while using Legos to build a structure. If we consider each Lego block a component, then when two or more Legos are connected they form a composite component, because the structure is made up of multiple components. It is important to note that composite components can be made from standard components and from other composite components. Eventually, as various components and composite components become interconnected, a structure begins to form in the shape of an application - or, in the case of Legos, in the form of a Lego structure. Software components can loosely be defined as small units of related, implemented functionality that can communicate with other components or may have dependencies on other components.

    Based on the definitions provided above, it is my personal opinion that SOA works well with the concepts of component development. The SOA architectural style focuses on creating loosely coupled services. Each service, much like a component, offers related functionality that can be accessed by various requesting clients. In addition, services can be derived just like components, in that services can be built on other services to form composite services. In summary, the concepts of component development can work within SOA, based on the example above.
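
    As an illustrative sketch (my example, not from the original post), interface-based loose coupling between components might look like this in Java; all names here are hypothetical:

        // The contract: consumers depend on this interface, never on a concrete class.
        interface PriceService {
            double quote(String sku);
        }

        // One concrete component fulfilling the contract.
        class CatalogPriceService implements PriceService {
            public double quote(String sku) {
                return 9.99; // stubbed lookup, enough for the sketch
            }
        }

        // A composite component: wired against the interface, so any implementation
        // (a local class, a remote service proxy) can be swapped in without changes.
        class Checkout {
            private final PriceService prices;

            Checkout(PriceService prices) {
                this.prices = prices;
            }

            double total(String sku, int quantity) {
                return prices.quote(sku) * quantity;
            }
        }

    Replacing CatalogPriceService with a proxy that calls a remote service leaves Checkout untouched, which is the same property SOA relies on at the service level.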


  • ArchBeat Top 20 for March 11-17, 2012

    - by Bob Rhubart
    The 20 most-clicked links as shared via my social networks for the week of March 11-17, 2012:

    - Start Small, Grow Fast: SOA Best Practices article by @biemond, @rluttikhuizen, @demed
    - Packt Publishing offers discounts of up to 30% on 60+ Oracle titles
    - IT Strategies from Oracle; Three Recipes for Oracle Service Bus 11g; Stir Up Some SOA
    - Oracle Cloud Conference: dates and locations worldwide
    - Applications Architecture | Roy Hunter and Brian Rasmussen
    - How Strategic is IT? - Assessing Strategic Value | Al Kiessel
    - White Paper: An Architect's Guide to Big Data | Dr. Helen Sun, Peter Heller
    - Getting Started with Oracle Unbreakable Enterprise Kernel Release 2 | Lenz Grimmer
    - Great Solaris 10 features paving the way to Solaris 11 | Karoly Vegh
    - Who the Linux Developer Met on His Way to St. Ives | Rick Ramsey
    - Peripheral Responsibilities Required for Large IDM Build Outs (Including Fusion Apps) | Brian Eidelman
    - IOUG Real World Performance Tour, w/ Tom Kyte, Andrew Holdsworth, Graham Wood
    - Configure IPoIB on Solaris 10 branded zone | Leo Yuen
    - Oracle OpenWorld 2012 Call for Papers
    - Use Case Assumptions versus Pre-Conditions | Dave Burke
    - Handling Custom XML documents in Oracle B2B | @Biemond
    - Building a Coherence Cluster with Multiple Application Servers | Rene van Wijk
    - XMLA vs BAPI | Sunil S. Ranka
    - The Java EE 6 Example - Running Galleria on WebLogic 12 - Part 3 | @MyFear
    - Public Sector Architecture | @jeremy_forman, @hamzajahangir

    Thought for the Day: "The goal of Computer Science is to build something that will last at least until we've finished building it." - Anonymous


  • Now Available: Visual Studio 2010 Release Candidate Virtual Machines with Sample Data and Hands-on-Labs

    - by John Alexander
    From a message from Brian Keller: "Back in December we posted a set of virtual machines pre-configured with Visual Studio 2010 Beta 2, Visual Studio Team Foundation Server 2010 Beta 2, and 7 hands-on-labs. I am pleased to announce that today we have shipped an updated virtual machine using the Visual Studio 2010 Release Candidate bits, a brand new sample application, and 9 hands-on-labs. This VM is customer-ready and includes everything you need to learn and/or deliver demonstrations of many of my favorite application lifecycle management (ALM) capabilities in Visual Studio 2010. This VM is available in the virtualization platform of your choice (Hyper-V, Virtual PC 2007 SP1, and Windows [7] Virtual PC). Hyper-V is highly recommended because of the performance benefits and snapshotting capabilities.

    Tailspin Toys: The sample application we are using in this virtual machine is a simple ASP.NET MVC 2 storefront called Tailspin Toys. Tailspin Toys sells model airplanes and relies on the application lifecycle management capabilities of Visual Studio 2010 to help them build, test, and maintain their storefront. Major kudos go to Dan Massey for building out this great application for us.

    Hands-on-Labs / Demo Scripts: The 9 hands-on-labs / demo scripts which accompany this virtual machine cover several of the core capabilities of conducting application lifecycle management with Visual Studio 2010. Each document can be used by an individual in a hands-on-lab capacity, to learn how to perform a given set of tasks, or used by a presenter to deliver a demonstration or classroom-style training. Unlike the beta 2 release, 100% of these labs target Tailspin Toys to help ensure a consistent storytelling experience.

    Software quality:
    - Authoring and Running Manual Tests using Microsoft Test Manager 2010
    - Introduction to Test Case Management with Microsoft Test Manager 2010
    - Introduction to Coded UI Tests with Visual Studio 2010 Ultimate
    - Debugging with IntelliTrace using Visual Studio 2010 Ultimate

    Software architecture:
    - Code Discovery using the architecture tools in Visual Studio 2010 Ultimate
    - Understanding Class Coupling with Visual Studio 2010 Ultimate
    - Using the Architecture Explorer in Visual Studio 2010 Ultimate to Analyze Your Code

    Software Configuration Management:
    - Planning your Projects with Team Foundation Server 2010
    - Branching and Merging Visualization with Team Foundation Server 2010"

    Check out Brian's post for more info, including download instructions...


  • Failed to load viewstate. The control tree into which viewstate is being loaded... etc

    - by alaa9jo
    Two days ago, a colleague of mine tried to publish an ASP.NET website (built in VS2008 using framework 3.5) to our server. He configured everything in IIS (he made sure that the selected ASP.NET version was 2.0) and launched the website. At first it was working great, but when he clicked on a specific treeview... BOOM:

    "Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request."

    On that page there were these controls: a TreeView and a PlaceHolder. When the user selects any node, its controls are created dynamically inside that placeholder. The first time it works fine, but when he/she selects another node the issue appears.

    He called me to help him with this issue. For me this was the first time seeing such an issue. I scratched my head, then decided to eliminate the possible causes one by one. On the development machine it worked perfectly; he published the website to the local IIS and again it worked perfectly. I took a copy of the website and published it on my laptop, with no issues at all - so it was not an issue in the code.

    So there was something missing or wrong on our server [it runs Windows Server 2003]. We went to the server and checked the web.config and the configuration in IIS... nothing wrong so far, so I decided to check whether framework 3.5 was installed or not, and the answer: it wasn't installed.

    Of course he had assumed it was installed, and there was nothing to tell that it wasn't from the "ASP.NET version" setting in IIS, because frameworks 3.0 and 3.5 will not be listed there [2.0 will be listed there instead]. The only way to check whether it is installed is to look for the framework in this path: [WINDOWS Folder]\Microsoft.NET\Framework, or to check whether it is listed in Add or Remove Programs.

    The obvious solution in his case: we installed Framework 3.5 SP1 on our server, restarted the machine, and it worked!

    If anyone has faced the same issue and solved it with the same solution or a different one, please post it here to share the experience.


  • How to securely generate memorable passwords?

    - by Tim
    Whenever I need new passwords I use some tools to generate them, preferably memorable passwords, but I've been wondering how secure this actually is. Using the xkcd random number generator is probably pretty bad, and cat /dev/random is probably pretty good, but generating memorable passwords seems a bit more tricky. Whenever a program generates a memorable password, it only uses a subset of the total password space available, and it is not clear to me how big this subset is. Of course a long password should help in this case, but if the 'memorable' part of the program is too predictable, your passwords are not very good in the end.

    TL;DR: how secure are memorable password generators, given the fact that 'memorable' passwords are a subset of the total password space?

    Some tools I know of:

    - pwgen - seems OK, but the passwords are not too memorable
    - Mac Password Assistant - generates memorable passwords, but it is unclear to me how it works.
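
    As a back-of-the-envelope sketch of the underlying question (my numbers, not the asker's): a generator's strength is the base-2 log of the space it actually draws from, so a memorable scheme needs extra length to match a fully random one:

        import math

        # Diceware-style passphrase: 5 words drawn uniformly from a 7776-word list
        print(math.log2(7776 ** 5))   # ~64.6 bits

        # Fully random 10-character password over 94 printable ASCII characters
        print(math.log2(94 ** 10))    # ~65.5 bits

    The two are roughly equivalent - but only if the words are chosen uniformly at random. A generator that prefers "pronounceable" patterns draws from a smaller space that is much harder to quantify, which is exactly the concern raised above.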


  • How to cluster two IIS servers for failover?

    - by Ram Gopal
    We have IIS servers running on 2 machines, hosting a few web services which provide integration services to an old document management system, Word/Excel-related services, etc. We need to cluster/load-balance these 2 IIS servers in order to achieve failover, i.e. if one IIS server is down, the other one should be able to handle the requests. The reverse proxy used in the DMZ is also IIS 7.5. Our overall business application is in fact a J2EE one, and we have successfully deployed it on a WebLogic cluster installed on the same two machines, load-balanced from the same above-mentioned IIS reverse proxy in the DMZ. But we do not know how to achieve the same in the case of IIS.


  • Can a single ESXi host make use of two separate iSCSI boxes?

    - by user71061
    Hi! I have a problem using multiple iSCSI targets with a single ESXi host (in my case they are two FreeNAS hosts, but I suspect this problem will occur with any two iSCSI boxes of the same type/model). If I configure two FreeNAS hosts as iSCSI targets (say iSCSI A and iSCSI B), then I can use both of them with my ESXi host, but only one at a time (i.e. only iSCSI A or only iSCSI B, but not both simultaneously). If I try to add the second iSCSI target to my iSCSI adapter (of course it has a unique IQN name), then in the details pane of this adapter (it is the iSCSI software adapter), I see that the total number of paths has increased accordingly, but not the total number of devices (so I can't use it as another storage). What should I do? Is it impossible to attach two iSCSI targets to a single adapter? I'm using the free version of ESXi 4.1. Maybe it is a limitation of the free version? Thanks in advance for any suggestion.


  • Upgrading from 2005 to R2

    - by DavidWimbush
    We're about to take the plunge and upgrade our servers from SQL 2005 to SQL 2008 R2. Real-world accounts of people upgrading to R2 are a bit hard to find, so I thought it might be useful to blog what happens. (I don't count marketing 'case studies' that just say stuff like "The process was effortless and the upgrade will pay for itself by the end of the week.") We're using the database engine, Analysis Services and Reporting Services, so upgrading by a major version number was looking a bit daunting. I wasn't expecting much trouble on the engine side of things but, as most of the action in 2008 and R2 appears to have been on the Reporting and BI front, I expected to have quite a bit of work to do. But our testing so far has been one nice surprise after another:

    - The 2005 backups restore cleanly onto R2.
    - R2's BI Studio upgraded the Reporting and Analysis Services solutions without any issues.
    - The cubes all deployed and processed just fine.
    - R2's BI Studio interacts fine with TFS 2008 version control.

    I'll blog some more as things develop.


  • Where to find other versions of Opera browser as deb packages?

    - by cipricus
    I used Opera mainly for the Unite feature, which is now being abandoned; it is missing in v. 12. Some say its features will re-emerge in future extensions etc. Until then, Unite is still accessible in v. 11. Where do I get the v. 11 deb?

    P.S. In fact it seems that Opera Unite (at least in its older form) is dying while I am editing this question. Access to Opera Unite applications from within Opera Unite is poor or absent. This issue is obscure to me for now (31.08.2012) because yesterday I installed v12 on Windows (with Opera Unite and the basic applications - file sharing and media player - already installed) and it is still working (the server is working). The v12 Ubuntu version came today without Unite, and after installing v11 (which has Unite) I could not get the applications (file sharing, etc). But they are still available: here, and after downloading these files, which have the .ua extension, they can be installed by opening them with Opera (v. 11).

    But as Opera Unite is no longer supported, it is possible that the server that provides the file sharing etc. will soon be inaccessible. Even if that is the case, the question should maybe not be closed, as it has a general usefulness independent of the Unite issue.


  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says:

    "PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used."

    Love it or hate it, PHP re-casts variables on-the-fly. So, the following code is valid:

        $var = "10";
        $value = 10 + $var;
        var_dump($value); // int(20)

    PHP also allows you to explicitly cast a variable, like so:

        $var = "10";
        $value = 10 + $var;
        $value = (string)$value;
        var_dump($value); // string(2) "20"

    That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this.

    I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of—and fully understand the usefulness of—type hinting in function parameters.

    The problem I have with type casting is explained by the above quote. If PHP can swap types at-will, it can do so even after you force-cast a type; and it can do so on-the-fly when you need a certain type in an operation. That makes the following valid:

        $var = "10";
        $value = (int)$var;
        $value = $value . ' TaDa!';
        var_dump($value); // string(8) "10 TaDa!"

    So what's the point? Can anyone show me a practical application or example of type casting—one that would fail if type casting were not involved?

    I ask this here instead of SO because I figure practicality is too subjective.

    Edit in response to Chris' comment: Take this theoretical example of a world where user-defined type casting makes sense in PHP:

    1. You force-cast variable $foo as int -- (int)$foo.
    2. You attempt to store a string value in the variable $foo.
    3. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user-defined type casting exists!

    The fact that PHP will switch things around as needed makes the point of user-defined type casting vague. For example, the following two code samples are equivalent:

        // example 1
        $foo = 0;
        $foo = (string)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

        // example 2
        $foo = 0;
        $foo = (int)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;
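
    One practical case often cited (my illustration, not part of the original question) is forcing a type where PHP's implicit juggling would otherwise preserve dangerous content, e.g. sanitizing numeric input:

        <?php
        // Hypothetical example: the cast reduces a malicious string to its leading
        // integer, so "42; DROP TABLE posts" becomes 42 before it reaches the SQL.
        $id = (int)$_GET['id'];
        $query = "SELECT * FROM posts WHERE id = $id";

    (Parameterized queries are the proper fix for SQL, of course; the point is only that here the explicit cast, not implicit juggling, does the work.)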


  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of automating the "ubuntu-12.04.1-server-amd64" OS installation on it. There are two HDDs for OS installation purposes and there is a RAID1 relation between them; this setup has been done through the BIOS. The kickstart configuration file looks like this:

        #Generated by Kickstart Configurator
        #platform=AMD64 or Intel EM64T

        #System language
        lang en_US
        #Language modules to install
        langsupport en_US
        #System keyboard
        keyboard us
        #System mouse
        mouse
        #System timezone
        timezone Asia/Dili
        #Root password
        rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
        #Initial user
        user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
        #Reboot after installation
        reboot
        #Use text mode install
        text
        #Install OS instead of upgrade
        install
        #Use Web installation
        url --url my_repo_location
        #System bootloader configuration
        bootloader --location=mbr
        #Clear the Master Boot Record
        zerombr yes
        #Partition clearing information
        clearpart --all --initlabel
        #Disk partitioning information
        part /boot --fstype ext4 --size 100 --ondisk sda
        part / --fstype ext4 --size 10000 --ondisk sda
        part /var --fstype ext4 --size 10000 --ondisk sda
        part swap --size 1024 --ondisk sdb
        #System authorization infomation
        auth --useshadow --enablemd5
        #Network information
        network --bootproto=dhcp --device=eth0
        #Firewall configuration
        firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
        #X Window System configuration information
        xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME

    But I am getting the error below:

        No root file system is defined

    Please suggest on this. Do we need to make any modification to the kickstart configuration file? Any help in this regard will be very helpful for us. The automated Ubuntu OS installation is successful in a virtual machine (VM) with the above ks.cfg (kickstart configuration file), but fails in the case of the physical machine. Please suggest on this and, if possible, provide a new ks.cfg file to resolve the above problem.

    Thanks & Regards,
    Rajesh Prasad


  • Why can I not map a dynamic texture in D3D?

    - by sebf
    I am trying to map a Texture2D resource in DirectX 11 via SharpDX. The resource is declared as a ShaderResource, with Dynamic usage and the 'Write' CPU flag specified. My call, however, fails with a generic exception from SharpDX:

        _Parent.Context.MapSubresource(_Resource, 0, SharpDX.Direct3D11.MapMode.Write, SharpDX.Direct3D11.MapFlags.None, out stream);

    I see from this question that it is supported. The MSDN docs and this other question hint that instead of using Context.MapSubresource() I should be using Texture2D.Map(); however, the Direct3D 11 Texture2D class does not define Map() (though the D3D10 equivalent does). If I call the above with MapMode.WriteDiscard, the call succeeds, but in this case the previous content of the texture is lost, which is no good when I only want to update a section of it.

    Has the Map() method been removed in Direct3D 11, or am I looking in the wrong place? Is the MapSubresource() method unsuitable, or am I using it wrong?

    EDIT: I declared my resource as Dynamic with CPU write flags - not Default as I originally wrote - sorry for the fairly huge 'typo' that changes the entire question!
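
    For comparison, the native D3D11 pattern that SharpDX wraps looks like the sketch below (C++; assumes a device context and a dynamic, CPU-writable texture already exist). Note that Direct3D 11 documents dynamic resources as mappable with WRITE_DISCARD (and WRITE_NO_OVERWRITE for buffers) rather than plain WRITE, which may be exactly why the plain-Write call throws:

        #include <d3d11.h>

        // Assumes: ID3D11DeviceContext* context; ID3D11Texture2D* tex created with
        // D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        HRESULT hr = context->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        if (SUCCEEDED(hr)) {
            // Write whole rows, stepping by mapped.RowPitch (it can exceed width * bpp).
            context->Unmap(tex, 0);
        }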


  • Ubuntu 11.10 - can't adjust brightness on my laptop

    - by Danny
    Using every method possible, I'm unable to change my laptop brightness. It's stuck on super max brightness. Using the slider in the "Screen" window in the control panel doesn't do anything, and using the Fn keys doesn't do anything either.

    Some info about my system:

    - Laptop is an MSI VR420
    - Running Ubuntu 11.10
    - Video card is an integrated Intel card
    - Used to work when I ran Ubuntu 10.10 and earlier versions (not sure if it worked out of the box or if I inadvertently fixed it in previous versions while installing lots of other packages)
    - The brightness slider in the "Screen" window doesn't do anything; "dim when on battery power" doesn't do anything
    - When I use the Fn+F4/F5 keys to adjust brightness there is a popup showing that it is receiving the input, but I can only go between 0 brightness and max brightness (that is what the popup shows; the actual brightness does not change)
    - When attempting to change the brightness with Fn+F4/F5, my dmesg log reports "ACPI: Failed to switch the brightness"

    Here are some outputs from terminal commands; not sure if any of this is useful or not:

    - lspci - http://pastebin.com/EimZSGs3
    - "ls /sys/class/backlight/*/brightness" outputs "/sys/class/backlight/intel_backlight/brightness"
    - "cat /sys/class/backlight/intel_backlight/brightness" = 0 (when I use Fn+F4/F5 this changes between 0 and 1, but the actual brightness doesn't change)
    - "cat /sys/class/backlight/intel_backlight/max_brightness" = 1
    - "lsmod | grep ^i915" = i915 505108 3

    Here is the list of things I've tried through searching Google:

    - Edit /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT: acpi_osi=Linux, acpi_backlight=vendor, nomodeset (as well as several different combinations of these settings being on or off)
    - Edit /etc/X11/xorg.conf (file doesn't exist on my system)
    - Edit /proc/acpi/video/VGA/LCD/brightness (file doesn't exist)
    - sudo setpci -s 00:02.0 F4.B=XX (does nothing)
    - xbacklight -set XX (does nothing)

    I've tried about everything with no luck... The only thing I haven't tried is adding the PPA that someone suggested here: Unable to change brightness in a Lenovo laptop. However, according to the notes on the PPA, all of its changes are now actually part of 11.10, and the PPA is only for people with 11.04.

    Does anyone have any ideas for me?

    Edit: By setting acpi=off in my /etc/default/grub file I was able to get my Fn+F4/F5 keys to work; also, "dim when on battery power" now makes my laptop dim when on battery power. The dimness slider however still doesn't do anything, and xbacklight still doesn't do anything, nor do any of the other methods.

    The thing I don't get is why setting acpi=off makes my Fn+F4/F5 keys work. Isn't ACPI supposed to be what enables the backlight to be dimmed? Does anyone know what Ubuntu decides to do behind the scenes with acpi=off? Does anyone know what features, if any, I might be losing with it off?
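
    One low-level check worth adding to the list above (a sketch, not from the original question): drive the kernel's sysfs interface directly, which isolates whether the hardware responds at all, independent of the desktop stack:

        # Inspect the advertised range (on this machine max_brightness is only 1)
        cat /sys/class/backlight/intel_backlight/max_brightness

        # Write a value directly as root; no visible change means the problem
        # sits below the desktop environment
        echo 0 | sudo tee /sys/class/backlight/intel_backlight/brightness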


  • Platform Builder: PBWorkspaces CESysgen.bat Not Used?

    - by Bruce Eitman
    One of the things that I like about Windows CE is that I am always learning new things, but in this case it is a bit disturbing. We were working with Multi UI (MUI) this week and discovered some problems with Windows CE 5.0 and Chinese language support - problems that don't exist in CE 6.0. The problem was that in the batch files in Public\CEBASE\oak\misc, specifically weceshellfe.bat, some of the shell components needed are only included if certain locales are selected. English is not one of them; I suppose this is because someone didn't think that we would ever use them with English - doh.

    No problem, just work around this in PBWorkspaces\<workspace>\WINCE500\<BPS>\cesysgen.bat. But that didn't work. After a lot of trial and error, what I determined is that this cesysgen.bat isn't actually used by Platform Builder any more. Instead, in that same folder is a <workspace>.bat file that is called by Public\CEBASE\oak\misc\cesysgen.bat.

    That leads to some new problems, but solvable ones. What I really wanted was to apply a fix after the batch files in CEBASE run, but <workspace>.bat runs before the other batch files in CEBASE. So what I finally came up with was to add the fix to the PASS2 handling in <workspace>.bat.

    Copyright © 2010 – Bruce Eitman All Rights Reserved


  • Immutable hard links on ext3/4?

    - by shovas
    In my research on file versioning at the fs level, snapshotting, and related ideas, I took a look at hard links: exactly what they are and how they behave. Using rsync you can get a pretty slick poor man's snapshotting system up and running on file systems that don't natively support it. But can you get immutable hard links on ext3/4, or on any other file system for that matter? My definition of an immutable hard link is: a hard link which, when changed in one location, becomes a regular copy and is no longer a hard link. I would like this because it would allow snapshotting to link against the source data itself instead of a copy of the data (as in the rsync snapshotting technique). I have gigabytes of data that can't be duplicated due to space restrictions, but I have enough room if I can intelligently snapshot only individual changed files, with the rest linked to the source rather than copied. Given all that, is there some other technique, feature or technology I'm really looking for?
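
    For context, the rsync technique referred to above is usually built on --link-dest, which hard-links unchanged files against the previous snapshot (a sketch with hypothetical paths):

        # snap.1 looks like a complete copy, but unchanged files are hard links
        # into snap.0; only changed files consume new space.
        rsync -a --delete --link-dest=/backups/snap.0 /data/ /backups/snap.1/

    What the question asks for is effectively the inverse: copy-on-write semantics for the link itself. Classic hard links on ext3/4 can't provide that, since all links are simply equal names for a single inode.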


  • Could not load type System.Configuration.NameValueSectionHandler

    If you upgrade older .NET sites from 1.x to 2.x or greater, you may encounter this error when you have configuration settings that look like this:

        <section name="CacheSettings" type="System.Configuration.NameValueFileSectionHandler, System"/>

    Once you try to run this on an upgraded appdomain, you may encounter this error:

        An error occurred creating the configuration section handler for CacheSettings: Could not load type 'System.Configuration.NameValueSectionHandler' from assembly 'System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.

    Microsoft moved a bunch of the configuration-related classes into a separate assembly, System.Configuration, and created a new class, ConfigurationManager. This presents its own challenges, which I've blogged about in the past if you are wondering where ConfigurationManager is located. However, the above error is separate.

    The issue in this case is that the NameValueSectionHandler is still in the System assembly, but lives in the System.Configuration namespace. This causes confusion, which can be alleviated by using the following section definition:

        <section name="CacheSettings" type="System.Configuration.NameValueSectionHandler, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />

    With this in place, your web application should once more be able to load the NameValueSectionHandler. I do recommend using your own custom configuration section handlers instead of appSettings, and I would further suggest that you not use NameValueSectionHandler if you can avoid it, but instead prefer a strongly typed configuration section handler.


  • Is there a Windows equivalent of Unix 'CPU steal time'?

    - by Steffen Opel
    In order to assess performance monitoring accuracy on virtualization platforms, CPU steal time has become an increasingly relevant metric - see EC2 monitoring: the case of stolen CPU for an instructive summary in the context of Amazon EC2, and IBM's paper on CPU time accounting for a more in-depth technical explanation (including illustrations) of the concept:

    "Steal time is the percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor."

    Accordingly, it is exposed in most related Unix/Linux monitoring tools nowadays - see e.g. the %steal and st columns in sar and top:

    "st -- Steal Time. The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine)."

    I've been unable to figure out how to capture the same metric on Windows, though. Is this possible already? (Ideally for the Windows Server 2008 R2 AMIs on EC2, and via a respective Windows performance counter, of course.)


  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends:

    "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger."

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me:

    "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: What is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes.

    Akamai's response:

    "The reasons 860 bytes is the minimum size for compression are twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them."

    So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this down to the 150-byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
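
    For comparison, this threshold is an explicit knob on origin servers too; a minimal nginx sketch (illustrative values, not the asker's actual config):

        # Compress common text responses, but skip anything below ~860 bytes
        gzip            on;
        gzip_min_length 860;
        gzip_comp_level 5;
        gzip_types      text/plain text/css application/json application/javascript;

    Lowering gzip_min_length toward 150 bytes is precisely where the Google and Akamai recommendations diverge.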


  • JMS Adapter Step 0: Configuring the WLS-JMS resources

    - by [email protected]
    Before getting started with the JMS Adapter, we must configure the connection factories/JMS queues on the WLS admin console. In particular, we will be required to follow these steps:

    1. Create a connection factory. In our case, we will create an "XA Connection Factory". This step is mandatory if you need your JMS queues to participate in a global transaction.
    2. Create the WLS JMS queues.

    Creating the connection factory:

    1) Login to the WLS admin console. On my setup, the URL looks like "http://localhost:7001/console".
    2) Select Services -> Messaging -> JMS Modules -> SOAJMSModule as shown below. We can also create a new JMS module, but I took the easier way out by selecting the SOAJMSModule.
    3) Click on "New" as shown in order to create the connection factory.
    4) Select the "Connection Factory" radio button and click "Next".
    5) Enter the connection factory properties as shown and click on "Finish".
    6) Target the connection factory to your managed server and click on "Finish".
    7) Now, go back and select the connection factory that you've just created (see step 2 above). Click on "Transactions", enable XA, and click on "Save".
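
    Once the factory exists, client code (the JMS Adapter does the equivalent internally) can look it up over JNDI. A hedged sketch in Java - the JNDI name and URL are hypothetical, and the WebLogic client library must be on the classpath:

        import java.util.Hashtable;
        import javax.jms.XAConnectionFactory;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class LookupXACF {
            public static void main(String[] args) throws Exception {
                // Connection details for the WLS server configured above (hypothetical)
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001");

                Context ctx = new InitialContext(env);
                // "jms/demoXACF" stands in for whatever JNDI name was entered in step 5
                XAConnectionFactory cf =
                        (XAConnectionFactory) ctx.lookup("jms/demoXACF");
                System.out.println("Looked up: " + cf);
            }
        }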

